About me

I'm an artificial intelligence and computational neuroscience researcher interested in closing the gap between deep learning and the human mind. Current deep neural networks excel at perceptual tasks, but have yet to break through in the kinds of problems that humans solve through conscious, effortful thought. I aim to understand this more elusive mode of information processing so that we can mechanize it and build true, general artificial intelligence.

Currently, I'm a PhD student at Mila supervised by professors Guillaume Lajoie and Yoshua Bengio, where my research focuses on the development of AI models with inductive biases from consciousness and high-level human cognition. In particular, I study how compositional representations can model the processes involved in conscious human thought, and help address problems such as out-of-distribution generalization. I'm also a part-time student researcher at Google on the Paradigms of Intelligence team, where I work on unifying action and prediction in large generative sequence models.

Previously, I was a research assistant in the Department of Cognitive Science at Johns Hopkins University, where I worked with my advisor Mick Bonner to better understand human vision. Specifically, the group seeks to reverse engineer the representations and algorithms of human visual cognition through computational modeling, neuroimaging, and behavioral experiments. This work relies heavily on deep artificial neural networks as theoretical models of information processing in the human brain, and broadly makes use of large-scale computational methods to characterize visual perception.

Outside of research, I'm also very passionate about teaching. I work as a Data Science Instructor at Lighthouse Labs, where I design and teach my own lectures on various topics in machine learning to students enrolled in an intensive coding bootcamp. The students come from all types of academic and professional backgrounds, which makes for a unique and exciting learning dynamic!

Education

I am currently pursuing a PhD at Mila under the supervision of professors Guillaume Lajoie and Yoshua Bengio. My research focuses on the development of AI models with inductive biases from consciousness and high-level human cognition.

I did research into the computational basis of human vision with my advisor, Mick Bonner. Most of my work involved using deep neural networks (DNNs) as computational models, in addition to collecting and analyzing fMRI and behavioral data. My projects included:

  • Modeling the dynamics and representations of human scene understanding during exploration using viewpoint-invariant generative DNNs.
  • Synthesizing stimuli to selectively control neural activity in different brain regions. We did this mainly by applying gradient-based optimization to DNN neural encoding models (see the sketch after this list).
  • Evaluating multiplicative feature interactions as a canonical nonlinear computation in visual cortex. We found that multiplicative interactions between DNN features produced representations that could explain significantly more neural activity than the original features.
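
The stimulus-synthesis project above boils down to activation maximization through a neural encoding model: optimize an image so that a linear readout on DNN features predicts a high response in a target region. Below is a minimal, hypothetical sketch of that idea in PyTorch; the ResNet backbone, the random placeholder encoding weights, and the hyperparameters are illustrative assumptions rather than the actual research code.

```python
# Minimal sketch of gradient-based stimulus synthesis against a DNN encoding model.
# Assumptions: a pretrained ResNet-18 backbone as the feature extractor and a random
# placeholder linear readout standing in for an encoding model fit to fMRI data.
import torch
import torchvision.models as models

backbone = models.resnet18(weights="IMAGENET1K_V1").eval()
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1])  # drop the classifier head

encoding_weights = torch.randn(512)  # placeholder readout for one target region

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # the stimulus being optimized
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    features = feature_extractor(image).flatten(1).squeeze(0)  # (512,) feature vector
    predicted_response = features @ encoding_weights           # predicted regional activity
    (-predicted_response).backward()                           # gradient ascent on the prediction
    optimizer.step()
    with torch.no_grad():
        image.clamp_(0.0, 1.0)  # keep pixels in a valid range
```

In practice, the placeholder readout would be replaced by a regression model fit to measured neural responses, typically with additional regularizers or image priors on the optimized stimulus.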

My curriculum broadly focused on computer software. I also took a number of machine learning courses in my final year, after completing a 1-year computer vision internship at ModiFace.

I did my final year capstone project with Jonathan Rose. My team and I designed an application for human memory augmentation, where your phone could passively listen to all of the speech in your surroundings, transcribe it, and make it permanently available for future reference using Google Search-style queries. The idea was that the application would make it easy to retrieve the contents of any past conversation you've ever had. We were partially inspired by an episode of Black Mirror, but our hope was that the outcome would be a bit less dystopian.

I went to CEGEP in Quebec, which is a 2-year college program between high school and university.

For my final project, I constructed a cloud chamber to observe cosmic radiation, partially replicating the experiments that first discovered antimatter and various subatomic particles.

Experience

AI Student Researcher 2024-present
Google - Paradigms of Intelligence team

Unifying action and prediction in large generative sequence models, supervised by João Sacramento.

Data Science Instructor 2020-present
Lighthouse Labs

  • Design and teach lectures on topics in machine learning to students enrolled in an intensive coding bootcamp.
  • Provide mentorship to students for course material, projects, and career development.

Computational Neuroscience TA 2021
Neuromatch Academy

Led groups of students through tutorial exercises and explained concepts in computational neuroscience.

Machine Learning Researcher 2017-2019
ModiFace

  • Developed computer vision machine learning models for augmented reality in the beauty industry.
  • Wrote research papers on makeup rendering and skin condition diagnostics using deep learning.

Computer Vision Contractor 2018
Precious

  • Developed computer vision machine learning models related to facial perception for a mobile app that automatically creates curated baby photo albums for new parents.
  • Deployed models to iOS devices so that they run efficiently on mobile hardware.

Software Developer Intern 2016
Orbis Investments

Full-stack web development using AngularJS, Angular Material, ASP.NET MVC, Web API, and SQL Server to improve internal workflow efficiency for financial reporting.

Research

Publications

For the most up-to-date list of publications, see my Google Scholar profile.

2024 A Complexity-Based Theory of Compositionality. Eric Elmoznino, Thomas Jiralerspong, Yoshua Bengio, Guillaume Lajoie Preprint
2024 In-context learning and Occam's razor. Eric Elmoznino, Tom Marty, Tejas Kasetty, Leo Gagnon, Sarthak Mittal, Mahan Fathi, Dhanya Sridhar, Guillaume Lajoie Preprint
2024 Multi-agent cooperation through learning-aware policy gradients. Alexander Meulemans, Seijin Kobayashi, Johannes von Oswald, Nino Scherrer, Eric Elmoznino, Blake Richards, Guillaume Lajoie, Blaise Aguera y Arcas, João Sacramento Preprint
2024 Amortizing intractable inference in large language models. Edward J. Hu, Moksh Jain, Eric Elmoznino, Younesse Kaddar, Guillaume Lajoie, Yoshua Bengio, Nikolay Malkin ICLR talk — best paper honorable mention
2024 Sources of Richness and Ineffability for Phenomenally Conscious States. Eric Elmoznino, Xu Ji, George Deane, Axel Constant, Guillaume Dumas, Guillaume Lajoie, Jonathan Simon, Yoshua Bengio Neuroscience of Consciousness
2024 High-performing neural network models of visual cortex benefit from high latent dimensionality. Eric Elmoznino, Michael F. Bonner PLOS Computational Biology
2024 Does learning the right latent variables necessarily improve in-context learning? Eric Elmoznino, Sarthak Mittal, Leo Gagnon, Sangnie Bhardwaj, Dhanya Sridhar, Guillaume Lajoie ICLR Workshop poster
2024 Convolutional architectures are cortex-aligned de novo. Atlas Kazemian, Eric Elmoznino, Michael F. Bonner Preprint
2023 Discrete, compositional, and symbolic representations through attractor dynamics. Andrew Nam, Eric Elmoznino, Nikolay Malkin, Chen Sun, Yoshua Bengio, Guillaume Lajoie NeurIPS Workshop talk
2023 Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. Patrick Butlin, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, Stephen M. Fleming, Chris Frith, Xu Ji, Ryota Kanai, Colin Klein, Grace Lindsay, Matthias Michel, Liad Mudrik, Megan A. K. Peters, Eric Schwitzgebel, Jonathan Simon, Rufin VanRullen Preprint
2023 Scene context is predictive of unconstrained object similarity judgments. Caterina Magri, Eric Elmoznino, Michael F. Bonner Cognition
2023 Learning Macro Variables with Auto-encoders. Maitreyi Swaroop, Eric Elmoznino, Dhanya Sridhar NeurIPS Workshop poster
2020 Visual representations derived from multiplicative interactions. Eric Elmoznino, Michael F. Bonner NeurIPS Workshop poster
2019 A new procedure, free from human assessment that automatically grades some facial skin structural signs. Comparison with assessments by experts, using referential atlases of skin ageing. Jiang R., Kezele I., Levinshtein A., Flament F., Zhang J., Elmoznino E., Ma J., Ma J., Coquide J., Arcin V., Omoyuri E., Aarabi P. International Journal of Cosmetic Science

Invited Talks & Podcasts

2024 Why can't we describe our conscious experiences? An information theoretic attractor dynamics perspective of ineffability — Models of Consciousness conference
2024 Consciousness, ineffability, and AI safety — Mila AI Safety Reading Group
2023 Sampling discrete objects through continuous attractor dynamics — Mila GFlowNet Reading Group
2023 Why can't we describe our conscious experiences? An information theoretic attractor dynamics perspective of ineffability — Computational Phenomenology Group
2023 Why can't we describe our conscious experiences? An information theoretic attractor dynamics perspective of ineffability — Active Inference Institute podcast
2023 Why can't we describe our conscious experiences? An attractor dynamics perspective of the ineffability of qualia — University of Toronto guest lecture
2020 How does the brain work? Cognitive science research — Sabes
2020 Introduction to Programming with Python — UofTHacks

Patents

2022 System and method for image processing using deep neural networks. Levinshtein A., Chang C., Phung E., Kezele I., Guo W., Elmoznino E., Jiang R., Aarabi P. U.S. Patent No. 11216988
2021 Image-to-image translation using unpaired data for supervised learning. Elmoznino E., Kezele I., Aarabi P. U.S. Patent Application No. 17096774
2020 System and method for augmented reality using conditional cycle-consistent generative image-to-image translation models. Elmoznino E., Ma H., Kezele I., Phung E., Levinshtein A., Aarabi P. U.S. Patent Application No. 16683398
2020 Machine image colour extraction and machine image construction using an extracted colour. Elmoznino E., Aarabi P., Zhang Y. U.S. Patent Application No. 16854975
2020 Automatic image-based skin diagnostics using deep learning. Jiang R., Ma J., Ma H., Elmoznino E., Kezele I., Levinshtein A., Charbit J., Despois J., Perrot M., Antoinin F., Flament R.S., Parham A. U.S. Patent Application No. 16702895

Contact

Mila - Quebec AI Institute
6666 St-Urbain, #200
Montreal, QC, H2S 3H1