About me

I'm an artificial intelligence and computational neuroscience researcher interested in closing the gap between deep learning and the human mind. Current deep neural networks excel at perceptual tasks, but have yet to break through in the kinds of problems that humans solve through conscious, effortful thought. I aim to understand this more elusive mode of information processing so that we can mechanize it and build true, general artificial intelligence.

Currently, I'm a PhD student at Mila supervised by professors Guillaume Lajoie and Yoshua Bengio, where my research focuses on the development of AI models with inductive biases from consciousness and high-level human cognition. In particular, I study how the dynamic recomposition of neural modules can model the processes involved in conscious human thought, and help address problems such as out-of-distribution generalization.

Previously, I was a research assistant in the Department of Cognitive Science at Johns Hopkins University, where I worked with my advisor Mick Bonner to better understand human vision. Specifically, the group seeks to reverse-engineer the representations and algorithms of human visual cognition through computational modeling, neuroimaging, and behavioral experiments. This work relies heavily on deep artificial neural networks as theoretical models of information processing in the human brain, and makes broad use of large-scale computational methods to characterize visual perception.

Outside of research, I'm also very passionate about teaching. I work as a Data Science Instructor at Lighthouse Labs, where I design and teach my own lectures on various topics in machine learning to students enrolled in an intensive coding bootcamp. The students come from all types of academic and professional backgrounds, which makes for a unique and exciting learning dynamic!

Education

I am currently pursuing a PhD under the supervision of professors Guillaume Lajoie and Yoshua Bengio, where my research focuses on the development of AI models with inductive biases from consciousness and high-level human cognition.

I did research on the computational basis of human vision with my advisor, Mick Bonner. Most of my work involved using deep neural networks (DNNs) as computational models, in addition to collecting and analyzing fMRI and behavioral data. My projects included:

  • Modeling the dynamics and representations of human scene understanding during exploration using viewpoint-invariant generative DNNs.
  • Synthesizing stimuli that selectively control neural activity in different brain regions, mainly by applying gradient-based optimization methods to DNN neural encoding models (see the sketch after this list).
  • Evaluating multiplicative feature interactions as a canonical nonlinear computation in visual cortex. We found that multiplicative interactions between DNN features produced representations that could explain significantly more neural activity than the original features.
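
To give a sense of the stimulus-synthesis approach, here is a minimal sketch in PyTorch. It is purely illustrative and rests on assumptions that are not part of the project itself: an AlexNet backbone, a hypothetical linear readout (readout_weights) standing in for an encoding model fit to fMRI data, and simple pixel-space gradient ascent.

    # Minimal, hypothetical sketch of gradient-based stimulus synthesis against a
    # DNN neural encoding model (illustrative only; not the actual research code).
    import torch
    import torchvision.models as models

    # Frozen DNN feature extractor (AlexNet convolutional layers as an example backbone).
    backbone = models.alexnet(weights=None).features.eval()
    for p in backbone.parameters():
        p.requires_grad_(False)

    # Hypothetical linear readout weights; in practice these would be fit to fMRI
    # responses with regularized regression.
    with torch.no_grad():
        n_features = backbone(torch.zeros(1, 3, 224, 224)).flatten(1).shape[1]
    readout_weights = torch.randn(n_features, 1) / n_features ** 0.5

    def predicted_response(image):
        """Encoding model: predicted response of a target brain region."""
        feats = backbone(image).flatten(1)              # [1, n_features]
        return (feats @ readout_weights).squeeze()      # scalar predicted response

    # Start from noise and take gradient steps on the pixels to drive the
    # predicted response of the modeled region as high as possible.
    stimulus = torch.rand(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([stimulus], lr=0.05)
    for step in range(200):
        optimizer.zero_grad()
        loss = -predicted_response(stimulus.clamp(0, 1))  # maximize the response
        loss.backward()
        optimizer.step()

Because the encoding model is differentiable end to end, the predicted response of a brain region can be pushed up or down directly with respect to the input image, which is what makes this kind of gradient-based stimulus control possible.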

My curriculum was broadly focused on computer software. I also took a number of machine learning courses in my final year, after completing a 1-year computer vision internship at ModiFace.

I did my final year capstone project with Jonathan Rose. My team and I designed an application for human memory augmentation, where your phone could passively listen to all of the speech in your surroundings, transcribe it, and make it permanently available for future reference using Google Search-style queries. The idea was that the application would make it easy to retrieve the contents of any past conversation you've ever had. We were partially inspired by an episode of Black Mirror, but our hope was that the outcome would be a bit less dystopian.

I went to CEGEP in Quebec, which is a 2-year college program between high school and university.

For my final project, I constructed a cloud chamber to observe cosmic radiation, partially replicating the experiments that first discovered antimatter and various subatomic particles.

Experience

Data Science Instructor 2020-present
Lighthouse Labs

  • Design and teach lectures on topics in machine learning to students enrolled in an intensive coding bootcamp.
  • Provide mentorship to students for course material, projects, and career development.

Computational Neuroscience TA 2021
Neuromatch Academy

  • Led groups of students through tutorial exercises and explained concepts in computational neuroscience.

Machine Learning Researcher 2017-2019
ModiFace

  • Developed computer vision machine learning models for augmented reality in the beauty industry.
  • Wrote research papers on makeup rendering and skin condition diagnostics using deep learning.

Computer Vision Contractor 2018
Precious

  • Developed computer vision machine learning models related to facial perception for a mobile app that automatically creates curated baby photo albums for new parents.
  • Deployed models to iOS devices so that they run efficiently on mobile hardware.

Software Developer Intern 2016
Orbis Investments

  • Full-stack web development using AngularJS, Angular Material, ASP.NET MVC, Web API, and SQL Server to improve internal workflow efficiency for financial reporting.

Research

Publications

For the most up-to-date list of publications, see my Google Scholar profile.

2023 Consciousness in Artificial Intelligence: Insights from the Science of Consciousness. Patrick Butlin, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, Stephen M. Fleming, Chris Frith, Xu Ji, Ryota Kanai, Colin Klein, Grace Lindsay, Matthias Michel, Liad Mudrik, Megan A. K. Peters, Eric Schwitzgebel, Jonathan Simon, Rufin VanRullen. Preprint
2023 Sources of Richness and Ineffability for Phenomenally Conscious States. Eric Elmoznino, Xu Ji, George Deane, Axel Constant, Guillaume Dumas, Guillaume Lajoie, Jonathan Simon, Yoshua Bengio. Preprint
2023 Scene context is predictive of unconstrained object similarity judgments. Caterina Magri, Eric Elmoznino, Michael F. Bonner
2022 High-performing neural network models of visual cortex benefit from high latent dimensionality. Elmoznino E., & Bonner M. F. Preprint
2020 Visual representations derived from multiplicative interactions. Elmoznino E., & Bonner M. F. NeurIPS Workshop SVRHM
2019 A new procedure, free from human assessment that automatically grades some facial skin structural signs. Comparison with assessments by experts, using referential atlases of skin ageing. Jiang R., Kezele I., Levinshtein A., Flament F., Zhang J., Elmoznino E., Ma J., Ma J., Coquide J., Arcin V., Omoyuri E., Aarabi P. International Journal of Cosmetic Science

Conference Talks & Posters

2022 (Talk) Montreal AI Symposium. Elmoznino E., & Bonner M. F. Montreal AI Symposium
2022 (Poster) High-performing neural network models of visual cortex benefit from high latent dimensionality. Elmoznino E., & Bonner M. F. Cognitive Computational Neuroscience
2022 (Talk) Latent dimensionality scales with the performance of deep learning models of visual cortex. Elmoznino E., & Bonner M. F. Vision Sciences Society
2021 (Talk) Model dimensionality scales with the performance of deep learning models for biological vision. Elmoznino E., & Bonner M. F. Neuromatch 4.0
2021 (Poster) High-performing computational models of visual cortex are marked by high effective dimensionality. Elmoznino E., & Bonner M. F. Vision Sciences Society

Invited Talks

2023 Why can't we describe our conscious experiences? An information theoretic attractor dynamics perspective of ineffability — Computational Phenomenology Group
2023 Why can't we describe our conscious experiences? An information theoretic attractor dynamics perspective of ineffability — Active Inference Institute podcast
2023 Why can't we describe our conscious experiences? An attractor dynamics perspective of the ineffability of qualia — University of Toronto guest lecture
2020 How does the brain work? Cognitive science research — Sabes
2020 Introduction to Programming with Python — UofTHacks

Patents

2022 System and method for image processing using deep neural networks. Levinshtein A., Chang C., Phung E., Kezele I., Guo W., Elmoznino E., Jiang R., Aarabi P. U.S. Patent No. 11216988
2021 Image-to-image translation using unpaired data for supervised learning. Elmoznino E., Kezele I., Aarabi P. U.S. Patent Application No. 17096774
2020 System and method for augmented reality using conditional cycle-consistent generative image-to-image translation models. Elmoznino E., Ma H., Kezele I., Phung E., Levinshtein A., Aarabi P. U.S. Patent Application No. 16683398
2020 Machine image colour extraction and machine image construction using an extracted colour. Elmoznino E., Aarabi P., Zhang Y. U.S. Patent Application No. 16854975
2020 Automatic image-based skin diagnostics using deep learning. Jiang R., Ma J., Ma H., Elmoznino E., Kezele I., Levinshtein A., Charbit J., Despois J., Perrot M., Antoinin F., Flament R.S., Parham A. U.S. Patent Application No. 16702895

Contact

Mila - Quebec AI Institute
6666 St-Urbain, #200
Montreal, QC, H2S 3H1