Discovering Interesting Papers

Interesting NeuroAI, LLM cognition, and miscellaneous papers

We all know the feeling of discovering a very interesting paper and never finding the time to read it. Maybe you just throw it into Zotero, where it sits forever. Perhaps posting the interesting/useful papers here will give me some more motivation to revisit them later? Let’s try.

1.29.2024

Neural tuning and representational geometry, Nature Reviews Neuroscience, 2021, Nikolaus Kriegeskorte & Xue-Xin Wei

1.30.2024

Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings, Nature Machine Intelligence, 2023, Jascha Achterberg et al.

2.1.2024

Transformer as a hippocampal memory consolidation model based on NMDAR-inspired nonlinearity, NeurIPS, 2023

2.5.2024

Brains and algorithms partially converge in natural language processing, Communications Biology, 2022

2.7.2024

No Coincidence, George: Capacity-Limits as the Curse of Compositionality, PsyArXiv, 2022

2.12.2024

Structural constraints on the emergence of oscillations in multi-population neural networks, eLife, 2024

Oscillatory neural networks, YouTube

2.14.2024

Dynamics of Sparsely Connected Networks of Excitatory and Inhibitory Spiking Neurons

2.16.2024

Using large language models to study human memory for meaningful narratives

Mechanisms of Gamma Oscillations

2.17.2024

A call for embodied AI

2.18.2024

Circular and unified analysis in network neuroscience

2.20-2.27.2024

I was at AAAI 2024 for nearly a week. I learned a lot and will share some papers I came across in talks and posters at the conference.

On the Paradox of Learning to Reason from Data

CRAB: Assessing the Strength of Causal Relationships Between Real-World Events

Passive learning of active causal strategies in agents and language models

SPARTQA: A Textual Question Answering Benchmark for Spatial Reasoning

Hallucination is Inevitable: An Innate Limitation of Large Language Models

Direct Preference Optimization: Your Language Model is Secretly a Reward Model

3.1.2024

Three aspects of representation in neuroscience

Redefining "Hallucination" in LLMs: Towards a psychology-informed framework for mitigating misinformation

Distributed representations of words and phrases and their compositionality

3.2.2024

Neural Turing Machines

A Critical Review of Causal Reasoning Benchmarks for Large Language Models

3.3.2024

Recurrent Models of Visual Attention

Massive Activations in Large Language Models

Multiple Object Recognition with Visual Attention

Attention is not all you need anymore

The Annotated Transformer

Attention and Memory in Deep Learning

3.7.2024

Large language models surpass human experts in predicting neuroscience results

3.8.2024

Encoding and decoding in fMRI

My favorite math jokes

3.9.2024

Memory in humans and deep language models: Linking hypotheses for model augmentation

3.11.2024

Are Emergent Abilities of Large Language Models a Mirage?

Mathematical introduction to deep learning

3.12.2024

Memory and attention in deep learning

World Models and Predictive Coding for Cognitive and Developmental Robotics: Frontiers and Challenges

Mastering Memory Tasks with World Models

Mechanism for feature learning in neural networks and backpropagation-free machine learning models

3.13.2024

Brain-inspired intelligent robotics: The intersection of robotics and neuroscience

Papers mentioned in the article above

3.14.2024

One model for the learning of language

3.15.2024

The pitfalls of next-token prediction

3.16.2024

Do Llamas Work in English? On the Latent Language of Multilingual Transformers

Using large language models to study human memory for meaningful narratives

3.18.2024

Neuroscience needs behavior

3.23.2024

Traveling waves shape neural population dynamics enabling predictions and internal model updating

Task interference as a neuronal basis for the cost of cognitive flexibility

A Technical Critique of Some Parts of the Free Energy Principle

3.24.2024

Theories of Error Back-Propagation in the Brain

Neurosymbolic AI

3.26.2024

Spatially embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings

Traveling waves shape neural population dynamics enabling predictions and internal model updating

3.27.2024

Reconstructing computational system dynamics from neural data with recurrent neural networks

3.29.2024

A useful guide to pronouncing common math symbols

3.30.2024

A Review of Neuroscience-Inspired Machine Learning

3.31.2024

Collective intelligence: A unifying concept for integrating biology across scales and substrates

4.3.2024

An Introduction to Model-Based Cognitive Neuroscience

What does it mean to understand a neural network?

What is a GPT, by 3Blue1Brown

4.5.2024

Nonmonotonic Plasticity: How Memory Retrieval Drives Learning

Single Cortical Neurons as Deep Artificial Neural Networks

4.17.2024

The brain's unique take on algorithms

Cognition is an emergent property

4.18.2024

Catalyzing next-generation Artificial Intelligence through NeuroAI

4.19.2024

Toward a formal theory for computing machines made out of whatever physics offers

Natural and Artificial Intelligence: A brief introduction to the interplay between AI and neuroscience research

4.22.2024

Time, Love, Memory

Thinking About Science

Reasoning ability is (little more than) working-memory capacity?!

What Is Life?, Wikipedia

How do Large Language Models Handle Multilingualism?

4.24.2024

Empowering Working Memory for Large Language Model Agents

4.26.2024

Context-dependent computation by recurrent dynamics in prefrontal cortex

Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation

4.29.2024

Concurrent maintenance of both veridical and transformed working memory representations within unique coding schemes

5.1.2024

A formal model of capacity limits in working memory

The Thermodynamics of Mind, Trends in Cognitive Sciences