Module 2 Research Update

MITCBMM

Moderator: Gabriel Kreiman
Speakers: Mengmi Zhang, Jie Zheng, and Will Xiao (Module 2)
Abstracts:
Speaker: Mengmi Zhang
Title: The combination of eccentricity, bottom-up, and top-down cues explains conjunction and asymmetric visual search
Abstract: Visual search requires complex interactions between visual processing, eye movements, object recognition, memory, and decision making. Elegant psychophysics experiments have described the task characteristics and stimulus properties that facilitate or slow down visual search behavior. En route towards a quantitative framework that accounts for the mechanisms orchestrating visual search, here we propose an image-computable, biologically inspired computational model that takes a target and a search image as inputs and produces a sequence of eye movements. To compare the model against human behavior, we consider nine foundational experiments that demonstrate two intriguing principles of visual search: (i) asymmetric search costs when looking for a certain object A among distractors B versus the reverse situation of locating B among distractors A; (ii) the increase in search costs associated with feature conjunctions. The proposed computational model has three main components: an eccentricity-dependent visual feature processor learned from natural image statistics, bottom-up saliency, and target-dependent top-down cues. Without any prior exposure to visual search stimuli or any task-specific training, the model demonstrates the essential properties of search asymmetries and slower reaction times in feature conjunction tasks. Furthermore, the model generalizes to real-world search tasks in complex natural environments. The proposed model unifies previous theoretical frameworks into an image-computable architecture that can be directly and quantitatively compared against psychophysics experiments and provides a mechanistic basis that can be evaluated in terms of the underlying neuronal circuits.
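[Editor's note: a minimal sketch of how the three components named in the abstract could be combined into a single priority map that selects the next fixation. All function names, the two-scale eccentricity pooling, and the 0.7/0.3 weighting are illustrative assumptions, not the authors' implementation.]

import numpy as np

def gaussian_blur(img, sigma):
    # Fourier-domain Gaussian blur, chosen here only for brevity.
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    kernel = np.exp(-2 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * kernel))

def eccentricity_pool(feat, fixation, base_sigma=1.0, slope=0.05):
    # Blur features more at larger eccentricity, coarsely emulating the
    # loss of acuity away from the current fixation (the "fovea").
    h, w = feat.shape
    yy, xx = np.mgrid[0:h, 0:w]
    ecc = np.hypot(yy - fixation[0], xx - fixation[1])
    # Two-scale approximation: blend fine and coarse blurs by eccentricity.
    fine = gaussian_blur(feat, base_sigma)
    coarse = gaussian_blur(feat, base_sigma + slope * ecc.max())
    alpha = ecc / ecc.max()
    return (1 - alpha) * fine + alpha * coarse

def next_fixation(search_feat, target_feat, fixation, w_td=0.7, w_bu=0.3):
    pooled = eccentricity_pool(search_feat, fixation)
    # Top-down cue: similarity of local features to the target template.
    top_down = -np.abs(pooled - target_feat.mean())
    # Bottom-up saliency: local deviation from a blurred surround.
    bottom_up = np.abs(pooled - gaussian_blur(pooled, 8.0))
    priority = w_td * top_down + w_bu * bottom_up
    return np.unravel_index(np.argmax(priority), priority.shape)

[A full model would iterate this step, adding inhibition of return at previously visited locations, until the target is fixated; the abstract's reaction-time predictions would then follow from the number of fixations required.]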
Speaker: Jie Zheng
Title: Neurons detect cognitive boundaries to structure episodic memories in humans
Abstract: While experience is continuous, memories are organized as discrete events. Cognitive boundaries are thought to segment experience and structure memory, but how this process is implemented remains unclear. We recorded the activity of single neurons in the human medial temporal lobe during the formation and retrieval of memories with complex narratives. Neurons responded to abstract cognitive boundaries between different episodes. Boundary-induced neural state changes during encoding predicted subsequent recognition accuracy but impaired event order memory, mirroring a fundamental behavioral tradeoff between content and time memory. Furthermore, the neural state following boundaries was reinstated during both successful retrieval and false memories. These findings reveal a neuronal substrate for detecting cognitive boundaries that transforms experience into mnemonic episodes and structures mental time travel during retrieval.
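[Editor's note: a rough illustration of the kind of measure the abstract refers to. One could quantify a boundary-induced state change as the distance between pre- and post-boundary population firing-rate vectors. This is a hedged sketch, not the authors' analysis; the binning, window length, and cosine distance are all assumptions.]

import numpy as np

def boundary_state_change(rates, boundary_bin, window=10):
    # rates: (n_neurons, n_time_bins) binned firing rates for one clip.
    pre = rates[:, boundary_bin - window:boundary_bin].mean(axis=1)
    post = rates[:, boundary_bin:boundary_bin + window].mean(axis=1)
    # Cosine distance between pre- and post-boundary population vectors;
    # larger values indicate a bigger boundary-induced state change.
    denom = np.linalg.norm(pre) * np.linalg.norm(post)
    return 1.0 - float(pre @ post) / denom if denom > 0 else np.nan

[Per the abstract, such a per-boundary index computed during encoding could then be related to subsequent recognition accuracy and to errors in event order memory.]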
Speaker: Will Xiao
Title: Adversarial images for the primate brain
Abstract: Deep artificial neural networks have been proposed as a model of primate vision. However, these networks are vulnerable to adversarial attacks, whereby introducing minimal noise can fool networks into misclassifying images. Primate vision is thought to be robust to such adversarial images. We evaluated this assumption by designing adversarial images to fool primate vision. To do so, we first trained a model to predict responses of face-selective neurons in macaque inferior temporal cortex. Next, we modified images, such as human faces, to match their model-predicted neuronal responses to a target category, such as monkey faces, with a small budget for pixel value change. These adversarial images elicited neuronal responses similar to the target category. Remarkably, the same images fooled monkeys and humans at the behavioral level. These results call for closer inspection of the adversarial sensitivity of primate vision, and show that a model of visual neuron activity can be used to specifically direct primate behavior.
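[Editor's note: the attack described in the abstract can be approximated by generic projected gradient descent against a differentiable response model. In this sketch, response_model (an image-to-predicted-neuronal-responses network) and target_responses are hypothetical placeholders; the budget, step size, and MSE objective are assumptions rather than the authors' exact recipe.]

import torch

def adversarial_image(response_model, image, target_responses,
                      budget=8 / 255, step=1 / 255, iters=40):
    image = image.clone().detach()
    adv = image.clone().detach().requires_grad_(True)
    for _ in range(iters):
        pred = response_model(adv)
        # Drive the model-predicted responses toward the target category's.
        loss = torch.nn.functional.mse_loss(pred, target_responses)
        loss.backward()
        with torch.no_grad():
            adv -= step * adv.grad.sign()
            # Project back into the small pixel-change budget and valid range.
            adv.clamp_(min=image - budget, max=image + budget)
            adv.clamp_(0.0, 1.0)
        adv.grad = None
    return adv.detach()

[The small budget keeps the perturbation visually minimal while steering the predicted neuronal responses, mirroring the abstract's "small budget for pixel value change".]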

Comments: 2
@bymyself1208 3 years ago
The question of whether the orientation preference is inherited from the feature asymmetry in training images is spot-on. I am assuming that vertical lighting conditions are more common in natural images.
@vishwasnarayan 3 years ago
Awesome, I was waiting for this video.