Past Meetings of Interest to SIGART members


Language and the Human Brain:
The Classical Models and New Perspectives

Dr. Ayse Pinar Saygin

Institute of Cognitive Neuroscience & Wellcome Trust Functional Imaging Laboratory
University College London


Abstract: Opinions regarding the fundamental character of the neural organisation of language functions have been a matter of debate since the earliest days of neurology. The purpose of this talk is to present the "classical" theories of language organisation in the human brain, in particular of language disorders following brain injury (aphasia). According to these theories, specific regions in the human brain subserve specific language processes, and damage to these regions will result in certain behavioural and mental deficits. We will evaluate the predictions of these theories by studying the behavioural profiles of individual clinical cases and lesion-symptom relationships in groups of patients, as well as by taking a brief look at neuroimaging studies of healthy control subjects. In the light of this evidence (a large portion of which has become available only in the last decade, owing to advances in imaging technology), we find that while the basic syndromes defined by the classical theories of aphasia remain useful, there is little that is "textbook" about the neural correlates of the disorders. Instead, language is a complex and rather distributed system that shares behavioural and neural resources with skills and processes from (evolutionarily earlier) non-linguistic domains.
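
To make the group-level method mentioned above more concrete, here is a minimal Python sketch of a voxel-based lesion-symptom analysis. It is only an illustration: the talk does not specify any particular analysis, and the data, group sizes, and thresholds below are hypothetical. At each voxel, patients whose lesions include that voxel are compared on a behavioural score against patients whose lesions spare it.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical data: 40 patients, binary lesion maps over 1000 voxels,
    # and one behavioural score per patient (e.g. naming accuracy).
    n_patients, n_voxels = 40, 1000
    lesioned = rng.random((n_patients, n_voxels)) < 0.2  # True = voxel damaged
    scores = rng.normal(70.0, 10.0, n_patients)

    t_map = np.full(n_voxels, np.nan)
    for v in range(n_voxels):
        hit = scores[lesioned[:, v]]       # patients with damage at voxel v
        spared = scores[~lesioned[:, v]]   # patients without damage there
        if hit.size >= 5 and spared.size >= 5:   # skip rarely lesioned voxels
            # A strongly negative t means damage here goes with worse scores.
            t_map[v] = stats.ttest_ind(hit, spared, equal_var=False).statistic

    # A real analysis would also correct for multiple comparisons across voxels.
    print("most implicated voxel:", np.nanargmin(t_map), "t =", np.nanmin(t_map))

With random data, as here, no voxel should show a genuine effect; the point is only the per-voxel group comparison that underlies lesion-symptom mapping.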

Speaker: Ayse Pinar Saygin is a Marie Curie research fellow at the Institute of Cognitive Neuroscience and the Wellcome Trust Functional Imaging Laboratory at University College London. She is a cognitive neuroscientist interested in how humans perceive, attend to, and understand objects and events in the world. She received her PhD in Cognitive Science from the University of California, San Diego (with Dr. Marty Sereno and Dr. Elizabeth Bates), her M.S. in Computer Science from Bilkent University, and her B.S. in Mathematics from Middle East Technical University. She primarily uses functional neuroimaging and neuropsychological methods, as well as psychophysical methods. Her current projects focus on neuroimaging and transcranial magnetic stimulation (TMS) studies of attention, awareness, cross-modal binding, and motion perception. She also has a grant from the Kavli Institute on Mind and Brain to study how humans perceive and react to humanoid robots and androids. In addition to research, Dr. Saygin is actively involved in teaching, mentoring, scientific writing and communication (http://cogsci-online.ucsd.edu), science advocacy in society, and supporting women in science and engineering. She is dedicated to interdisciplinary research and international collaboration, and works with colleagues in the United States, England, Italy, Austria, Germany, Australia, and Japan. For more information on Dr. Saygin's work, see http://www.fil.ion.ucl.ac.uk/~asaygin


on Friday, 13 Oct, at 12:40
in MM 451, Informatics Institute, METU.
Please note that this is in METU, not BILKENT!

All interested are warmly invited.




Cognitively Inspired Artificial Intelligence Systems

Albert Ali Salah

Bogazici University

at 14:30 on Thursday, 15 June
in MM 451, Informatics Institute, METU.
Please note that this is in METU, not BILKENT!

All interested are warmly invited.




Vision in Cognitive Systems

Norbert Kruger

(MediaLab, Aalborg University Copenhagen, Denmark)

Artificial cognitive systems that are able to communicate with humans require the ability to express and recognise symbols in terms of spoken language or gestures. It is, however, currently assumed that symbolic entities already arise at early stages of neuronal processing. We have addressed this problem with a focus on visual scene analysis, and we have developed an Early Cognitive Vision system based on a new form of multi-modal, self-modifying image description which gradually approaches a symbolic level of representation.

First, information is coded in terms of local image descriptors (called visual primitives) that can be understood as a functional abstraction of hyper-columns in the primary visual cortex. These primitives code a rich repertoire of visual information such as position, orientation, contrast transition, colour, optic flow, and depth, as well as information about "junction-ness" or "edge-ness". They can be understood as condensed and meaningful descriptors of a local image patch.

In the second step, a number of functional relations are defined on our primitives, formalising contextual information as expressed in the classical Gestalt laws known from psychology, as well as spatio-temporal dependencies in Euclidean space. In this way, primitives and the relations defined upon them formalise structural dependencies between visual events, and entities emerge to which a semantic structure can be attributed, much in the sense of an early visual symbol.

In a third step, and in analogy to feedback processes in the human visual system, we make use of this contextual information: by using the primitives as initialisers of recurrent processes realising their functional relations, the locally extracted and hence ambiguous information becomes connected and thereby disambiguated. This finally results in meaningful, complete, and reliable scene interpretations that are applicable in artificial systems.

Within the two European projects PACOplus (2006-2009) and DRIVSCO (2006-2008), our Early Cognitive Vision system will serve as a basis for higher cognitive tasks. More specifically, it will be extended in two ways. (1) It will be used as a visual module for cognitive robots that are equipped with a rich repertoire of actuators and senses. In that way, visual perception will be linked to and grounded in actions by so-called Object-Action Complexes (OACs). Through the interaction of humans and robots, these OACs will be made explicit and are expected to lead to a shared language. (2) Aiming at a real-time vision machine, we will develop a hybrid architecture in which early vision processes are implemented on a highly parallel architecture, while the symbol-based early cognitive vision stage runs on a powerful sequential computer architecture. Finally, a project will be outlined in which, by integrating predictive processes into unsupervised feature learning schemes, we aim at learning symbols as the entities mediating these predictions.
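
To make the first processing steps above more concrete, here is a minimal Python sketch. It is an illustration under assumptions rather than the actual system: the class fields, the collinearity threshold, and the confidence update are simplified stand-ins for the multi-modal primitives, the Gestalt-style relations, and the recurrent disambiguation described in the abstract.

    import math
    from dataclasses import dataclass

    @dataclass
    class Primitive:
        """A condensed, multi-modal descriptor of a local image patch
        (a simplified stand-in for the visual primitives described above)."""
        x: float                # image position
        y: float
        orientation: float      # local edge orientation, in radians
        colour: tuple           # mean colour on each side of the edge
        contrast: float         # contrast transition across the edge
        flow: tuple             # local optic flow vector (dx, dy)
        depth: float            # depth, e.g. from stereo
        edge_ness: float        # confidence that the patch is an edge

    def _angle_diff(a: float, b: float) -> float:
        """Difference between two orientations (180-degree periodic)."""
        d = abs(a - b) % math.pi
        return min(d, math.pi - d)

    def collinear(p: Primitive, q: Primitive, tol: float = 0.2) -> bool:
        """Gestalt-style 'good continuation': both orientations roughly
        agree with the direction of the line joining the two primitives."""
        link = math.atan2(q.y - p.y, q.x - p.x)
        return (_angle_diff(p.orientation, link) < tol and
                _angle_diff(q.orientation, link) < tol)

    def reinforce(p: Primitive, q: Primitive, gain: float = 0.1) -> None:
        """A toy stand-in for the recurrent disambiguation step: mutually
        consistent primitives raise each other's edge confidence."""
        if collinear(p, q):
            p.edge_ness = min(1.0, p.edge_ness + gain)
            q.edge_ness = min(1.0, q.edge_ness + gain)

    p = Primitive(0.0, 0.0, 0.00, ((0, 0, 0), (255, 255, 255)),
                  0.8, (0.1, 0.0), 2.0, 0.5)
    q = Primitive(5.0, 0.3, 0.05, ((0, 0, 0), (255, 255, 255)),
                  0.7, (0.1, 0.0), 2.1, 0.4)
    reinforce(p, q)
    print(p.edge_ness, q.edge_ness)  # both confidences nudged upward

In the real system, such relations initialise recurrent processes over many primitives and modalities; the toy update above only hints at how mutually consistent local evidence becomes connected and disambiguated.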

at 14:00 on Thursday, 13 April
in room BMB5 (METU)
Please note that this is in METU, not BILKENT!