The ComCo-2019 workshop aims to contribute to the re-integration of Cognitive Science and Artificial Intelligence. There is a schism between low- and high-level cognition: a lot is known about the neural signals underlying basic sensorimotor processes, and a fair bit about the cognitive processes involved in reasoning, problem solving, and language. However, explaining how high-level cognition can arise from low-level mechanisms is a long-standing open problem in Cognitive Science.
To help bridge this gap, the workshop tackles problems such as grammar learning, structured representations, and the production of complex behaviors through neural modeling.
With ComCo we are bringing together experts studying the mind from a computational point of view to better understand human and machine intelligence. If you are interested in Cognitive Science, Deep Learning, Neuroscience, Linguistics and related topics, this workshop is the right one for you!
Theoretical neuroscientist and authority on brain imaging. Research focused on models of functional integration in the human brain and the principles that underlie neuronal interactions.
Associated Talk: Day 1 09:00, Active inference & deep generative models
Professor of Linguistics at the University of Maryland, Director of the Maryland Language Science Center, Associate Director of the Neuroscience and Cognitive Science Program. Research focused on structure, learning and encoding of human language.
Associated Talk: Day 1 16:30, The role of time in misunderstanding
Post-doctoral research associate working with Chris Eliasmith at the Centre for Theoretical Neuroscience at the University of Waterloo. Research focused on large-scale functional brain modelling.
Associated Talk: Day 1 13:30, Cognitive Computing with Neurons: Large-scale Functional Brain Modelling with the Neural Engineering Framework
PhD student at the Institute for Logic, Language and Computation, under the supervision of Willem Zuidema. Research focused on understanding how recurrent neural networks learn and process the structures that occur in natural language.
Associated Talk: Day 1 10:30, What do they learn? Neural Networks, compositionality and interpretability
Associate Professor in the Department of Brain and Cognitive Sciences, MIT. Research focused on theoretical and applied questions in the processing and acquisition of natural language.
Associated Talk: Day 2 16:00, Expectation-based language processing in minds and machines
Assistant Professor at the Donders Institute in Nijmegen and Senior Research Associate in the MRC Cognition and Brain Sciences Unit, University of Cambridge. Research focused on principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution.
Associated Talk: Day 2 09:00, Understanding vision at the interface of computational neuroscience and artificial intelligence
Research scientist at Duolingo. Research focused on natural language processing, including computational pragmatics, natural language grounding, semantic parsing, and multilingual applications.
Associated Talk: Day 2 10:30, Leveraging the speaker-listener symmetry
Agenda
Day 1, Tuesday
Registration
Welcoming
Active inference & deep generative models
Speaker: Karl Friston, Professor of Neuroscience Affiliation: University College London, UK
This presentation considers deep temporal models in the brain. It builds on previous formulations of active inference to simulate behaviour and electrophysiological responses under deep (hierarchical) generative models of discrete state transitions. The deeply structured temporal aspect of these models means that evidence is accumulated over distinct temporal scales, enabling inferences about narratives (i.e., temporal scenes). We illustrate this behaviour in terms of Bayesian belief updating – and associated neuronal processes – to reproduce the epistemic foraging seen in reading. These simulations reproduce the sort of perisaccadic delay-period activity and local field potentials seen empirically, including evidence accumulation and place-cell activity. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations, reproducing mismatch negativity and P300 responses respectively. These simulations are presented as an example of how to use basic principles to constrain our understanding of system architectures in the brain – and the functional imperatives that may apply to neuronal networks.
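The Bayesian belief updating described in the abstract rests on a simple core operation: multiplying a prior over hidden states by the likelihood of an observation and renormalising. Below is a minimal sketch of this update for discrete state transitions; the two-state transition and observation matrices are invented for illustration and are not Friston's actual model.

```python
import numpy as np

def update_belief(prior, likelihood):
    """Posterior over hidden states: normalise likelihood * prior."""
    posterior = likelihood * prior
    return posterior / posterior.sum()

# Two hidden states, e.g. two candidate words a reader might be fixating.
B = np.array([[0.9, 0.1],   # transition model P(s_t | s_{t-1}), columns sum to 1
              [0.1, 0.9]])
A = np.array([[0.8, 0.3],   # observation model P(o | s): rows = observations
              [0.2, 0.7]])

belief = np.array([0.5, 0.5])          # flat prior over states
for obs in [0, 0, 1]:                  # a short sequence of observations
    # predict forward through the transition model, then condition on obs
    belief = update_belief(B @ belief, A[obs])
    print(belief)
```

In the hierarchical models of the talk, updates like this run at several temporal scales at once, with slower levels contextualising faster ones.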
Coffee break
What do they learn? Neural Networks, compositionality and interpretability
Speaker: Dieuwke Hupkes, PhD student with Willem Zuidema Affiliation: University of Amsterdam, the Netherlands
Artificial neural networks have become increasingly successful in
many domains of natural language processing, such as machine translation,
language modelling and even syntactic parsing, but we still have very
little idea of how they achieve these impressive performances. In this
talk, I will talk about research aiming at opening such _black box_ neural
networks, with a specific focus on how they represent hierarchical compositionality.
First, I will discuss studies in which neural networks are trained on controlled artificial
languages with a strong hierarchical structure and present an investigation of
how recurrent neural networks learn to process these structures. In
relation to this, I will also introduce diagnostic classification, one
of the techniques commonly used to probe the internal representations
of neural networks. In the second part of this talk, I will consider neural
language models that are trained on natural data. I will discuss several studies
concerning their ability to represent long-distance dependencies such as subject-verb agreement,
commonly considered a proxy for their ability to process hierarchical structures. In
this part of the talk, I will also highlight several other interpretation
techniques, such as contextual decomposition, neuron ablation and diagnostic interventions.
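Diagnostic classification, mentioned in the abstract, amounts to training a simple supervised classifier ("probe") on a network's hidden states to test whether they linearly encode some property of interest. A minimal sketch with synthetic hidden states follows; in real studies the activations would come from a trained recurrent network and the label would be a linguistic property such as number agreement.

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.normal(size=(500, 16))       # fake hidden states: 500 tokens, 16 units
y = (H[:, 3] > 0).astype(float)      # property to diagnose, encoded in unit 3

# Logistic-regression probe trained by plain gradient descent.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(H @ w + b)))
    grad = p - y
    w -= 0.1 * H.T @ grad / len(y)
    b -= 0.1 * grad.mean()

pred = 1 / (1 + np.exp(-(H @ w + b))) > 0.5
accuracy = (pred == (y > 0.5)).mean()
print(f"probe accuracy: {accuracy:.2f}")  # high accuracy -> property is decodable
```

High probe accuracy is evidence that the representation carries the property; a control task on shuffled labels is typically used to rule out probe overfitting.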
Poster session
Lunch break
Cognitive Computing with Neurons: Large-scale Functional Brain Modelling with the Neural Engineering Framework
Speaker: Terrence Stewart, Post-doctoral research associate working with Chris Eliasmith Affiliation: University of Waterloo, Canada
Biological systems manage to perform complex calculations
using large numbers of low-power spiking components (i.e. neurons).
This talk will present the Neural Engineering Framework: a general
method for combining massively parallel components using weighted
connections such that the overall system computes desired functions.
This method was used to create Spaun, the world's first (and so far
only) spiking-neuron brain model capable of performing multiple
cognitive tasks. This talk will focus on the fact that using neurons
to compute forces us to think about cognitive algorithms in new ways,
and how this affects the representation of symbols, space, and time in
biological systems.
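The Neural Engineering Framework's representation principle can be sketched in a few lines: a population of neurons with heterogeneous random tuning encodes a value, and linear least-squares decoders recover it from the firing rates. The rectified-linear rate neurons and parameter ranges below are illustrative simplifications (NEF models such as Spaun typically use spiking LIF neurons).

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 100
encoders = rng.choice([-1.0, 1.0], size=n_neurons)  # preferred directions
gains = rng.uniform(0.5, 2.0, size=n_neurons)
biases = rng.uniform(-1.0, 1.0, size=n_neurons)

def rates(x):
    """Rectified-linear tuning curves for a scalar stimulus x."""
    return np.maximum(0.0, gains * (encoders * x) + biases)

# Solve for decoders d minimising ||A d - x|| over sampled stimuli.
xs = np.linspace(-1, 1, 200)
A = np.array([rates(x) for x in xs])                # (200, n_neurons) activities
decoders, *_ = np.linalg.lstsq(A, xs, rcond=None)

x_hat = rates(0.5) @ decoders                       # decode a held value
print(f"decoded estimate of 0.5: {x_hat:.3f}")
```

The second NEF principle, transformation, follows the same recipe: solving for decoders against f(x) instead of x yields connection weights that make the next population compute f.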
Contributed talk: Active dendrites implement probabilistic temporal logic gates
Speaker: Pascal Nieters, PhD student Affiliation: University of Osnabrück, Germany
Poster Session + coffee break
Contributed talk: Modeling reciprocal belief coordination in social interaction based on free energy minimization
Speaker: Sebastian Kahl, Research Assistant Affiliation: University of Bielefeld, Germany
The role of time in misunderstanding
Speaker: Colin Phillips, Professor of Linguistics Affiliation: University of Maryland, USA
Human speaking and understanding are generally robust and successful,
but selective vulnerabilities provide clues to the mechanisms that
ensure success. We have examined a number of “linguistic illusions”
as model systems for understanding comprehension and production mechanisms,
using a combination of cognitive, neural, and computational approaches.
A routine finding from our human experimentation is that illusions are
even more selective than we expected. They depend on rather specific
linguistic and temporal properties. Some types of computational models
yield more insight into these highly specific error profiles.
Botanical garden tour
Dinner
Day 2, Wednesday
Understanding vision at the interface of computational neuroscience and artificial intelligence
Speaker: Tim Kietzmann Affiliation: Donders Institute and University of Cambridge
This talk will describe our recent methodological advances in
understanding information processing in the human brain and
artificial vision systems. A central theme of our work is the
combination of neuroimaging and deep learning, a powerful computational
framework for obtaining models of cortical information processing
and task-performing vision systems. Operating in this interdisciplinary
research area combining neuroscience and machine learning, we have
recently demonstrated that recurrent neural network architectures
not only provide a better model for human visual processing, but
that they also exhibit benefits for computer vision applications.
This insight was made possible by a novel mechanism to directly infuse
brain data into large-scale recurrent neural networks. The overall
approach not only offers unique insights into neural computational
principles, but also holds great future potential for improving AI
systems by allowing them to tap into the highly robust and efficient
visual processing capacities of the brain.
Coffee break
Leveraging the speaker-listener symmetry
Speaker: Will Monroe, Research scientist at Duolingo Affiliation: Duolingo, Stanford University, USA
Natural language presents a remarkable symmetry between production and understanding: as a result of participation in conversation as speakers and listeners, people (mostly) end up speaking the same language they hear. Rational speech act (RSA) models of pragmatic linguistic behavior propose that speaker and listener reason probabilistically about each other’s goals and private knowledge. I argue that RSA is a compelling way of implementing an inductive bias towards speaker-listener symmetry for machine understanding and production of natural language. When trained on various reference game corpora, a model combining RSA and machine learning produces more human-like utterances and interpretations than straightforward machine learning classifiers. Furthermore, the insight relating the listener and speaker roles enables the use of a generation model to improve understanding, as well as suggesting a new way to evaluate natural language generation systems in terms of an understanding task.
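The RSA recursion can be sketched on a toy reference game: a literal listener conditions on an utterance's truth, a pragmatic speaker soft-maximises the literal listener, and a pragmatic listener inverts the speaker. The three-referent lexicon below is the standard illustrative example for RSA, not a model from the talk, and the rationality parameter is fixed at 1.

```python
import numpy as np

# Rows = utterances ("blue", "circle", "square");
# columns = referents (blue circle, blue square, green square);
# 1.0 means the utterance is literally true of the referent.
truth = np.array([[1., 1., 0.],   # "blue"
                  [1., 0., 0.],   # "circle"
                  [0., 1., 1.]])  # "square"

def normalise(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

L0 = normalise(truth, axis=1)  # literal listener:   P(referent | utterance)
S1 = normalise(L0, axis=0)     # pragmatic speaker:  P(utterance | referent)
L1 = normalise(S1, axis=1)     # pragmatic listener: P(referent | utterance)

# Hearing "blue", the pragmatic listener favours the blue square (0.6 vs 0.4):
# a speaker meaning the blue circle would more likely have said "circle".
print(L1[0])
```

The speaker-listener symmetry the talk leverages is visible in the recursion itself: speaker and listener distributions are each obtained by renormalising the other along the opposite axis.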
Poster session
Lunch
Contributed talk: Learning semantically meaningful world representations through embodiment
Speaker: Viviane Clay, PhD student Affiliation: University of Osnabrück, Germany
Contributed talk: Learning robust visual representations using data augmentation invariance
Speaker: Alex Hernández-García, PhD student Affiliation: University of Osnabrück, Germany
Contributed talk: A new approach to characterizing sensory and abstract representations in the brain
Speaker: Dimitrios Pinotsis, Associate Professor Affiliation: City, University of London, UK and MIT, USA
Coffee break
Contributed talk: Mechanisms of self-organisation bridging the gap between low- and high-level cognition
Speaker: Pieter de Vries, Assistant Professor Affiliation: University of Groningen, Netherlands
Expectation-based language processing in minds and machines
Speaker: Roger Levy, Associate Professor in the Department of Brain and Cognitive Sciences Affiliation: Massachusetts Institute of Technology, USA
Psycholinguistics and computational linguistics are the two fields
most dedicated to accounting for the computational operations required
to understand natural language. Today, both fields find themselves
responsible for understanding the behaviors and inductive biases of
"black-box" systems: the human mind and artificial neural network
(ANN) models, respectively. In this talk I highlight how the two
fields can productively contribute to one another, with a focus on
incremental language processing. I first describe the surprisal
theory of interpretation, prediction, and differential processing
difficulty in human language comprehension. I then show how surprisal
theory and controlled experimental paradigms from psycholinguistics
can help us probe ANN language model behavior for evidence of human-like
grammatical generalizations. We find that ANNs exhibit a range of subtle
behaviors, including embedding-depth tracking and garden-pathing over long
stretches of text, that suggest representations homologous to incremental
syntactic state in human language processing. These ANNs also learn
abstract word-order preferences and many generalizations about the
long-distance filler-gap dependencies that are a hallmark of natural
language syntax, perhaps most surprisingly including many filler-gap
"island" constraints. However, even when trained on a human lifetime's
worth of linguistic input these ANNs fail to learn some key basic facts
about other core grammatical dependencies.
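Surprisal theory's central quantity is easy to state: the predicted processing difficulty of a word is proportional to its surprisal, -log2 P(word | context). A minimal sketch with an invented bigram model follows; studies like those in the talk use far richer language models, but the quantity is the same.

```python
import math

# Toy bigram counts, invented for illustration. "horse raced" is the rare,
# garden-path-inducing continuation; "horse ate" is the frequent one.
bigram_counts = {
    ("the", "horse"): 8, ("the", "barn"): 2,
    ("horse", "raced"): 1, ("horse", "ate"): 9,
}

def surprisal(context, word):
    """Surprisal in bits under a bigram model estimated from raw counts."""
    total = sum(c for (ctx, _), c in bigram_counts.items() if ctx == context)
    p = bigram_counts[(context, word)] / total
    return -math.log2(p)

print(surprisal("horse", "ate"))    # frequent continuation: low surprisal
print(surprisal("horse", "raced"))  # rare continuation: high surprisal
```

Under surprisal theory, the high-surprisal continuation predicts longer reading times at that word, which is how controlled psycholinguistic paradigms can probe ANN language models for human-like expectations.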
Best poster award of 250€ kindly sponsored by Salt and Pepper
Bus lines 21 and 22 are direct lines between Osnabrück's central station (Osnabrück Hbf/ZOB) and our building, bus stop Hochschulen Westerberg.
Buses run at least every 20 minutes; travelling time is 15–18 minutes.
By car
It is also easy to get to Osnabrück by car since the city is located close to three major highways (Autobahnen):
A1: running from the Ruhrgebiet to Hamburg
A30: running from Amsterdam to Berlin
A33: running from Bielefeld to Kassel
For details, you may check a route planner (e.g. Map24 or Google Maps).
Accommodation
Please find below (in alphabetical order) a rough overview of hotels in different price categories in the vicinity of the venue: