
Unfortunately, the registration deadline has already passed

If you paid the fee before 15 September but forgot to register, please write to us at team@comco2019.com

Photos

View in Google Photos:
https://photos.app.goo.gl/cpuGra9jSTnm2yCg7


Videos

Day 1

Karl Friston
Active inference & deep generative models
Dieuwke Hupkes
What do they learn? Neural Networks, compositionality and interpretability
Terrence Stewart
Cognitive Computing with Neurons: Large-scale Functional Brain Modelling with the Neural Engineering Framework
Pascal Nieters
Active dendrites implement probabilistic temporal logic gates

Sebastian Kahl
Modeling reciprocal belief coordination in social interaction based on free energy minimization
Colin Phillips
The role of time in misunderstanding

Day 2

Will Monroe
Leveraging the speaker-listener symmetry

Viviane Clay
Learning semantically meaningful world representations through embodiment

Alex Hernández-García
Learning robust visual representations using data augmentation invariance

Dimitrios Pinotsis
A new approach to characterizing sensory and abstract representations in the brain

Pieter de Vries
Mechanisms of self-organisation bridging the gap between low- and high-level cognition

Roger Levy
Expectation-based language processing in minds and machines


Computational
Cognition 2019

Osnabrück, Germany

Topics covered

The ComCo-2019 workshop aims to contribute to the re-integration of Cognitive Science and Artificial Intelligence. There is a schism between low- and high-level cognition: a great deal is known about the neural signals underlying basic sensorimotor processes, and a fair bit about the cognitive processes involved in reasoning, problem solving, or language. However, explaining how high-level cognition can arise from low-level mechanisms is a long-standing open problem in Cognitive Science.

In order to bridge this gap, this workshop tackles problems such as grammar learning, structured representations, or the production of complex behaviors with neural modeling.

With ComCo we are bringing together experts studying the mind from a computational point of view to better understand human and machine intelligence. If you are interested in Cognitive Science, Deep Learning, Neuroscience, Linguistics and related topics, this workshop is the right one for you!

Keynote Speakers

Karl Friston

Theoretical neuroscientist and authority on brain imaging. Research focused on models of functional integration in the human brain and the principles that underlie neuronal interactions.

Associated Talk: Day 1 09:00, Active inference & deep generative models

Colin Phillips

Professor of Linguistics at the University of Maryland, Director of the Maryland Language Science Center, Associate Director of the Neuroscience and Cognitive Science Program. Research focused on structure, learning and encoding of human language.

Associated Talk: Day 1 16:30, The role of time in misunderstanding

Terrence Stewart

Post-doctoral research associate working with Chris Eliasmith at the Centre for Theoretical Neuroscience at the University of Waterloo. Research focused on large-scale functional brain modelling.

Associated Talk: Day 1 13:30, Cognitive Computing with Neurons: Large-scale Functional Brain Modelling with the Neural Engineering Framework

Dieuwke Hupkes

PhD student at the Institute for Logic, Language and Computation, under the supervision of Willem Zuidema. Research focused on understanding how recurrent neural networks learn and process the structures that occur in natural language.

Associated Talk: Day 1 10:30, What do they learn? Neural Networks, compositionality and interpretability

Roger Levy

Associate Professor in the Department of Brain and Cognitive Sciences, MIT. Research focused on theoretical and applied questions in the processing and acquisition of natural language.

Associated Talk: Day 2 16:00, Expectation-based language processing in minds and machines

Tim Kietzmann

Assistant Professor at the Donders Institute in Nijmegen and Senior Research Associate in the MRC Cognition and Brain Sciences Unit, University of Cambridge. Research focused on principles of neural information processing using tools from machine learning and deep learning, applied to neuroimaging data recorded at high temporal (EEG/MEG) and spatial (fMRI) resolution.

Associated Talk: Day 2 09:00, Understanding vision at the interface of computational neuroscience and artificial intelligence

Will Monroe

Research scientist at Duolingo. Research focused on natural language processing, including computational pragmatics, natural language grounding, semantic parsing, and multilingual applications.

Associated Talk: Day 2 10:30, Leveraging the speaker-listener symmetry

Agenda

Day 1, Tuesday

Registration
Welcoming
Karl Friston

Active inference & deep generative models

Speaker: Karl Friston, Professor of Neuroscience
Affiliation: University College London, UK

This presentation considers deep temporal models in the brain. It builds on previous formulations of active inference to simulate behaviour and electrophysiological responses under deep (hierarchical) generative models of discrete state transitions. The deeply structured temporal aspect of these models means that evidence is accumulated over distinct temporal scales, enabling inferences about narratives (i.e., temporal scenes). We illustrate this behaviour in terms of Bayesian belief updating, and associated neuronal processes, to reproduce the epistemic foraging seen in reading. These simulations reproduce the sort of perisaccadic delay-period activity and local field potentials seen empirically, including evidence accumulation and place-cell activity. Finally, we exploit the deep structure of these models to simulate responses to local (e.g., font type) and global (e.g., semantic) violations, reproducing mismatch negativity and P300 responses respectively. These simulations are presented as an example of how to use basic principles to constrain our understanding of system architectures in the brain, and the functional imperatives that may apply to neuronal networks.
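The core operation behind the belief updating described above can be illustrated in a few lines. This is a minimal sketch of discrete-state Bayesian belief updating under a hypothetical two-state world with two observation types; the states, likelihoods, and observation sequence are illustrative assumptions, not taken from the talk's models.

```python
def normalise(p):
    """Rescale a list of non-negative weights so they sum to 1."""
    total = sum(p)
    return [x / total for x in p]

def belief_update(prior, likelihood_column):
    """Posterior over hidden states after one observation (Bayes' rule)."""
    return normalise([pr * lk for pr, lk in zip(prior, likelihood_column)])

# Hypothetical P(observation | hidden state); each list indexes the two states.
likelihood = {
    "blip":  [0.8, 0.3],
    "blank": [0.2, 0.7],
}

belief = [0.5, 0.5]  # flat prior over the two hidden states
for obs in ["blip", "blip", "blank"]:
    belief = belief_update(belief, likelihood[obs])

print([round(b, 3) for b in belief])
```

In the hierarchical models of the talk, updates like this run at several temporal scales at once, with slower levels accumulating evidence from faster ones.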
Coffee break
Dieuwke Hupkes

What do they learn? Neural Networks, compositionality and interpretability

Speaker: Dieuwke Hupkes, PhD student with Willem Zuidema
Affiliation: University of Amsterdam, the Netherlands

Artificial neural networks have become increasingly successful in many domains of natural language processing, such as machine translation, language modelling and even syntactic parsing, but we still have very little idea of how they achieve these impressive performances. In this talk, I will talk about research aiming at opening such _black box_ neural networks, with a specific focus on how they represent hierarchical compositionality. First, I will discuss studies in which neural networks are trained on controlled artificial languages with a strong hierarchical structure and present an investigation of how recurrent neural networks learn to process these structures. In relation to this, I will also introduce diagnostic classification, one of the techniques commonly used to probe the internal representations of neural networks. In the second part of this talk, I will consider neural language models that are trained on natural data. I will discuss several studies concerning their ability to process long-distance dependencies such as subject-verb agreement, commonly considered a proxy of their ability to process hierarchical structures. In this part of the talk, I will also highlight several other interpretation techniques, such as contextual decomposition, neuron ablation and diagnostic interventions.
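The idea behind diagnostic classification can be sketched briefly: a simple linear probe is trained on a network's hidden states to test whether some property is linearly decodable from them. The "hidden states" below are synthetic vectors in which one coordinate carries the label, an assumption made purely for illustration; real diagnostic classifiers are trained on states recorded from an RNN.

```python
import math
import random

random.seed(1)

def make_state(label):
    """Synthetic 5-d "hidden state"; coordinate 2 correlates with the label."""
    v = [random.gauss(0.0, 1.0) for _ in range(5)]
    v[2] += 2.0 if label else -2.0
    return v

data = [(make_state(y), y) for y in [0, 1] * 100]

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

# Train a logistic-regression probe by plain stochastic gradient descent.
w, bias, lr = [0.0] * 5, 0.0, 0.1
for _ in range(200):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + bias)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        bias -= lr * err

# Probe accuracy: how well is the property linearly decodable from the states?
accuracy = sum(
    (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + bias) > 0.5) == (y == 1)
    for x, y in data
) / len(data)
print(accuracy)
```

High probe accuracy suggests the property is explicitly encoded; the talk's diagnostic interventions go a step further and test whether the network actually uses that encoding.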
Poster session
Lunch break
Terrence Stewart

Cognitive Computing with Neurons: Large-scale Functional Brain Modelling with the Neural Engineering Framework

Speaker: Terrence Stewart, Post-doctoral research associate working with Chris Eliasmith
Affiliation: University of Waterloo, Canada

Biological systems manage to perform complex calculations using large numbers of low-power spiking components (i.e. neurons). This talk will present the Neural Engineering Framework: a general method for combining massively parallel components using weighted connections such that the overall system computes desired functions. This method was used to create Spaun, the world's first (and so far only) spiking-neuron brain model capable of performing multiple cognitive tasks. This talk will focus on the fact that using neurons to compute forces us to think about cognitive algorithms in new ways, and how this affects the representation of symbols, space, and time in biological systems.
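The Neural Engineering Framework's encode/decode principle can be sketched in miniature: a scalar is represented by the rates of a small neuron population, and recovered by least-squares linear decoding. The rectified-linear tuning curves and population size below are simplifying assumptions for illustration; real NEF models such as Spaun use spiking neurons and far larger populations.

```python
# Eight hypothetical neurons: encoder sign e and intercept-like offset c.
neurons = [(e, c) for e in (1.0, -1.0) for c in (-0.9, -0.5, -0.1, 0.3)]

def rates(x):
    """Rectified-linear tuning curves: each neuron's rate for input x."""
    return [max(0.0, e * x - c) for e, c in neurons]

# Sample the represented range and solve the regularised normal equations
# (A^T A + lam*I) d = A^T x for the linear decoding weights d.
xs = [i / 10.0 for i in range(-10, 11)]
A = [rates(x) for x in xs]
N, lam = len(neurons), 1e-3
G = [[sum(A[k][i] * A[k][j] for k in range(len(xs))) + (lam if i == j else 0.0)
      for j in range(N)] for i in range(N)]
v = [sum(A[k][i] * xs[k] for k in range(len(xs))) for i in range(N)]

def solve(M, b):
    """Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * mc for a, mc in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

d = solve(G, v)

def decode(x):
    """Estimate of x recovered linearly from the population's rates."""
    return sum(r * w for r, w in zip(rates(x), d))

print(round(decode(0.5), 3))
```

Computing a function f(x) rather than the identity only changes the target vector in the least-squares step, which is what lets NEF networks chain computations through weighted connections.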
Pascal Nieters

Contributed talk: Active dendrites implement probabilistic temporal logic gates

Speaker: Pascal Nieters, PhD student
Affiliation: University of Osnabrück, Germany

Poster Session + coffee break
Sebastian Kahl

Contributed talk: Modeling reciprocal belief coordination in social interaction based on free energy minimization

Speaker: Sebastian Kahl, Research Assistant
Affiliation: University of Bielefeld, Germany

Colin Phillips

The role of time in misunderstanding

Speaker: Colin Phillips, Professor of Linguistics
Affiliation: University of Maryland, USA

Human speaking and understanding is generally robust and successful, but selective vulnerabilities provide clues to the mechanisms that ensure success. We have examined a number of “linguistic illusions” as model systems for understanding comprehension and production mechanisms, using a combination of cognitive, neural, and computational approaches. A routine finding from our human experimentation is that illusions are even more selective than we expected. They depend on rather specific linguistic and temporal properties. Some types of computational models yield more insight into these highly specific error profiles.
Botanical garden tour
Dinner

Day 2, Wednesday

Tim Kietzmann

Understanding vision at the interface of computational neuroscience and artificial intelligence

Speaker: Tim Kietzmann
Affiliation: Donders Institute and University of Cambridge

This talk will describe our recent methodological advances in understanding information processing in the human brain and artificial vision systems. A central theme of our work is the combination of neuroimaging and deep learning, a powerful computational framework for obtaining models of cortical information processing and task-performing vision systems. Operating in this interdisciplinary research area combining neuroscience and machine learning, we have recently demonstrated that recurrent neural network architectures not only provide a better model for human visual processing, but also exhibit benefits for computer vision applications. This insight was made possible by a novel mechanism to directly infuse brain data into large-scale recurrent neural networks. The overall approach not only offers unique insights into neural computational principles, but also holds great future potential for improving AI systems by allowing them to tap into the highly robust and efficient visual processing capacities of the brain.
Coffee break
Will Monroe

Leveraging the speaker-listener symmetry

Speaker: Will Monroe, Research scientist at Duolingo
Affiliation: Duolingo, Stanford University, USA

Natural language presents a remarkable symmetry between production and understanding: as a result of participation in conversation as speakers and listeners, people (mostly) end up speaking the same language they hear. Rational speech act (RSA) models of pragmatic linguistic behavior propose that speaker and listener reason probabilistically about each other’s goals and private knowledge. I argue that RSA is a compelling way of implementing an inductive bias towards speaker-listener symmetry for machine understanding and production of natural language. When trained on various reference game corpora, a model combining RSA and machine learning produces more human-like utterances and interpretations than straightforward machine learning classifiers. Furthermore, the insight relating the listener and speaker roles enables the use of a generation model to improve understanding, as well as suggesting a new way to evaluate natural language generation systems in terms of an understanding task.
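The RSA recursion the abstract builds on can be written out on a toy reference game. This sketch uses the standard textbook example of three objects and four one-word utterances, not Monroe's data or models: a literal listener interprets truth-conditionally, a pragmatic speaker chooses utterances by informativity, and a pragmatic listener reasons about that speaker.

```python
objects = ["blue_square", "blue_circle", "green_square"]
utterances = ["blue", "green", "square", "circle"]

def literal(u, o):
    """Truth-conditional semantics: does utterance u apply to object o?"""
    return 1.0 if u in o.split("_") else 0.0

def normalise(d):
    """Rescale a dict of non-negative scores into a probability distribution."""
    z = sum(d.values())
    return {k: val / z for k, val in d.items()}

def L0(u):
    """Literal listener: P(o | u) proportional to truth * uniform prior."""
    return normalise({o: literal(u, o) for o in objects})

def S1(o, alpha=1.0):
    """Pragmatic speaker: utterance choice by literal-listener informativity
    (softmax with rationality alpha, i.e. scores proportional to P_L0 ** alpha)."""
    return normalise({u: L0(u)[o] ** alpha for u in utterances})

def L1(u, alpha=1.0):
    """Pragmatic listener: reasons about the pragmatic speaker."""
    return normalise({o: S1(o, alpha)[u] for o in objects})

print(L1("blue"))
```

Hearing "blue", the pragmatic listener favours the blue square over the blue circle, since a speaker meaning the circle would have said the more informative "circle": the speaker-listener symmetry doing inferential work.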
Poster session
Lunch
Viviane Clay

Contributed talk: Learning semantically meaningful world representations through embodiment

Speaker: Viviane Clay, PhD student
Affiliation: University of Osnabrück, Germany

Alex Hernández-García

Contributed talk: Learning robust visual representations using data augmentation invariance

Speaker: Alex Hernández-García, PhD student
Affiliation: University of Osnabrück, Germany

Dimitrios Pinotsis

Contributed talk: A new approach to characterizing sensory and abstract representations in the brain

Speaker: Dimitrios Pinotsis, Associate Professor
Affiliation: City, University of London, UK and MIT, USA

Coffee break
Pieter de Vries

Contributed talk: Mechanisms of self-organisation bridging the gap between low- and high-level cognition

Speaker: Pieter de Vries, Assistant Professor
Affiliation: University of Groningen, Netherlands

Roger Levy

Expectation-based language processing in minds and machines

Speaker: Roger Levy, Associate Professor in the Department of Brain and Cognitive Sciences
Affiliation: Massachusetts Institute of Technology, USA

Psycholinguistics and computational linguistics are the two fields most dedicated to accounting for the computational operations required to understand natural language. Today, both fields find themselves responsible for understanding the behaviors and inductive biases of "black-box" systems: the human mind and artificial neural network (ANN) models, respectively. In this talk I highlight how the two fields can productively contribute to one another, with a focus on incremental language processing. I first describe the surprisal theory of interpretation, prediction, and differential processing difficulty in human language comprehension. I then show how surprisal theory and controlled experimental paradigms from psycholinguistics can help us probe ANN language model behavior for evidence of human-like grammatical generalizations. We find that ANNs exhibit a range of subtle behaviors, including embedding-depth tracking and garden-pathing over long stretches of text, that suggest representations homologous to incremental syntactic state in human language processing. These ANNs also learn abstract word-order preferences and many generalizations about the long-distance filler-gap dependencies that are a hallmark of natural language syntax, perhaps most surprisingly including many filler-gap "island" constraints. However, even when trained on a human lifetime's worth of linguistic input these ANNs fail to learn some key basic facts about other core grammatical dependencies.
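The surprisal quantity at the heart of the theory above is simple to compute once a language model supplies conditional probabilities. This sketch estimates surprisal in bits from a bigram model over a made-up four-sentence corpus; the corpus and model are illustrative assumptions, whereas the work described in the talk uses trained neural language models.

```python
import math
from collections import Counter

# Toy corpus (hypothetical); "." marks sentence boundaries.
corpus = "the dog barked . the dog slept . the cat slept .".split()
bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus[:-1])

def surprisal(context, word):
    """-log2 of the bigram probability P(word | context), in bits."""
    p = bigrams[(context, word)] / unigrams[context]
    return -math.log2(p)

# "dog" is more predictable after "the" than "cat" is in this corpus, so it
# carries less surprisal, i.e. lower predicted processing difficulty.
print(surprisal("the", "dog"), surprisal("the", "cat"))
```

Surprisal theory predicts that per-word reading times and neural responses scale with exactly this quantity, which is what makes it a bridge between psycholinguistic experiments and language-model probabilities.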
Best poster award of 250€ kindly sponsored by Salt and Pepper
Session close

Visit

Venue

Bohnenkamp-Haus
Botanical Garden, Osnabrück
Albrechtstraße 29, 49076 Osnabrück

Travel

By plane

Münster Osnabrück International Airport
35 km / approx. 0.5 h. Shuttle bus connection X15. At least every hour
Dortmund Airport
110 km / approx. 2 h. Train connection DTM Dortmund Airport. At least every hour
Bremen Airport
120 km / approx. 1.5 h. Train connection Flughafen, Bremen. At least every hour
Hannover Airport
140 km / approx. 2 h. Train connection HAJ Hanover Airport. At least every hour
Düsseldorf Airport
175 km / approx. 2 h. Train connection Düsseldorf Flughafen. At least every hour
Hamburg Airport
245 km / approx. 2.5 h. Train connection Hamburg Airport. At least every hour
Frankfurt Airport
335 km / approx. 3.5-4.5 h. Train connection Frankfurt (M) Flughafen Fernbf. At least every hour

By train & bus

Airport shuttle bus Münster-Osnabrück (X15)
Shuttle service

Homepage of the Deutsche Bahn
Train connections and ticket reservation

Bus lines 21 and 22
Route planner app. Bus lines 21 and 22 are direct lines between Osnabrück's central station (Osnabrück Hbf/ZOB) and our building, bus stop Hochschulen Westerberg. At least every 20 minutes; 15-18 minutes travelling time

By car

It is also easy to get to Osnabrück by car since the city is located close to three major
highways (Autobahnen):
  • A1: running from the Ruhrgebiet to Hamburg
  • A30: running from Amsterdam to Berlin
  • A33: running from Bielefeld to Kassel
For details you may check a route planner (e.g. Map24 or Google Maps).

Accommodation

Hotels of different price categories are available in the vicinity of the venue, all between 0.8 km and 1.7 km away.

For low-budget accommodation, you may have a look at a lodge 0.7 km from the venue.


Poster sessions

Dear participants, posters should be in A0 format; portrait or landscape orientation is a free choice.

Day 1, Tuesday






Day 2, Wednesday