Seminars at the Faculty of Informatics

Talks@IDSIA: Two talks by Prof Markus Hutter

Markus Hutter is a former senior researcher at IDSIA who has recently moved to Australia, where he is now Associate Professor in the RSISE at the Australian National University in Canberra and an adjunct researcher at NICTA. He holds a PhD and a BSc in physics, and a Habilitation, an MSc, and a BSc in informatics. Since 2000 his research has centered on the information-theoretic foundations of inductive reasoning and reinforcement learning, which has resulted in more than 50 published research papers and several awards. His book "Universal Artificial Intelligence" (Springer, EATCS, 2005) develops the first sound and complete theory of AI. He also runs the Human Knowledge Compression Contest (50'000 Euro H-prize).

We are glad to host two special talks Professor Hutter will deliver on the 23rd and the 26th of June. The details follow below.

When: 23 June 2009, 11h30
Where: Room 200, 2nd Floor, Galleria 2, Manno
What: Generic Reinforcement Learning Agents
In this fairly non-technical talk I will give an introduction to generic learning agents and briefly discuss two recent instantiations.
Agent applications are ubiquitous in commerce and industry, and the sophistication, complexity, and importance of these applications are increasing rapidly; they include speech recognition systems, vision systems, search engines, auto-pilots, spam filters, and robots.
Current agent technology suffers from two problems: the agents constructed are usually specialised to a narrow domain, and they require considerable input from agent designers during construction. We can improve existing agent technology by building agent systems that can automatically acquire (learn) during deployment much of the knowledge that currently has to be built in by agent designers. This will greatly reduce the effort required for agent construction and yield agents that are more adaptive than at present and operate successfully in a wide variety of environments.
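The observe-act-learn cycle described above can be sketched in a few lines. This is my own minimal illustration, not material from the talk: the `BanditAgent` class and the two-action payoff rule are assumptions chosen only to show an agent acquiring its action values during deployment rather than having them built in.

```python
import random

class BanditAgent:
    """Learns action values online instead of having them built in."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}   # estimated value per action
        self.counts = {a: 0 for a in actions}     # times each action was tried

    def act(self):
        if random.random() < 0.1:                 # occasionally explore
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)  # otherwise exploit

    def learn(self, action, reward):
        # incremental running average of observed rewards
        self.counts[action] += 1
        self.values[action] += (reward - self.values[action]) / self.counts[action]

random.seed(1)
agent = BanditAgent(actions=["a", "b"])
for _ in range(2000):                             # deployment loop
    a = agent.act()
    r = 1.0 if a == "b" else 0.0                  # hypothetical environment: "b" pays
    agent.learn(a, r)
```

After the loop the agent has discovered on its own that action "b" is the rewarding one, knowledge nobody coded into it at construction time.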

Recommended reading:
Universal Algorithmic Intelligence: A mathematical top-down approach
Feature Markov Decision Processes

When: 26 June 2009, 11h30
Where: Room 200, 2nd Floor, Galleria 2, Manno
What: Feature Reinforcement Learning
In this more technical talk I will present a promising novel generic learning agent. It represents a general approach to learning that bridges the gap between theory and practice in reinforcement learning (RL). General-purpose, intelligent, learning agents cycle through sequences of observations, actions, and rewards that are complex, uncertain, unknown, and non-Markovian. On the other hand, RL is well-developed for small finite-state Markov decision processes (MDPs). Up to now, extracting the right state representation out of the bare observations, that is, reducing the general agent setup to the MDP framework, has been an art that involves significant effort by designers. The primary goal of feature reinforcement learning is to automate this reduction process and thereby significantly expand the scope of many existing RL algorithms and the agents that employ them.
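A toy sketch of the reduction idea (my illustration, not the paper's algorithm): a feature map `phi` compresses the agent's full history into a small state, and ordinary Q-learning then runs on the induced MDP. The map, the two-state environment, and the payoff rule are all assumptions made for illustration.

```python
import random

random.seed(0)

def phi(history):
    """Assumed feature map: reduce the whole history to its last
    observation. Feature RL searches over candidate maps and scores
    them; here the map is simply fixed for illustration."""
    return history[-1]

def toy_env(state, action):
    """Hypothetical 2-state environment: action 1 taken in state 1
    pays reward 1; the next observation is random."""
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    return random.choice([0, 1]), reward

Q = {}                              # Q-values over induced (feature) states
alpha, gamma, eps = 0.2, 0.9, 0.1
history = [0]

for _ in range(5000):
    s = phi(history)
    # epsilon-greedy action choice on the induced MDP
    if random.random() < eps:
        a = random.choice([0, 1])
    else:
        a = max((0, 1), key=lambda x: Q.get((s, x), 0.0))
    obs, r = toy_env(s, a)
    history.append(obs)
    s2 = phi(history)
    # standard Q-learning update on the feature states
    best_next = max(Q.get((s2, x), 0.0) for x in (0, 1))
    Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r + gamma * best_next - Q.get((s, a), 0.0))
```

Once a good map has turned the non-Markovian history into a Markov state, the standard RL machinery applies unchanged; the hard part, which feature RL aims to automate, is finding that map.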

Recommended reading:
Feature Reinforcement Learning: Part I. Unstructured MDPs
Feature Dynamic Bayesian Networks