From: Jeffrey C. Schlimmer
Subject: Machine Learning Conference Workshops
Date: 
Message-ID: <D4soE9.2CC@serval.net.wsu.edu>
		      WORKSHOP CALLS FOR PAPERS
	 Twelfth International Conference on Machine Learning

Tahoe City, California, U.S.A.
July 9, 1995


AGENTS THAT LEARN FROM OTHER AGENTS

    Agent-oriented learning is currently receiving a great deal of
attention among machine learning researchers. The purpose of this
workshop is to draw researchers from diverse areas of machine
learning, such as learning in the context of distributed AI, planning
and learning, software agents, knowledge acquisition, reinforcement
learning, computational learning theory, neural networks, genetic
algorithms, explanation-based learning, and multistrategy learning to
address the unifying theme of agents that learn from other agents.
This workshop is a special opportunity for empirically-oriented
machine learning researchers to interact with the theoretical COLT
community on a topic of mutual interest.

Submission deadline: May 1, 1995.


APPLYING MACHINE LEARNING IN PRACTICE

    The purpose of this workshop is to characterize the expertise used
during the application of ML algorithms to real-world problems and, in
doing so, to develop a better understanding of how to use ML tools
successfully. We solicit descriptions of the expertise exhibited
during the complete sequential decision process leading to successful
ML applications and a discussion of what guided the decision making
and selection of the approaches used at each step.

Submission deadline: May 1, 1995.


GENETIC PROGRAMMING - FROM THEORY TO REAL-WORLD APPLICATIONS

    The goal of the workshop is to shed light on the methodology for
understanding, explaining, and controlling GP search, and to show how
these issues are reflected in GP frameworks and in successful or
innovative applications.

Submission deadline: April 24, 1995.


LEARNING FROM EXAMPLES VERSUS PROGRAMMING BY DEMONSTRATION

    Inductive Learning from Examples (LfE) is a well established
subject in Machine Learning. "Pure" LfE is performed automatically
without any human interaction. Programming by Demonstration (PbD) on
the other hand can be seen as some kind of "extreme" form of
user-supported LfE where the user continually interacts with a PbD
system. Its focus is almost exclusively the learning of programs.
PbD researchers have been disappointed with standard ML algorithms:
they require too many examples or too strong a domain theory; they
learn how to classify rather than "useful" concepts that
deterministically generate or modify data; and they easily slip out of
the user's control.

Submission deadline: April 7, 1995.


VALUE FUNCTION APPROXIMATION IN REINFORCEMENT LEARNING

    This workshop will explore the issues that arise in reinforcement
learning when the value function cannot be learned exactly, but must
be approximated. It has long been recognized that approximation is
essential on large, real-world problems because the state space is too
large to permit table-lookup approaches. In addition, we need to
generalize from past experiences to future ones, which inevitably
involves making approximations. In principle, all methods for learning
from examples are relevant here, but in practice only a few have been
tried, and fewer still have been effective. The objective of this
workshop is to bring together all the strands of reinforcement
learning research that bear directly on the issue of value function
approximation in reinforcement learning. We hope to survey what works
and what doesn't, and to achieve a better understanding of what makes
value function approximation special as a learning-from-examples
problem.
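The contrast between table-lookup and approximation described above can be
sketched concretely. The following is a minimal illustration, not drawn from
the workshop materials: linear TD(0) with state-aggregation features on a
simple random-walk task (all task parameters here are invented for the
example). Instead of one table entry per state, the value function is a
weighted sum of a few coarse features, so experience in one state
generalizes to its neighbors.

```python
import random

def features(state, n_states, n_groups):
    # State aggregation: a one-hot vector over coarse groups of states,
    # so many states share (and update) the same weight.
    phi = [0.0] * n_groups
    phi[state * n_groups // n_states] = 1.0
    return phi

def td0_linear(n_states=20, n_groups=5, episodes=500, alpha=0.1, seed=0):
    """Linear TD(0) on a random walk: start in the middle, step left or
    right uniformly; reward -1 at the left edge, +1 at the right edge."""
    rng = random.Random(seed)
    w = [0.0] * n_groups
    for _ in range(episodes):
        s = n_states // 2
        while True:
            s2 = s + rng.choice([-1, 1])
            if s2 < 0:
                r, done = -1.0, True
            elif s2 >= n_states:
                r, done = 1.0, True
            else:
                r, done = 0.0, False
            phi = features(s, n_states, n_groups)
            v = sum(wi * pi for wi, pi in zip(w, phi))
            v2 = 0.0 if done else sum(
                wi * pi for wi, pi in zip(w, features(s2, n_states, n_groups)))
            delta = r + v2 - v          # TD error (gamma = 1, episodic task)
            for i in range(n_groups):   # gradient step on the active feature
                w[i] += alpha * delta * phi[i]
            if done:
                break
            s = s2
    return w
```

The learned weights rise from negative on the left groups to positive on the
right, approximating the true value function with only n_groups parameters
instead of n_states table entries; the same tension between memory savings
and approximation error is what the workshop proposes to examine at scale.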

Submission deadline: May 1, 1995.


For further information, please consult the conference's World-Wide
Web pages at http://www.eecs.wsu.edu/~schlimme/ml95.html .