From: Pete Bonasso
Subject: New AI Job In Houston
Date: 
Message-ID: <4l0a8m$b2j@aio.jsc.nasa.gov>
In past years we have experimented with our 3T robot control
architecture as the monitoring and control (M&C) system for NASA's
Advanced Life Support System (ALSS). The attached excerpt from one of
our papers describes the system. Recently, NASA has asked Metrica to
design the definitive M&C system for the large Human Rated Test
Facility, and we are using 3T as the basis for that design. This means
new work and the need for new hires.

We are looking in particular for someone with solid AI experience
(process control experience is a plus) who could learn 3T (LISP and
C) and help augment the design for this work. The work involved here
will be oriented more toward the LISP parts of the architecture, so
you should respond only if you're into some of the more traditional AI
stuff (model-based reasoning, expert systems, etc.) rather than
genetic algorithms, fuzzy logic, etc.

"Houston Texas!" you exclaim. "Jeez, who do you think you're kidding?"
Well, we are all having fun here, so there! But the good part of this
design phase of the work (which could involve some prototype coding if
you're interested) is that you (or one of your students) can do it on
a consulting basis, remotely. The Internet is an integral part of our
work life, so we could consider someone working in a virtual office if
necessary.

You'd be working with me, Dave Kortenkamp, and a very bright, engaging
team of folks in the AI and Robotics group at Metrica, Inc., a small
minority-owned company with offices all over the world.

We're in a bit of a hurry, so if you or someone you know might be
interested in this, email me and I'll get our manager to fill you in
on the details.

Thanks,
Pete
--------------------------------------------------------------------

 We are also exploring the use of \threeT\ for closed ecological life
 support systems (CELSS). Previous CELSS experiments such as those
 conducted in the U.S. and in Russia have shown that most of the crew's
 time is spent in crop management and monitoring environmental control
 systems. The Johnson Space Center is currently executing an advanced
 life support project which calls for subsystem experiments in an Early
 Human Test Initiative (EHTI) followed by integrated system experiments
 in a Human Rated Test Facility (HRTF).  This plan calls for examining
 the use of robotics and automation to reduce crew time spent on crop
 management in the HRTF.

 We have recently extended \threeT\ to operate with a physical life
 support chamber for EHTI, consisting of a wheat crop linked to a human
 through an airlock. There are ten subsystems to be monitored and
 controlled along with a simulation of the robot we are designing to
 move plant trays in and out of the chamber to support harvesting and
 planting. We divided the systems into five ``agents'': climate
 (thermal, dew-point, lighting), gases (o2-co2 exchange), fluids (flow
 of hydroponics through the growing areas), nutrients (control of the
 pH, dissolved o2 and conductivity of the hydroponics), and the
 traybot. Finally, we have developed AP plan operators which will
 determine the planting cycles of various crops to support gas exchange
 as well as dietary requirements of the crew, set up long term profiles
 of the life support system agents, and assign the use of the traybot
 system.
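
 The five-agent decomposition above can be sketched as a simple table.
 This is a hypothetical Python illustration only; the actual system is
 implemented in LISP and C, and these identifiers are our invention:

```python
# Illustrative sketch of the five-agent decomposition described in the
# excerpt. Agent and subsystem names follow the text; the real system
# is written in LISP and C.
AGENTS = {
    "climate":   ["thermal", "dew-point", "lighting"],
    "gases":     ["o2-co2 exchange"],
    "fluids":    ["hydroponics flow through growing areas"],
    "nutrients": ["pH", "dissolved o2", "conductivity"],
    "traybot":   ["tray transport for harvesting and planting"],
}

def subsystems_for(agent):
    """Return the subsystems monitored and controlled by one agent."""
    return AGENTS[agent]
```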

 After the planner sets up the monitoring and harvesting schedules,
 the architecture monitors over 200 sensor channels every 15 seconds
 to generate climate control settings, hydroponics solution changes,
 valve and pump settings for flow control, and commands for the gas
 injectors and scrubbers to adjust the o2/co2
 concentrations. Simultaneously, traybot is commanded to various tray
 locations to retrieve and replace plant trays in support of the
 current harvest or planting operation.
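
 The 15-second monitoring cycle can be sketched as a periodic loop that
 samples the sensor channels and lets each agent's controller turn its
 readings into actuator commands. This is a hypothetical Python sketch,
 not the LISP/C implementation; the function names are assumptions:

```python
import time

CYCLE_SECONDS = 15  # the excerpt's 15-second monitoring period

def control_cycle(read_sensors, controllers):
    """One pass: sample all sensor channels, then let each agent's
    controller map the readings to that agent's actuator commands."""
    readings = read_sensors()
    return {agent: control(readings) for agent, control in controllers.items()}

def run(read_sensors, controllers, cycles):
    """Repeat the cycle; the real architecture runs continuously."""
    for _ in range(cycles):
        commands = control_cycle(read_sensors, controllers)
        # ... dispatch commands to injectors, scrubbers, valves, pumps ...
        time.sleep(CYCLE_SECONDS)
```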

 This broad form of control of both processes and mechanization is
 possible due to the inherent multi-agent nature of the \threeT\
 architecture. As we have discussed, AP is inherently a multi-agent
 system. RAPs puts no limits on which agent can carry out a task, but
 we needed to be able to send enabling and disabling messages to the
 right agent at the right time. So we extended the interface from the
 RAPs to the skills to allow the first bound variable of each skill to
 be an agent name. This allows the communication mechanism to send
 messages to, and receive messages from, the appropriate agent.
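
 The idea of routing on the first bound variable can be sketched as a
 small dispatcher keyed on (agent, skill). Again, this is a hypothetical
 Python sketch of the mechanism described above, not the actual LISP/C
 interface, and all names here are illustrative:

```python
# Illustrative sketch: skill messages carry the agent name as their
# first bound variable, so a router can deliver enabling and disabling
# messages to the right agent.
class SkillRouter:
    def __init__(self):
        self.skills = {}  # (agent, skill) -> handler

    def register(self, agent, skill, handler):
        self.skills[(agent, skill)] = handler

    def send(self, skill, agent, *args):
        """Route a skill message; the agent name is the first bound
        variable, the rest are the skill's own arguments."""
        return self.skills[(agent, skill)](*args)
```

 For example, `router.send("enable", "traybot", "fetch-tray")` would
 reach only the traybot agent's handler.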

 This is by far the largest application of \threeT\ to date. Though the
 planning level is not much more involved than for the EVAHR program,
 there are 40 RAPs and 67 skills for the five agents. The system is
 capable of running continuously, with our longest test being 36 hours.