The Cognition and Affect Project being conducted at the University of Birmingham makes use of the design-based approach to explore architectures for intelligent autonomous agents in dynamic environments. Since the start of the project in 1992, progress has been made in several areas. However, the project addresses a vast undertaking, and has necessarily focused on a subset of issues. General biases in the work conducted so far include a focus on control issues (rather than on representational issues), a focus on mental processes that are typical of human beings, and a largely top-down approach.
The aim of this paper is to outline the goals, views and progress of the project.
(Sloman 1992a) lists the aims of the project as:
The progress towards these original goals will be evaluated, and it will be seen that new goals have been developed along the way.
Like any scientific work, the Cognition and Affect Project is conducted within a conceptual framework. This section attempts to make explicit some of these views.
The mind is seen as a very sophisticated information processing control system which deals largely with non-quantitative information, and which must use representations with semantic content in order to deal with abstract concepts (as opposed to only using direct feedback loops). Although minds are control systems, they are unlike the quantitative control systems studied in control theory and modelled using differential equations. Rather, they are more likely to involve virtual machines whose architecture undergoes structural and functional changes. New concepts and new forms of mathematics are needed to work with such systems. For more detail, see (Sloman 1993b).
The view that a mind is a control system is not provable or refutable: it defines an approach to the study of mind. (Sloman 1993b)
The study of information processing is concerned with functional systems which change discretely or continuously over time in response to qualitative and/or quantitative input. Such systems may make use of representations with semantic content and are composed of functional processes which operate on those representations; they are therefore best understood in terms of their architecture. Information processing is the approach best suited to the study of control systems like minds.
The project addresses only a small subset of the functional requirements for human-like intelligence and work is at a primitive stage. Other research programmes are complementary to the project; together they constitute a parallel search of the problem space. Thus it will be of interest to attempt to integrate the project's findings with other work. Indeed, it is hoped that the project will be able to stimulate other work, such as empirical testing, which is not within its scope. An interdisciplinary approach drawing on philosophy, psychiatry, psychology, biology, neuroscience and other fields is necessary in order to make use of both empirical data and theoretical analysis. The problems being addressed are inherently interdisciplinary and the approach to studying them should not be restricted by traditional field boundaries.
Our ordinary language for describing mental states is useful, but inconsistent and vague. While folk psychology is sufficient for most everyday uses, for scientific purposes we need to replace it with explanations based on architectures of information processing mechanisms. This may take place through an intermediate stage of bootstrapping from existing terminology. These issues are discussed in more detail in section 4.1.
An integral part of the project is the use of the design-based approach (discussed in section 3). Other approaches to scientific investigation may be termed phenomena-based or semantics-based (Sloman 1993a). Phenomena-based research collects empirical data to either support or refute theories. Semantics-based research uses the techniques of conceptual analysis to study concepts and the relations between them. Both can be useful as part of the design-based approach.
An architecture is some level of abstraction of a functional system. In trying to understand a system, we must focus on its functional aspects (what it does), rather than on implementation details (how it does it). `Architecture dominates mechanism' sums up the view that architectures have a greater influence on the capacities of a system than the mechanisms it is built from. Specific implementations are less important than the virtual machines they support (Sloman 1993b). For instance, a program may be compiled repeatedly for use on different systems. This results in virtual machines that are quite different at a low level (the machine code), but which may be functionally identical at a higher level (i.e. from the user's perspective).
Unfortunately, some critics of AI have made the mistake of focusing on implementations and algorithms rather than architectures.
An architecture may be broad in that it consists of a range of mechanisms implementing diverse functions. For instance, a complex agent may require mechanisms to handle such things as motivation, communication, learning, memory, planning, sensation, and action. At the same time, the components of an architecture may be shallow in the sense that they do not provide `in-depth' coverage of their area. An example would be a simulated perceptual system that directly transfers information from the simulation of the environment to the agent's internal short term memory.
In order to understand motivation and emotion we need to study complete agents, rather than just specific mechanisms. One reason for doing so is that there may be significant emergent states and processes that occur in broad architectures but not narrow ones. An example of such an emergent state would be perturbance.
This means we initially have to work with `broad but shallow' agents (Bates et al. 1991). Depth will come later.
The design-based approach involves taking the role of an engineer who is trying to design a system that meets certain requirements, and is inspired by software engineering and by conceptual analysis in philosophy. It involves analysing alternative sets of requirements, designs and implementations in an attempt to establish the nature of their relationships (Sloman & Humphreys 1992; Sloman 1994, 1995). It allows a high level functional comparison of systems, both natural and artificial, despite differences in origin or implementation (architecture dominates mechanism). This comparison seeks to identify which aspects of a system are essential for given functions and which are not. Note that this approach does not require a full understanding of the requirements or the available tools at the outset, nor does it assume that there is a single correct design to be found.
Given that the capacities of an information processing system depend primarily on its architectural features, to understand it we need to understand its design. We can locate any functional system, whether artificial or natural, in a space of possible designs. Just as designs can be viewed at various levels of abstraction, so can design space. Similarly, we can view sets of requirements for a system as forming an abstract niche space. A niche is not a physical location: a physical area may provide many possibly interacting niches concurrently. The concept of niche space applies equally to the requirements of natural environments and engineering requirements.
To understand a design, we need to understand how it maps into niche space. This is complicated by the fact that mappings between design space and niche space are not one-to-one: each location in one space may map into many locations in the other. There are also many dimensions along which designs can be matched to niches, such as cost, efficiency, generality, flexibility, and robustness. A further complication is that neither space is really continuous: there are discontinuities, such as the difference between having one wheel and two. In understanding how a design maps into niches, it is critical to understand similar designs and what the implications of their differences are.
The mapping from design space to niche space is often complicated by the process of instantiating designs. In nature, differences between the genotype (which is a somewhat vague specification) and phenotype of an organism effectively result in related but different designs. These initial designs are further differentiated as the organism develops during its lifetime. Rather than being problematic, variations on a generic design produced by instantiation allow an evolutionary search across design space and niche space. However, in a complex changing environment the niche an instance of a design occupies is very unlikely to ever be one of its optimal niches. Likewise, a niche is unlikely ever to be occupied by a design which is optimal for it. Given the variety of relations between design and niche space, defining optimality is itself difficult.
In an artificial system, there is often a corresponding difference between the design at an abstract level and that of the actual implementation. For instance, each implementation of a C compiler has its own characteristics despite adherence to a general specification. As with natural systems, an iterative approach may be required to find designs which map into certain niches. As designers, we can benefit even when a theory results in an inappropriate design, as analysis of its failings may produce better understanding. Given the complexity of design space and niche space, there is no such thing as the `right' design for something like a mind.
Top-down approaches work from requirements to designs. Bottom-up approaches work from the mechanisms implementations afford to see what designs and requirements they support. Top-down, bottom-up and middle-out approaches are all useful, and can be complementary. The work to date within the project has been primarily top-down, partly middle-out, and broad but shallow.
The use of the design-based approach is one of the primary features of the project and it has seen a broad application. The approach has been progressively elaborated in recent years (Sloman 1994, Sloman 1995, Wright, Sloman & Beaudoin 1996, Sloman 1996).
Folk psychology provides a rich set of terms relating to mental states (e.g. attention, emotion, consciousness, desire, joy ...) which are generally sufficient for casual use. These terms undoubtedly reflect much about the reality of mental states, but their meanings can be quite vague. In a scientific context, vague or inconsistent terminology generates a great deal of unnecessary debate and confusion and must be avoided. These problems are often exacerbated in an interdisciplinary context. Such terms are often left undefined in scientific papers, and when they are defined it is often in a simplistic or arbitrary manner (Read & Sloman 1993). Proposed definitions generate controversy either by being too general (and thus including inappropriate phenomena) or by being too restrictive (and thus excluding valid examples).
It is important to note the distinction between the conceptual space of mechanisms and behaviours to be explained, and how we refer to them. (Read & Sloman 1993)
Studying emotional terminology itself cannot resolve the problem, as the terminology simply does not map onto a coherent conceptual space: the concepts of folk psychology are not detailed enough for design purposes. We need to develop a complete theory of the mechanisms and behaviours involved in order to ground coherent and stable terminology in it. This terminology will need to map across different levels, for example the behavioural, neural and computational levels of (Gray 1990).
The notion that science starts with definitions is quite wrong. The definitions can only come after you have good explanatory theories: for only in terms of the theories will you be able to clearly and unambiguously specify boundaries between different sets of phenomena. (Read & Sloman 1993)
Building a complete theory of emotion is a long term undertaking, and along the way it will be necessary to develop terminology that has explanatory power at one level but does not initially map to other levels. (Read & Sloman 1993) give `insistence' and `urgency' as examples of terminology at this intermediate stage. These terms are grounded in a motive-management architecture and have meaning at the computational and behavioural levels, but do not yet map onto a neural level.
At this point, our understanding of the mechanisms involved in emotion is quite poor, and thus our definitions are provisional and incomplete. This only increases the need for clear definitions which do not exceed the bounds of their theoretical base.
One of the long term goals of the project is to generate appropriately grounded language for describing mental states. Some progress has already been made in this area by generating explicative concepts and terminology grounded in the motive-processing architecture under development (see the glossary at the end of this paper). However, this is only a very small step towards replacing folk psychology's language with one based on a coherent theory of mental states. Developing new concepts and terminology remains one of the ongoing tasks of the project.
The number of designs, or even classes of designs, that will fit (more or less well) into niches in environments such as those found on the earth is truly vast. The entire range of species on this planet, living and extinct, great as it is, constitutes only a small part of the range of possible designs which could inhabit the niches these environments afford. Despite this seemingly endless variety, universal or widely applicable constraints on resource-bound autonomous agents allow us to speculate about what kinds of designs are realistic for sophisticated agents, be they natural or artificial. Some constraints are imposed by a complex dynamic environment, some by limitations inherent in the implementation of the agent, and some simply by the fact that it is cognitively resource-bounded.
Resource-bounded agents must make provisions for handling:
See (Sloman & Croucher 1981, Sloman 1987) for more details.
Beaudoin (1994) provides a conceptual analysis of goals as control structures, and a provisional taxonomy of control states. Goals are complex structures which include a `core' which is ``a representation of a possible state of affairs towards which the agent has a motivational attitude''. Goals also have other attributes which an agent may need to generate in order to assess them and make decisions about them.
The conceptual structure of goals in tabular form from Beaudoin (1994) is reproduced in Table 1.
Table 1: The conceptual structure of goals.

  Attribute type   Attribute name
  -----------------------------------------------------
  Essence          Goal descriptor
                   Urgency (there is more than one kind)
  Decision         Commitment status
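The attributes above can be pictured as a simple record structure. The following Python sketch is purely illustrative: the field names follow the table, and are not drawn from any implementation in the project.

```python
from dataclasses import dataclass, field

# Illustrative sketch of a goal structure in the spirit of Beaudoin (1994):
# a motivational "core" (the goal descriptor, representing a possible state
# of affairs) plus further attributes the agent may need to generate when
# assessing the goal and deciding about it. All names here are our own.
@dataclass
class Goal:
    descriptor: str                               # core: a possible state of affairs
    urgency: dict = field(default_factory=dict)   # more than one kind of urgency
    commitment_status: str = "undecided"          # decision attribute

g = Goal("baby-2 is out of the ditch")
g.urgency["deadline"] = 5.0        # e.g. a time-based kind of urgency
g.commitment_status = "adopted"
```

The point of the sketch is only that a goal is richer than a bare proposition: it carries assessment and decision attributes alongside its core descriptor.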
Any resource-bounded agent needs to manage its goal-directed behaviour to optimise the use of limited resources. Goal management has three main functions (derived from Wright 1993):
In addition, goal management has auxiliary functions relating to gathering information on attributes of goals and assessing situations. Goal management is also part of the control of action and meta-management.
The following factors, drawn from a discussion in Beaudoin (1994), may help suggest why the control of management processing is very complex.
A broad but shallow motive-processing architecture based on H. A. Simon's ideas (Simon 1967) has been progressively developed using the design-based approach. It is similar to Georgeff's Procedural Reasoning System but improves upon it in several respects, including the addition of asynchronous goal generators, meta-management processes, attentional filtering and a richer representation of goals (Beaudoin 1994, Wright, Sloman & Beaudoin 1996).
Section 4.2 discussed some general requirements and constraints that are likely to hold for any resource-bounded agent. The development of this motive-processing architecture is an attempt to explore designs which can satisfy some of these requirements. Given the range of requirements, this is necessarily done in a broad but shallow way. The architecture makes use of the goal and goal-management structures reported in section 4.3.
A broad but shallow architecture for a complete agent may include:
For a fuller discussion of these components, see (Wright, Sloman and Beaudoin 1996). For a broader discussion of the architecture see (Beaudoin 1994).
Within this architecture, two general categories of processes are distinguished: automatic (pre-attentive) processes, and attentive management processes. Automatic processes, despite potential internal complexity, function simply as condition-action systems: they respond whenever their conditions are met. Attentive management processes, on the other hand, consider alternative actions. This can involve the creation and evaluation of very complex temporary structures as part of planning or evaluation (Sloman 1996). This activity may in turn trigger other processes of either type.
In humans, automatic processes appear to be hard-wired into the brain, which gives them considerable freedom to operate simultaneously. Attentive processes, however, behave as if they had to share resources, and this restricts the amount of parallelism they are capable of. As an example, skilled drivers find it easy to carry on a conversation whilst keeping their car on the road; learner drivers typically find this quite difficult, as they have not yet automated the tasks involved. In addition, automatic processes seem to work more quickly than attentive ones: compare the novice and the expert reader.
Automatic and attentive processes deal with motivators: information structures which may establish control states that produce internal or external action. Goals, desires and intentions are all types of motivators. The pre-attentive and attentive processes that generate, activate, or reactivate motivators and set their insistence levels are called motive generactivators (Beaudoin 1994). Motive generactivation may occur in response to asynchronous internal and external events.
Because motivators can be generated by automatic processes, they may make unpredictable demands on attentional resources. Sometimes an individual's attentional resources are fixed on an important and urgent problem and should not be interrupted. This implies the need for an interrupt filtering mechanism to keep undesirable distractions to a minimum. The filter must not be an absolute barrier, as new distractors are sometimes more urgent and important than current activities; it must therefore have a variable threshold. The ability of a motivator to get through this filter is called its insistence, a notion which combines qualities of urgency and importance. Insistence levels must be set heuristically, since a thorough evaluation is beyond the capacities of an automatic process, and this implies that inappropriate insistence levels will sometimes be assigned to motivators. When a motivator with an inappropriately high insistence level consistently and repeatedly succeeds in getting past the filter despite management decisions to disregard it, the system is in a perturbant state (see section 4.7). Note, however, that an uncontrollable distractor is not always regarded as undesirable, as in thrilling situations entered voluntarily (e.g. riding a roller coaster, watching a horror movie, bungee jumping).
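A minimal sketch may make the filtering idea concrete. This is our own construction in Python, not project code: the class names, the particular insistence heuristic, and all the numbers are arbitrary illustrations of a variable-threshold filter.

```python
class AttentionFilter:
    # Variable-threshold interrupt filter: the threshold is raised while
    # attention is locked on an important, urgent task, and lowered again
    # afterwards. Only sufficiently insistent motivators get through.
    def __init__(self, threshold=0.5):
        self.threshold = threshold

    def admits(self, insistence):
        return insistence > self.threshold

def heuristic_insistence(urgency, importance):
    # A cheap heuristic combination of urgency and importance: a thorough
    # evaluation is beyond an automatic process, so inappropriate
    # insistence levels will sometimes be assigned.
    return 0.5 * urgency + 0.5 * importance

f = AttentionFilter(threshold=0.8)                    # attention locked on an urgent problem
assert not f.admits(heuristic_insistence(0.4, 0.6))   # mild distraction kept out
assert f.admits(heuristic_insistence(0.95, 0.95))     # highly insistent motivator breaks through
f.threshold = 0.3                                     # problem dealt with; filter lowered
assert f.admits(heuristic_insistence(0.4, 0.6))       # same motivator now admitted
```

A perturbant state, in these terms, would be a motivator whose heuristic insistence keeps exceeding the threshold even after management has decided to disregard it.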
In a resource-bounded agent, management and meta-management stem from the need to make effective use of processing resources, and to resolve conflicts between motivators. Management and meta-management of motivators for sophisticated agents is necessarily complex, and motivators need a rich structure. (Wright, Sloman & Beaudoin 1996) define meta-management processes as ``any goal-directed process whose goal refers to either a management or to a meta-management process''. An agent needs to be able to recognise conditions such as when a plan is taking too long, or is likely to take too long, when it is changing its mind too much, and otherwise manage its motive-management.
It should be noted that such systems may be implemented in many ways (e.g. neurons, neural nets, rule-based systems ... ) and are required by a wide range of agents, for example other animals. However, while we may share aspects of this architecture with other animals, they clearly do not have the same management and meta-management processes as we do.
Architectures of this type have control states which include dispositions to act in certain ways. Some control states have a tendency to diffuse across the system in a process called circulation (Wright, Sloman & Beaudoin 1996). This may have the effect of embedding certain tendencies in the behaviour of a (potentially) great number of mechanisms, which may contribute to the development of a system's personality. However,
We do not claim that this architecture is part of the causal structure of the human mind; rather, it represents an early stage in the iterative search for a deeper and more general architecture, capable of explaining more phenomena. (Wright, Sloman & Beaudoin 1996)
The development of this architecture is one of the primary long term objectives of the project.
The minder scenario provides a focus for exploring requirements and design ideas about autonomous agents. In the scenario, a robot minder is charged with looking after a number of robot babies in a nursery. The nursery consists of several rooms, only one of which the minder can monitor at a time. The babies are mobile and can get into trouble by falling into ditches and in a number of other ways. The minder's task is to keep the babies from harm.
The minder scenario is a simple simulation, in keeping with the project's aims of developing broad but shallow architectures. It would serve little purpose to develop robotic bodies or complex visual processing systems to implement the scenario, although avoiding oversimplification is important.
The scenario can be easily extended to provide a more demanding environment for broad but less shallow control systems. Possible changes (Sloman 1996) include:
The minder scenario has been a useful focus for the design-based approach. To date, it has not been fully implemented, and notably lacks meta-management processes. However, at this stage, implementation is most useful for finding flaws in theories, rather than for evaluating working systems. The work done so far provides a base for future extensions and, as we have seen, the scenario can be tailored to explore new areas as required.
The SIM_AGENT Toolkit is a software package being developed to meet the requirements for sophisticated agents implied by the work of the Cognition and Affect Project. It allows rapid development of simulated interactive multi-agent environments, and is capable of supporting the type of complex motive-processing architecture discussed earlier in this paper. It provides a framework for a) developing classes of agents and objects, b) building sophisticated internal architectures for agents, and c) running multi-agent simulations. The toolkit is not dedicated to particular architectures or ontologies, but rather encourages experimentation and even the use of evolutionary and developmental architectures. Nor does it require agents to use particular mechanisms; it supports hybrid agent architectures incorporating both symbolic and sub-symbolic mechanisms.

The toolkit is written in Pop-11 and makes use of the Objectclass and Poprulebase libraries. In addition to its use at the University of Birmingham, it is currently being used at DRA Malvern to develop a training simulation for army commanders. A demonstration of the toolkit is available.
One of the main functions of the toolkit is to schedule processing time for objects within the simulated environment in terms of discrete time-slices, or `cycles'. If the user is interested in exploring the effects of processing-resource limitations, the amount of processing each agent can perform during a cycle can be restricted. Restrictions can be set for classes of agents or for specific rulesets within agents. Parallelism in the simulation is approximated by dividing each cycle into two stages: a) sensation (including receiving messages) and internal processing, and b) acting (including sending messages). Each stage in a) is completed by all agents before b) occurs for any of them, thus to the agents it appears that they are all running simultaneously. At the end of each cycle a user-definable interface procedure is called, allowing the toolkit to update other components (such as a graphical display) on a regular basis.
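The two-stage cycle can be sketched as follows. This is a Python analogue for illustration only: the actual toolkit is written in Pop-11, and all class, method and key names here are invented.

```python
# Python analogue of the SIM_AGENT two-stage cycle (illustrative names only).
class SimObject:
    # Stage (a): sense the world (including incoming messages) and think.
    def sense_and_think(self, world):
        pass
    # Stage (b): act on the world (including sending messages).
    def act(self, world):
        pass

class Counter(SimObject):
    # A trivial agent: records the shared tick, then reports what it saw.
    def __init__(self):
        self.seen = None
    def sense_and_think(self, world):
        self.seen = world["tick"]
    def act(self, world):
        world["acted"].append(self.seen)

def run_cycle(world, agents, on_cycle_end=None):
    # Stage (a) completes for ALL agents before stage (b) begins for any,
    # so every agent perceives the same snapshot of the environment and
    # parallelism is approximated.
    for a in agents:
        a.sense_and_think(world)
    for a in agents:
        a.act(world)
    world["tick"] += 1
    if on_cycle_end:            # user-definable interface hook, e.g. to
        on_cycle_end(world)     # refresh a graphical display each cycle

world = {"tick": 0, "acted": []}
agents = [Counter(), Counter()]
run_cycle(world, agents)
assert world["acted"] == [0, 0]   # both agents saw the same pre-act snapshot
```

The essential design point is the strict separation of the two stages: no agent's actions can affect another agent's perceptions within the same cycle.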
The toolkit makes use of the Objectclass library, a CLOS-like object oriented extension to Pop-11 which supports classes, inheritance, mixins, methods and more. SIM_AGENT provides two default Objectclass classes with minimal features: sim_object and its subclass, sim_agent. sim_object is the top level class and provides basic default features for all objects in the simulation. sim_agent extends sim_object by providing facilities for sending messages to, and receiving messages from, other agents in the simulation through a special toolkit mechanism. Objectclass makes it easy to extend these default classes to provide more specific types of agents and objects, without the need to modify any toolkit code.
Poprulebase, a forward chaining production system interpreter, is used as the default mechanism for the internal processing done by agents in the simulation. Poprulebase runs sets of condition-action rules on databases and allows multiple sets of rules and databases to be combined in forming the internal architecture of a class of agents. Databases are used by agents as various types of memory store (sensory buffers, working and long-term memory), while rules are used to update them and control internal and external actions. Many rules may fire in response to a set of conditions, allowing the simulation of parallel automatic processes. Furthermore, rules may switch between databases, as in SOAR (Newell 1990), and may transfer control to other rulesets.
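A minimal forward-chaining interpreter in this spirit can be sketched in Python. This is illustrative only: Poprulebase itself is a Pop-11 library with its own rule syntax, and the rule format below is our own.

```python
# Toy forward-chaining interpreter in the spirit of Poprulebase
# (illustrative Python, not Poprulebase itself). Rules match items in a
# database and may add or delete items; every rule whose condition holds
# fires, approximating parallel automatic processes.
def run_rules(rules, db):
    fired = True
    while fired:
        fired = False
        for condition, action in rules:
            # Snapshot the matches first, since actions mutate the database.
            matches = [item for item in db if condition(item)]
            for item in matches:
                if action(item, db):   # action reports whether it changed db
                    fired = True
    return db

# Example rule: promote an item from a "sensory buffer" to "memory".
def promote(item, db):
    db.remove(item)
    db.append(("memory", item[1]))
    return True

rules = [(lambda item: item[0] == "seen", promote)]
db = [("seen", "baby-in-ditch")]
run_rules(rules, db)
assert db == [("memory", "baby-in-ditch")]
```

In the toolkit, several such rulesets and databases are combined to form an agent's internal architecture, with different databases playing the roles of sensory buffers, working memory and long-term memory.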
In addition to the application of condition-action rules to databases, Poprulebase supports prb_filter, an interface to other types of mechanism, including sub-symbolic mechanisms such as neural nets (Poli & Brayshaw 1994, help prb_filter). However, objects simulated with the toolkit are not required to use Poprulebase. It can be supplemented or replaced by other mechanisms, such as neural networks, genetic algorithms, learning classifier systems or human controllers. In those instances where it is desirable, facilities exist in Pop-11 to incorporate routines written in C.
While the toolkit is very flexible in terms of the ontologies it supports, the combination of Objectclass and Poprulebase allows rapid development of complex structures. This makes SIM_AGENT a useful tool for experimenting with a wide range of agent architectures and environments.
The toolkit does not specify formats for internal databases and sensory messages, or for inter-agent messages. This leaves the user free to adopt a convention such as KQML or to develop custom formats.
Despite being designed to provide a flexible, rapid prototyping environment rather than a real-time controller, it may be possible to link embodied agents into simulations using the toolkit. This has yet to be attempted.
The flexibility of the toolkit in terms of environmental ontologies, agent architectures and communications protocols means that it provides less structure than other less general agent simulation systems. However, this limitation may eventually be overcome by the development of libraries implementing more specific ontologies.
While the toolkit is still under development, it can already greatly facilitate the implementation of simulated agents and environments.
A first attempt has been made to use the design-based approach to find architectures which can explain emotional phenomena connected with grief (Wright, Sloman & Beaudoin 1996). It is believed that an account based on a complete architecture will explain aspects of grief in humans better than the narrower accounts that have been attempted in the past. This analysis does not attempt to explain the phenomenology of emotional states in terms of architecture; it focuses instead on their functional roles.
Initially, a personal report of grief was analysed to illustrate surface phenomena of grieving and to provide interpretations of them based on the motive-management architecture developed earlier. Attachment theory and the concept of perturbance were then used to provide an architecturally grounded design-based account of grief.
Bowlby's attachment theory describes the creation and destruction of emotional bonds between individuals, and is used by clinical psychologists to account for mourning.
In terms of the motive-management architecture, an attachment structure towards an individual is described as a highly distributed collection of information stores and active components of some prominence. When strong motivations towards an individual are formed, control states pertaining to the individual are created. These control states tend to diffuse throughout the architecture via a process known as circulation. As control states circulate, they are transformed and become embedded throughout the architecture. As a result of the strength of the motivators involved, information relating to the individual has a greater tendency to be accumulated, creating a considerable network of facts relating to the individual. The distributed nature of the resultant attachment structure makes detachment (the reverse process) a difficult and long-term prospect.
A perturbant state is one in which a motivator with high insistence consistently succeeds in getting past the attentional filter and disrupting attentive processing, despite management decisions to postpone or reject it. Thus, perturbance involves the loss of some degree of self-control. Perturbance is a side effect of the necessary use of heuristics in assigning insistence levels to motivators; it is thus not functional behaviour (or at least not telic behaviour), and no mechanism is dedicated to producing it.
One aspect of grief is a perturbant reaction to the loss of someone towards whom the griever has an attachment structure. Grief continues until detachment has been completed, but manifestations of grief may be interrupted as a result of changes in the attention filter level when other important motivators take control of attention. This leaves the griever in a state of partial self-control.
The analysis of the causes of perturbance in mourning can be extended somewhat by taking into account the fact that the loss of a loved one disrupts more than an attachment structure towards them. The griever has suffered a profoundly unsettling experience in the loss of a loved one, an event which they are powerless to prevent or to rectify. The griever is thus confronted with a combination of the following:
These factors will have to be accounted for in terms of the motive-management architecture.
This analysis of grief is admittedly incomplete, and has yet to be implemented in a simulation. Indeed, work has not progressed enough, at this point, to implement it. However, the analysis is significant in that it proposes a coherent (partial) account of a high level human phenomenon. Such work is bound to be of interest to clinicians, and we may hope that it will help lead to a design-based role in clinical psychology. This architecturally grounded account is a step towards replacing our folk psychology with a deeper understanding.
Information processing theories of mind have often avoided dealing with motivation and emotion. Some believe that `hot cognition' cannot be explained in information processing terms. However, attempts to build complete agents force these issues to be addressed. Wright (1996) investigates the relationship between motivation and prototype emotional states using Holland's learning classifier system as an example.
Emotional states can be classified as either dispositional states or occurrent states. Dispositional states are latent states, such as the brittleness of a wine glass. Occurrent states, in contrast, are those that are actually happening, such as a wine glass breaking (Wright 1996). Occurrent states have an intentional aspect, that is, they refer to something (e.g. being angry about something, being happy about something). Occurrent states also have a non-intentional aspect referred to as their valency. Valency is either pleasurable or displeasurable and has some degree of intensity, but does not itself refer to anything. Valency can be subdivided into two categories: physiological and cognitive. Physiological valency is located in the body (as with an itch or hunger). Cognitive valency is not linked to any part of the body (as with pride or grief).
Classifier systems are domain-independent learning systems which learn to apply appropriate condition-action rules (classifiers) in response to internal and external stimuli (messages). In a classifier system, the credit-assignment problem is overcome using a bucket-brigade algorithm which implements a circulation of value. Value is a measure of a classifier's ability to buy processing power and thus a measure of its importance to the system.
Classifier systems have both intentional (i.e. messages and classifiers) and non-intentional (i.e. value) components. The circulation of value is a control signal which assigns value to intentional components of the system. The circulation of value is adaptive in that it distributes value in a way that leads to more successful interactions with the environment.
As value is non-intentional, it is domain-independent; it ranks classifiers regardless of what they refer to. Similarly, valency in emotional states provides a domain-independent rating of motivators. The accumulation of valency in control states is adaptive just as the circulation of value in classifier systems is adaptive.
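The circulation of value just described can be illustrated with a toy bucket brigade. This is a deliberately minimal sketch in the spirit of Holland's algorithm, not a faithful classifier system: the bid fraction, payoff values and fixed activation chain are all assumptions made for the example.

```python
def bucket_brigade(chain, reward, bid_fraction=0.1, rounds=50):
    """chain: initial strengths of classifiers that fire in sequence
    each round. Each classifier pays a bid (a fraction of its strength)
    to its predecessor, and the environment pays the final classifier
    an external reward. Over repeated rounds, value propagates
    backwards, so early 'stage-setting' classifiers accrue strength."""
    strengths = list(chain)
    for _ in range(rounds):
        for i in range(len(strengths)):
            bid = bid_fraction * strengths[i]
            strengths[i] -= bid
            if i > 0:
                strengths[i - 1] += bid   # pay the classifier that set the stage
        strengths[-1] += reward           # environment rewards the last actor
    return strengths

final = bucket_brigade([10.0, 10.0, 10.0], reward=2.0)
```

After enough rounds every classifier in the rewarded chain has gained strength, with those nearest the reward gaining most: value circulates backwards through the chain, rating classifiers without regard to what they refer to.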
Further, the circulation of value in a classifier system exhibits prototypic valenced affective states. With the addition of self-monitoring mechanisms, the system would have the basis for non-intentional `feelings' to complement its intentional classifiers and messages. In both cases, the non-intentional aspect of emotion results from a self-monitored circulation of value.
The example of the learning classifier system suggests that hot cognition can be accommodated by existing AI techniques. Integrating the concept of valency into the architecture of autonomous agents is an important step towards developing a general theory of cognition.
The work in the following areas is described only briefly here, as (Wright 1993), to which the interested reader is referred, provides a summary of it.
The ability to learn is a vital capacity of a sophisticated agent. Combinatorial explosion in a complex world rules out a brute-force approach to associative learning, so heuristic selection processes are required to prune the `learning space'. In (Shing 1994), a nursemaid agent for the minder scenario was implemented using a rule-based model of classical conditioning, allowing various selection processes to be compared. In tasks that were relatively complicated in terms of the number of non-predictive stimuli involved, adding temporal and novelty selection heuristics to a basic lookup-table architecture greatly improved learning performance.
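The effect of such pruning heuristics can be illustrated with a small sketch. This is not Shing's actual model; the heuristics, parameters and stimulus names below are illustrative assumptions.

```python
def candidate_stimuli(history, outcome_time, window=2):
    """Temporal heuristic: only stimuli occurring shortly before the
    outcome are considered as possible predictors of it."""
    return {s for t, s in history if outcome_time - window <= t < outcome_time}

def novel(stimulus, baseline, threshold=3):
    """Novelty heuristic: ignore stimuli so frequent in the background
    that they carry little predictive information."""
    return baseline.get(stimulus, 0) < threshold

def learn(history, outcome_time, baseline):
    """Brute force would associate the outcome with every stimulus ever
    observed; the heuristics prune the learning space first."""
    return {s for s in candidate_stimuli(history, outcome_time)
            if novel(s, baseline)}

# (time, stimulus) observations before an outcome at time 4
history = [(0, "hum"), (1, "bell"), (2, "hum"), (3, "light"), (3, "hum")]
baseline = {"hum": 10}     # constant background noise
predictors = learn(history, outcome_time=4, baseline=baseline)
# only 'light' survives: 'bell' is too old, 'hum' too familiar
```

With many non-predictive stimuli, the unpruned candidate set grows combinatorially while the pruned set stays small, which is the effect reported above.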
Systemic design is a form of the design-based approach which has a particular concern with the evolutionary history of the architecture of a species. This approach may help to account for redundant features in organisms which a designer would not choose to include, and it may rule out certain designs that would otherwise match a niche.
Gray's model of emotion (Gray 1990) posits three mammalian behavioural systems which may be considered at three functional levels. (Read 1994) refines Gray's model by developing a computational model and then implementing it. As part of this work, an architecture for a broad but shallow simulated rat is to be tested in various experimental scenarios duplicating work on real rats. It is hoped that this will provide insight into areas that Gray's work has neglected, including the computational level of the three behavioural systems and aspects of information representation.
A rough distinction may be drawn between event-driven (reactive) and goal-driven (classical) AI planning systems. Patterson's AIMAE is an architecture meant to combine the classical and reactive approaches and to interleave planning and execution.
It is hoped that an implementation of the architecture will exhibit a number of interesting behaviours, including automatic behaviours (those for which no management function comes into play), action slips (inappropriate automatic behaviour), opportunity taking (when different goals can be satisfied by the same action) and learning (the adoption of strategies that prove successful over repeated trials).
AIMAE is to be implemented and tested under the minder scenario.
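The interleaving of goal-driven planning with event-driven reaction can be sketched as a simple control loop. Since AIMAE's design is only outlined here, everything below is hypothetical: the function names, the priority given to the reactive layer, and the replan-on-exhaustion policy are assumptions made for illustration.

```python
def run(goals, plan_for, react, world, max_steps=20):
    """Each step: (1) let the event-driven (reactive) layer act if one
    of its rules fires; otherwise (2) take the next step of a
    deliberative plan, replanning whenever the plan is exhausted."""
    plan, trace = [], []
    for _ in range(max_steps):
        action = react(world)                # reactive layer has priority
        if action is None:
            if not plan:
                if not goals:
                    break                    # nothing left to pursue
                plan = plan_for(goals[0], world)   # goal-driven layer
            action = plan.pop(0)
        world = action(world)                # execution interleaved with planning
        trace.append(action.__name__)
        goals = [g for g in goals if not g(world)]   # drop satisfied goals
    return trace, world

# Toy domain: a two-step plan achieves the single goal.
def open_fridge(w): return {**w, "fridge": "open"}
def grab_snack(w): return {**w, "fed": True}
def is_fed(w): return w.get("fed", False)

trace, final = run(
    goals=[is_fed],
    plan_for=lambda goal, w: [open_fridge, grab_snack],
    react=lambda w: None,    # no reactive rule fires in this quiet run
    world={},
)
```

Because reactions are checked before every plan step, an interrupting event can preempt the plan mid-execution, which is where behaviours such as action slips and opportunity taking would arise.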
The Cognition and Affect home page lists the following current issues that members of the project are addressing:
In addition, there are some further questions which have been raised during the course of the project, including:
Research has progressively extended the scope of the project beyond its original goals. Such work includes:
Progress has been made in developing designs for motive-processing architectures, exploring architectural requirements for human-like agents and in understanding the design-based approach. All of these areas remain long-term research commitments.
An analysis of grief has been conducted as a first attempt to provide an architecturally grounded account of emotional states. As our understanding develops, this type of analysis can certainly be extended to cover far more phenomena. Reactions from the broader scientific community to this proposed account are awaited.
Initial work on linking low level processes in learning to valency has been conducted. This suggests that hot cognition can be incorporated into information processing theories of mind, and has far-reaching implications for work within the project and beyond.
In all areas, implementation lags behind theory, despite the (ongoing) development of the SIM_AGENT toolkit. Effective interfaces and teaching tools have yet to be developed.
My thanks to Aaron Sloman and Ian Wright for their comments on a draft of this paper.
Bates, J., Loyall, B., & Reilly, W. S. (1991). Broad agents. Paper presented at the AAAI spring symposium on integrated intelligent architectures, Stanford, CA. (Available in SIGART Bulletin, 2(4), Aug. 1991, pp 38-40.)
Beaudoin, L. & Sloman, A. (1993). A study of motive processing and attention, in A. Sloman, D. Hogg, G. Humphreys, D. Partridge, A. Ramsay (eds) Prospects for Artificial Intelligence, IOS Press, pp 229-238.
Beaudoin, L. (1994). Goal Processing in Autonomous Agents. PhD thesis, School of Computer Science, The University of Birmingham.
Davis, D. (1995). Towards a Formalism for Cognitive Agents under the Minder Scenario. Cognitive Science Technical Report CSRP-95-13. School of Computer Science, the University of Birmingham.
Gray, J. A. (1990). Brain systems that mediate both emotion and cognition. Cognition and Emotion, 4(3):269-288.
Holland, J. H. (1995). Hidden Order: How Adaptation Builds Complexity. Helix Books.
Newell, A. (1990). Unified Theories of Cognition. Harvard University Press.
Poli, R., & Brayshaw, M., (1995). A Hybrid Trainable Rule-based System. Cognitive Science Technical Report CSRP-95-4. School of Computer Science, the University of Birmingham.
Read, T., & Sloman, A. (1993). The terminological pitfalls of studying emotion. Paper presented at the Workshop on Architectures Underlying Motivation and Emotion - WAUME 93, Birmingham.
Read, T. (1994). Applying Systemic Design to the study of emotion. Presented at AICS94, Dublin, Ireland. Available from the Cognition and Affect Project FTP Archive:
Ryle, G. (1949). The Concept of Mind, Hutchinson.
Shing, E. (1994). Computational Constraints for Associative Learning. Available from the Cognition and Affect Project FTP Archive:
Simon, H. A. (1967). Motivational and emotional controls of cognition. Psychological Review, 74, 29-39.
Sloman, A., & Croucher, M. (1981). Why robots will have emotions. In Proceedings 7th International Joint Conference on Artificial Intelligence, Vancouver, 1981. Also available as Cognitive Science Research Paper 176, Sussex University.
Sloman, A. (1987). Motives, mechanisms and emotions. Cognition and Emotion, 1(3), 217-234. Reprinted in M. A. Boden (ed), The Philosophy of Artificial Intelligence, ``Oxford Readings in Philosophy'' Series, Oxford University Press, 231-247, 1990.
Sloman, A. (1992a). Towards an Information Processing Theory of Emotions. Notes for Cognitive Science seminar, October 1992.
Sloman, A. (1992b). What are the phenomena to be explained? Seminar notes for the Attention and Affect Project, 1992.
Sloman, A., & Humphreys, G. (1992). The Attention and Affect Project. Appendix to JCI proposal.
Sloman, A. (1993a). Introduction: Prospects for AI as the general science of intelligence. In A. Sloman, D. Hogg, G. Humphreys, A. Ramsay, & D. Partridge (Ed)., Prospects for Artificial Intelligence (Proceedings AISB-93). Birmingham: IOS.
Sloman, A. (1993b). The Mind as a Control System. In C. Hookway and D. Peterson (eds), Philosophy and the Cognitive Sciences, Cambridge University Press, pp 69-110.
Sloman, A. (1994). Explorations in Design Space, in Proceedings ECAI, August 1994.
Sloman, A., Beaudoin, L., & Wright, I. (1994). Computational Modelling of Motive-Management Processes. ``Poster'' prepared for the Conference of the International Society for Research in Emotions, Cambridge, July 1994. Revised version in Proceedings ISRE94, Nico Frijda (Ed.), ISRE Publications.
Sloman, A. (1995). Exploring design space and niche space. Invited talk for 5th Scandinavian Conference on AI, Trondheim, May 1995. In Proceedings 5th Scandinavian Conference on AI, IOS Press, Amsterdam.
Sloman, A., & Poli, R. (1995). SIM_AGENT: A toolkit for exploring agent designs. In Intelligent Agents Vol II(ATAL-95). Eds. Mike Wooldridge, Joerg Mueller, Milind Tambe. Springer-Verlag pp 392-407.
Sloman, A. (1996). What sort of control system is able to have a personality? To appear in Proceedings Workshop on Designing personalities for synthetic actors. Vienna, June 1995. Robert Trappl (Ed).
Wright, I. P., (1993). A Summary of the Attention and Affect Project. Internal report, Dec 1993.
Wright, I. P., (1996). Reinforcement Learning and Animat Emotions. Unpublished research document, Cognitive Science Research Centre, University of Birmingham.
Wright, I. P., Sloman, A., & Beaudoin, L. P. (1996). Towards a design-based analysis of emotional episodes. To appear, with commentaries, in Philosophy Psychiatry and Psychology.
Further useful information may be found in `help sim_agent', `help prb_filter' (Aaron Sloman, June 1995) and other related documents in the Birmingham Poplog system.
The translation was initiated by Timothy Kovacs on Mon Sep 30 14:52:46 BST 1996