Abstract
1. Introduction
To date, a fundamental facet of embodiment has, on the whole, been neglected in autonomous mobile robot research and in artificial intelligence as a whole: that of social embodiment. Embodiment has been interpreted as the physical existence of an entity, i.e. a robot, in a physical environment (by robot it is understood to mean a physical body with actuator and perceptor functionality). By virtue of its physical presence, whether static or dynamic, there is interaction between the robot and the environment. At a fundamental level, this can be the physical space occupied by the robot, extending to the robot's ability to move, change, and perceive the environment. When a second robot is added, a definite element of social interaction is introduced, even without any direct inter-robot communication. The perception of another robot's motions, whether as the abstract notion of a moving obstacle or as the clear distinction of another individual robot, influences the observing robot's behaviour. The social implications of two robots coexisting in an environment add another dimension to the complexity of each robot's perception of that environment, which cannot be ignored.
In the first instance, where the first robot perceives a moving obstacle in some simplistic way, the overlap between the concept of physical embodiment and the "social" connotations of one robot's influence over another becomes apparent. While this abstract notion of "communication" is not necessarily explicit in either its intention or application, it nevertheless constitutes a "degree of sociality", though not in the sense of a structured interaction. Work on emergent robotic systems has embraced such inter-robot dynamics without explicit social representations or communication (Mitsumoto et al, 1995; Fukuda et al, 1989; Cai et al, 1995; Holland & Deneubourg, 2000; see Cao et al, 1997 for an overview). These systems have been characterised as exhibiting "emergent intelligence" and comprise many simple, independent, individual robot agents, each acting according to a purely local perception of its environment and following a set of simple rules to actuate local environmental changes.
When looking at explicit social interaction between two or more robots capable of engaging in some degree of communication, a whole new set of problems arises. Worden (1993) describes the structure of such social domains as consisting of the following:
The structure and interrelations between the components are crucial.
The set of social situations and possible causal relations between situations are systematic sets.
The set of possible situations is very large.
An agent's social milieu involves discrete, identified individuals who tend to be in discrete relations to one another.
The interval between social cause and effect may have extended time frames.
Generalisations across individuals are important (i.e. standard social responses and interpretations).
There is a chaining of cause and effect: if A causes B, and B causes C, then effectively, A causes C.
While some points are open to discussion, an idea of the issues to be addressed emerges. The following sections start by briefly reviewing work to date on the development of strong social functionality in multi-agent systems research. The social functionality required for a team of autonomous mobile robots undertaking explicit social tasks in real-world problem domains is then presented. Section 6 then demonstrates how such social features are realised on a team of socially capable robots.
2. Sociality in Multi-Agent Systems
Multi-Agent Systems research has looked to address prevalent issues in the social interaction of a team of agents and as such provides us with a rich source of ideas. The inherent notion of multiple agents required to interact in a coherent, reproducible manner is a core issue. Newell's unified theory of cognition (Newell, 1990) identifies a separate level above the rational level (the human equivalent of the Knowledge Level) for dealing with social contexts. He terms this the Social Band, which serves to define an individual's behaviour in a social context. Newell acknowledges that his Social Band is not clearly defined, but that it should only contain knowledge that is specifically social in nature. Newell states that "there is no way for a social group to assemble all the information relevant to a given goal, much less integrate it" and that "there is no way for a social group to act as a single body of knowledge".
Jennings and Campos (1997) introduce the Social Level Hypothesis to provide an abstract categorisation of those aspects of multi-agent system behaviour that are inherently social in nature, i.e. co-operation, co-ordination, conflicts, and competition. The Social Level sits above Newell's Knowledge Level (KL) (Newell, 1982). The Social Level Hypothesis states:
Jennings and Campos discuss the Social Level in the context of social responsibilities, which leads to the formulation of the Principle of Social Rationality.
2.1. Principle of Rationality and of Social Rationality
It is believed that by explicitly drawing out a few key concepts that underpin the behaviour of a large class of problem-solving agents, it is possible to understand and predict agent behaviour. Newell (1982) proposed that agent problem-solving behaviour could be characterised through the Principle of Rationality:
Jennings and Campos (1997) discuss the implications of this within a social context and have ascertained that for a number of social actions, where there is a conflict of interest between the member's interests and those of the society itself, Newell's Principle of Rationality is flawed. They propose the Principle of Social Rationality as:
Justifications for the extension of Newell's original proposal to the Principle of Social Rationality are based on the balance between a member's individual interests and those of the society, or vice versa. Jennings and Campos continue by defining the minimum set of concepts necessary for a responsible society to obtain the behaviour specified by the Principle of Social Rationality:
Acquaintance: the notion that the society contains other members
Influence: the notion that members can affect one another
Rights and duties: the notions that a member can expect certain things from others in the society and that certain things are expected of it by the society
Based on these fundamental attributes, an explicitly social community of agents can therefore exist. In order to achieve these attributes, the “tools” facilitating social interaction such as communication are required.
The current state of the art in the realisation of high-level social behaviour has not extended far beyond a conceptual interpretation and understanding of Newell's Social Level and Jennings and Campos's Principle of Social Rationality. Given the limitations of robotic hardware systems, which until recently dictated the extent to which complex control methodologies could be realised, little comprehensive research has been undertaken to date on the implications of social embodiment in the robotic domain (Duffy, 2000). The wealth of anthropomorphic social analogies in the pursuit of the intelligent robot has therefore seen limited exploitation.
3. Social Power
In a paper entitled "Social Power: A Point Missed in Multi-Agent, DAI and HCI" (DAI – Distributed Artificial Intelligence, HCI – Human Computer Interface), Castelfranchi (1990) suggests that there has been a "serious lack of realism in Multi-Agent and interaction studies" where "sociality of the agents is merely postulated rather than explained in terms of their dependence".
In addressing this issue, Castelfranchi proposes a distinction between Distributed Artificial Intelligence and what he terms the “Social Simulation Approach”, i.e. the difference between the research concerned with intelligence, problem solving and system architecture where society is “used as a metaphor”, and, on the other hand, work that deals specifically with social interaction. This corresponds to the distinction between the two fundamental issues present in social scenarios:
Castelfranchi uses alternative terminology, referring to the social empowerment of a robot as the Sociality Problem and to the task decomposition issue as the Adoption Problem, and deals primarily with the second from the perspective of "social power", i.e. the "power" one agent has over another in a social environment (Castelfranchi, 1990). Conte et al., within the Social Behaviour Simulation Project, also maintain a distinction between goal adoption and cooperation (Conte, Miceli & Castelfranchi, 1991).
This paper proposes that the two cannot be isolated and postulates that, in order to develop an artificially intelligent, physically situated robotic entity, embracing a strong notion of social embodiment is a necessary criterion. A robot must have the capability to be social in conjunction with the ability to solve social problems. While this is not a new notion, it has not been developed in the context of robotics and artificial intelligence as an all-encompassing, coherent approach.
As highlighted by Castelfranchi, the fundamental issue appears to be the void between the social goal and the social robot, i.e. between breaking down the global task into some set of subtasks to be completed and the social empowerment of a robot. Multi-Agent Systems research has proposed numerous strategies and models for the task decomposition problem (Sekiyama & Fukuda, 1999; Durfee & Lesser, 1987; Lesser et al, 1998; Cohen & Levesque, 1997; Grosz, 1996; Stone & Veloso, 1999) but little has been done in developing true agent sociality in multi-agent systems. When the issue is the social interaction of a number of autonomous robot entities, serious considerations arise, i.e. resource bounding, incorrect perceptions and numerous other attributes inherent in real-world applications (see Duffy, 2000 for a discussion). The next sections therefore aim to take intentional multi-robot systems a stage further by developing the "social robot".
4. Towards Sociality in Autonomous Mobile Robotics
Panzarasa et al. (1999) present a conceptual model for representing the inherently social implications of multi-agent systems based on an agent's individual mental states. They propose the use of social mental shaping via roles and social relationships and give examples of some of the ways in which social relationships can drive an agent's behaviour by influencing its mental state:
Authority: hierarchical social status.
Helping disposition: i.e. altruism.
Trust: based on the confidence an agent has for another.
Persuasion: i.e. through a process of argumentation.
In order to achieve such relatively high-level social functionality, a set of “tools” is required. While Panzarasa et al. propose that roles and social relationships “may complement and augment bare individual mental attitudes”, this paper proposes that the concept of identity is necessary in conjunction with character, stereotypes and roles in order to achieve social relationships between robots.
Kinny et al. (1996) propose the notion of the "internal" and "external" aspects of an agent, where the internal features of the agent comprise its beliefs, desires, and intentions, while the external features relate to features of the social group, i.e. the roles and relationships within the system. An important distinction arises when considering embodied systems over software-based virtual environments. In this work, the notion of internal and external relates to the attributes of a single embodied agent or robot, analogous to Shoham's notion of capabilities in multi-agent systems (Shoham, 1993). The internal attributes of the robot are analogous to Kinny et al.'s internal features. While Kinny et al.'s external aspect relates to the social interaction, i.e. the services an agent provides, its interactions, and the syntax and semantics for the communication between agents, this notion is here developed further with the addition of stereotypes, roles and characters as the social features.
There is an important difference between virtual or software-based agents and embodied systems. In real-world robotics there is an extra dimension of complexity: physical manifestation. In order to encompass the added complexity of dealing with embodied agents (both physically and socially), the original internal and external features of an agent proposed by Kinny et al. are insufficient, as they do not address the added complexity of the physical environment. A discussion of strong physical and social embodiment can be found in (Duffy, 2000; Duffy & Joue, 2000). It follows that, in order to address the issue of embodiment, there are two distinct robot attributes that are local and particular to each robot within the social system:
Internal Attributes: Beliefs, desires, intentions (based on Rao & Georgeff, 1989), i.e. name, character, mental capabilities, the robot's knowledge of self, experiences, a priori and learned knowledge, processing capabilities (i.e. algorithms, DSP)
External Attributes: the physical presence of the agent in an environment; its actuator and perceptor capabilities (i.e. a robot equipped with extra sensory equipment compared to another); the physical features of the robot, i.e. physical dimensions.
And one global system attribute which subsumes the social functionality of the collective of robots:
Social Attributes: Identity, character, stereotype, roles.
The Social Attributes are more abstract social features that exist to facilitate the interaction between robots. While some pertain to the robot itself, they nevertheless constitute attributes existing in the social system that are necessary for the social functionality of the system. These attributes are developed in greater detail in the following sections.
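As an illustrative sketch only (the paper does not prescribe an implementation, and all names below are hypothetical), the three attribute groupings might be captured as simple data structures:

```python
from dataclasses import dataclass, field

@dataclass
class InternalAttributes:
    """Local, dynamic attributes: knowledge, experience, processing."""
    name: str
    beliefs: set = field(default_factory=set)     # a priori and learned knowledge
    algorithms: set = field(default_factory=set)  # processing capabilities, e.g. DSP routines

@dataclass(frozen=True)
class ExternalAttributes:
    """Physical presence: sensors, actuators, dimensions (assumed static)."""
    sensors: frozenset
    actuators: frozenset
    dimensions_m: tuple  # (length, width, height)

@dataclass
class SocialAttributes:
    """System-level features that facilitate interaction between robots."""
    identity: str
    stereotype: str
    roles: list = field(default_factory=list)
```

The split mirrors the text: external attributes are frozen (static hardware), internal and social attributes are mutable (they evolve with experience and interaction).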
5. Social Functionality
Complexity issues arise when the complete problem encompasses multiple agents with different social tasks that are embodied in a complex real-world scenario. This necessitates the development of suitable formalisms to facilitate the allocation of social tasks without becoming overwhelmed by the added complexity of the robot's environment, both physical and social. The formalisms are therefore developed from each robot's perspective, not from a complete system's perspective. It is the fundamental notion of embodiment that necessitates this approach over that proposed by Kinny et al (1996).
5.1. Identity
When social interaction exists, each element of the social group must be able to be differentiated from the others. The robots require a sense of themselves as distinct and autonomous individuals obliged to interact with others in a social environment, i.e. they require an identity.
Suppose robot
where
Given that this research deals primarily with four Nomad Scout II robots named Aengus, Bodan, Bui and Caer, this can be written more specifically as:
Identity and embodiment are inherently linked. The manifestation of mental capabilities and knowledge in a physical body is the ascription of a concrete, singular identity to a "brain". It is the synthesis of brain and body, with the social implications of co-existing in an environment, where identity is the foundation stone for social interaction. The role of embodiment should not be trivialised (Sharkey & Ziemke, 1998; Clark, 1997; Duffy et al, 2000). The addition of the physical world to primarily software-based agent research set in virtual environments can be perceived as an added degree of complexity. On the contrary, the physical world acts as the base on which social concepts are grounded. It constrains the highly complex notions of multiple identities and accountability.
In a social environment, knowing the identity of those with whom one communicates is essential for understanding and evaluating an interaction. Contrary to the disembodied world of virtual multi-agent communities, where identity can be ambiguous (Donald, 1994), real-world applications of robotic agents have identities inherently based on their physical existence. Many of the basic cues regarding personality and social role that we are accustomed to are prevalent in the physical world: statically, the body's physical construction; dynamically, its motion behaviours.
As mentioned above, the physical body provides a compelling and convenient basis for the definition of identity. Though the "brain" may be complex and mutable over time and circumstance, the body provides the stabilising anchor. There is therefore a one-to-one mapping between a robot and an identity. While alternative social interaction spaces, i.e. virtual communities (Donald, 1994), may facilitate multiple personas, real-world social spaces are constrained by the physical embodiment of the "brain" in a single robot platform. Identity cues are primarily based on the physical attributes of a robot, i.e. its actuator and perceptor capabilities. A complete concept of
Smithers (1995) discusses
Following from this, a definition of
This approach subsumes the mere fact that the entity (robot) physically exists in the environment and has therefore, by its simple physical presence, changed its environment, which induces a form of physical identity.
Each robot possessing a particular identity, and therefore developing and exhibiting that particular robot's capabilities in a social environment (and promoting it to specialise in the attributes relevant to its identity), presents a very interesting problem. It is proposed that each autonomous robot has an individual
The identity of a robot is made up of its internal and external attributes. Examples of
The
where
Thus
which gives:
Shoham (1993) assumes that capabilities are fixed. Intuitively, the use of internal and external attributes dictate that some are static and others are dynamic.
Here it is understood that the external features of the robot (i.e. sensors, actuators) are static, while the internal features are indicative of the knowledge, experience, and processing capabilities (i.e. smoothing or filtering algorithms) that a robot has either obtained over time or has been initially provided with, and are therefore dynamic.

Identity is the set of a robot's internal and external attributes
A robot's knowledge of its attributes therefore allows a sufficient degree of introspection to facilitate and maintain the development of social relationships between groups of robot entities. When a robot is “aware” of itself, it can explicitly communicate knowledge of self to others in its social environment.
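The identity-as-attribute-set idea above can be sketched operationally. This is a minimal illustration only, with invented attribute names, assuming set-valued attributes as in the definitions above:

```python
# Internal attributes: dynamic knowledge and processing capabilities.
internal = {"arena_map", "wall_following_algorithm"}
# External attributes: static physical sensing/actuation capabilities.
external = {"sonar_ring", "camera", "differential_drive"}

# Identity is the set of the robot's internal and external attributes.
identity = internal | external

def introspect(identity, attribute):
    """Self-'awareness': the robot can answer queries about its own
    attributes, and hence communicate knowledge of self to others."""
    return attribute in identity
```

The union makes the dual nature explicit: changing an internal attribute (learning) changes the identity set, while the external contribution stays fixed with the hardware.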
As the
5.2. Character
An important criterion for the development of a social environment is the attribution of mental states to the internal representation one robot has for another. Robot
The use of the terms "identity" and "character" in mobile robotics, or more generally embedded systems, is so far limited or non-existent (see Duffy, 2000 for a review). This work highlights that a robot's identity and its perceived character should not be confused. An illustration is where two physically similar people (i.e. identical twins) have two different identities. While having two similar bodies, each can display distinctly different behaviours, which can manifest in different trains of thought, views, beliefs, ideals and temperament, experiences and memories. This highlights how a person's identity is influenced not only by their appearance but also by their behaviour. Likewise, a robot's character depends on observation. The identity of robot
In this work,
Character deals with the fundamental attributes an agent or robot is

The perceived identity (or
Each robot builds a list of representations or

Robot Bodan's "view" of the characters of robots Aengus, Bodan, etc. at time
Ideally there is a one-to-one mapping between a perceived character and that particular robot's identity, where each robot would know all other robots “completely”, i.e. it would have complete knowledge of all other robots in its social environment. This would in fact rarely happen and would also be unnecessarily complex. Character is a subset of the total set of internal and external attributes that comprise identity:
The use of heterogeneous robots also facilitates unique perceived characters in social environments. A robot equipped with a vision system may become the “eyes” of the group where the physical construction of the robot plays an important part in defining its function in the social group. The “role” a robot plays is discussed in greater detail in Section 5.4.1.
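One way to picture the character-identity relation above is as a set inclusion: a character is an observer-dependent subset of the observed robot's identity. The sketch below is illustrative only, with invented attribute names:

```python
# Aengus's identity: the full set of its internal and external attributes.
aengus_identity = {"camera", "sonar_ring", "arena_map", "path_planner"}

# Bodan's perceived character of Aengus: only what Bodan has observed or
# been told so far -- rarely the complete identity.
bodan_char_of_aengus = {"camera", "sonar_ring"}

def is_valid_character(character, identity):
    """A character must always be a subset of the robot's actual identity."""
    return character <= identity
```

Because Bodan's view contains "camera", Aengus can plausibly be cast as the "eyes" of the group even though Bodan's knowledge of Aengus is incomplete.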
The use of
5.3. Stereotypical Representations
While a robot's
While character representations are initially independent of a global task or set of subtasks to be undertaken, the notion of stereotypical representations is proposed in order to bootstrap the initial stages of social interactions and reduce the complexity of maintaining such internal representations. The perceived character of a robot is fundamentally based on a set of stereotypical representations available to each robot and developed through communication, collaboration and experience. At time
Each robot knows its name, its associated stereotype and the attributed internal and external attributes associated with this stereotype. This allows for the introduction of a robot with a new stereotype to the group. As it knows all details pertinent to itself, it can communicate this to others in its social environment who will then learn the new stereotype and corresponding internal and external attributes.
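This bootstrapping step can be sketched as a shared stereotype registry that a newcomer extends by introducing itself. The stereotype names and attributes below are hypothetical, not the paper's actual implementation:

```python
# Each stereotype maps to the internal and external attributes any robot
# of that stereotype is assumed to possess.
stereotype_registry = {
    "scout": {"sonar_ring", "differential_drive", "wander"},
    "observer": {"camera", "take_snapshot"},
}

def introduce(robot_name, stereotype, attributes, registry):
    """A robot entering the group announces its stereotype; if the
    stereotype is new, the others learn it and its attributes."""
    if stereotype not in registry:
        registry[stereotype] = set(attributes)
    return registry[stereotype]
```

A robot with a known stereotype needs to announce only its name and stereotype; only a genuinely new stereotype requires transmitting the full attribute set.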
Each robot has one stereotype associated with it. The stereotype of the robot
There are a fixed number of stereotypes in the social environment:
A stereotype comprises a defined finite list of
The stereotype of robot
Each robot in the social group has knowledge of the possible stereotypes that may exist in its social environment and all details pertaining to each stereotype in that list. A robot therefore, knowing the stereotype associated to a particular robot it encounters, will also know all the internal and external attributes associated with the stereotype of that robot. Each robot in the social group may see differing characters of a particular robot, but all are fundamentally based on that robot's

A defined subset of internal and external attributes constitutes the stereotype that robot
The introductory contact between two robots is therefore initially bootstrapped by the stereotype that each robot is associated with, which facilitates the development of the internal representation of other robots in the social environment and their corresponding functionality regarding tasks to be performed by the group. A degree of recognition is preferable to self-identification as the conversation can use simple “words”, then jump to a more complex level. If there is no recognition (or categorisation based on stereotype), then robot
The personal, physical and social knowledge presented in the previous sections is stored in the form of beliefs within each robot's belief set, a feature of BDI-based architectures (Rao & Georgeff, 1989; Jennings, 1993; O'Hare & Jennings, 1996; Duffy et al, 1999). Examples of this are shown in Section 7.
The use of stereotypes is indicative of exactly
5.4 The Global Task
The social requirements of a social task dictate that the task must be allocated to a collective of robots. In order to have a social group of collaborating robots perform such a global social task, the subdivision of a social task into suitable subtasks that can be performed by the cooperating collective of robots is required. These subtasks must be allocated relative to each robot's abilities to perform each subtask thereby dictating that all robots should be altruistic in nature. Commitments are then required by all robots to collectively work towards the global objective or goal.
When the global task has been decomposed into a set of subtasks, the issue is how to allocate these subtasks to appropriate robots within the social group. The notion of “role” is introduced to facilitate this subtask allocation. The task decomposition issue as a topic of research is beyond the scope of the work presented and is therefore not discussed in great detail (see Haddadi, 1995 for a formal description). This complex problem constitutes a major field of research in both multi-agent systems research and artificial intelligence in general. As the allocation of a social task to a collective of socially competent robots necessitates both task decomposition and the development of existing technologies to encompass the social implications of such a problem, the notion of “role” is proposed and discussed in suitable detail.
5.4.1 Role
A robot must undertake a
Here the term “expected” is used in the context of the required social behaviour the robot has to undertake or perform.
A role is primarily task driven, i.e. the role the robot must undertake to complete a task is analogous to the role an actor must undertake in a performance so that the play will be performed.
While the mapping of a role to a robot is dependent on the robot's capabilities, it may only require a subset of its capabilities to complete. A role is task orientated and will therefore change with different tasks, different times, and different social colleagues. Each robot will therefore have a

The roles that robot
The global task is decomposed into subtasks, which are grouped together as
A
where
□ - always
Π - a plan library
ψ - a plan
In recapping, robot
Note that this does not guarantee that after
By extending Haddadi's use of the term “Contributes” to denote how a plan
where
E - optionally
This constitutes the
In order to delegate the tasks between robots, the following definitions are required:
Robot
Haddadi refers to this delegation as the potential_for_commitment or PfC and continues in discussing pre-commitment and commitment with respect to
Robot
When robot
Haddadi's potential_for_commitment is practically realised in this work by the concept of robot
A collective of socially capable robots adopting a social task is strongly founded on their capacity to communicate and on a language of communication that adheres to the following formal structure (see Rooney, 2000 and Duffy et al, 1999 for a description of the communication language Teánga, which has the required expressive power for such BDI-based approaches):
Robot
Robot
Robot
For further information, and in particular for detailed formalisms on the axioms of Pre-commitments and Commitments, see Haddadi (1995).
The pre-condition of mutual altruism subsumes the issue of a robot being committed to another robot in undertaking a subtask and is thus a necessary initial feature of the social group.
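Haddadi's treatment is a modal-logic formalism; purely as a loose operational sketch (not her semantics, and with invented names), delegation under the good-faith and mutual-altruism assumptions might look like:

```python
class Robot:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)
        self.commitments = []

    def can_achieve(self, subtask):
        # Good faith: a robot only commits to what it believes it can do.
        return subtask in self.capabilities

def delegate(subtask, robots):
    """Ask each (altruistic) robot in turn whether it has the potential
    for commitment to the subtask; commit the first that does."""
    for robot in robots:
        if robot.can_achieve(subtask):
            robot.commitments.append(subtask)  # pre-commitment -> commitment
            return robot
    return None  # no robot in the group can adopt the subtask
```

Mutual altruism appears here as the absence of any refusal step: a capable robot always accepts, so delegation reduces to a capability check.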
A robot may be assigned any number of
When the agents are obliged to play different
As physical attributes (external) of a robot are assumed static and absolute (i.e. the hardware configuration) and the internal attributes are more dynamic in nature (i.e. environmental knowledge), the
In a group of homogeneous robots, where physical construction is similar and as such negates the influence of external attributes on the role allocation process, the internal attributes influence the allocation of roles. The evolution of a robot's internal states over time, based on experience, learning and knowledge accumulation, can also increase the importance of the
It is important to note that the decomposition of the global task into subtasks is dependent on the stereotype list, not on each robot's individual plan library. The global social task is not decomposed dependent on conditions unique to any one robot but rather the conditions prevalent to the social group as a whole.
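The stereotype-driven allocation described above can be given a minimal sketch. Role and stereotype names are hypothetical; the paper's actual mechanism sits inside a BDI framework:

```python
def allocate_roles(roles, robot_stereotypes, stereotype_registry):
    """Assign each role to a robot whose stereotype implies the role's
    required attributes. Decomposition depends on the stereotype list,
    never on any one robot's private plan library."""
    allocation = {}
    for role, required in roles.items():
        for robot, stereotype in robot_stereotypes.items():
            if required <= stereotype_registry[stereotype]:
                allocation[role] = robot
                break
    return allocation
```

Note that the matching is done against stereotype attributes, reflecting the point that global task decomposition depends on conditions prevalent to the group, not on any individual robot.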
5.4.2 The Subtasks
A subtask constitutes a plan of elemental behaviours that can be executed by the appropriate robot. Each subtask may have any number of behaviours (i.e. follow_wall, avoid_obstacle, take_snapshot), in any order, with possible repetitions of behaviours. These behaviours are dynamic in nature and may be overridden by reflex behaviours in emergency situations. Duffy (2000) develops this feature in more detail by presenting an architecture with sufficient functionality to support the concepts proposed here.
Each subtask
Any given subtask may contain any behaviour in any order with possible repetitions of behaviours.
The behaviours are temporally ordered, ensuring that some behaviours are only initiated when appropriate others have been completed.

Each subtask comprises a subset of behaviours that are stored in the behaviour library.
A behaviour constitutes a mapping between a sensory stimulus and actuator functionality, but may on occasion be simply either sensory information or actuator motion. All tasks are undertaken via a behavioural hierarchy (see Duffy, 2000).
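The temporal ordering of behaviours within a subtask can be sketched as a simple sequential executor (illustrative only; the behaviour names are hypothetical):

```python
def run_subtask(behaviours, execute):
    """Execute a subtask's behaviours in strict temporal order: each
    behaviour is initiated only after the previous one completes.
    Repetition of behaviours within the list is permitted."""
    for behaviour in behaviours:
        execute(behaviour)

# A subtask may repeat behaviours, e.g. wall-follow, snapshot, wall-follow:
trace = []
run_subtask(["follow_wall", "take_snapshot", "follow_wall"], trace.append)
```

In the full system a reflex layer would be able to interrupt this sequence in an emergency; the sketch captures only the default temporal ordering.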
5.5 The Whole Picture
The concepts of identity, character, stereotypes, and roles proposed in this work have been developed as a complete and integrated framework to facilitate the development of a social community of robots with suitable functionality to complete required social tasks.
In returning to the minimum set of concepts necessary for social behaviour and a society as defined by the Principle of Social Rationality (Jennings & Campos, 1997), the following objectives have been undertaken:
Defining an independent primitive for each agent, i.e. its identity.
Defining an independent primitive for one agent's model of another agent, i.e. its character.
Description of task decomposition notions such as roles, analogous to (Panzarasa, Norman & Jennings, 1999; Kinny et al, 1996).
Assessing what “capabilities” exist in the cooperative, i.e. a global list of the attributes available to achieve the global task, i.e. stereotype listing.
A mapping between the robots and the tasks to be undertaken (i.e. which robot undertakes which role based on its stereotype association).
The system objectives address the notions of acquaintance, influence, and rights and duties in a social collective of embodied robotic entities. The first two are new approaches to multi-robot control methodologies. This work proposes that the definition and development of these primitives facilitates the development of a complex social structure between a collective of socially capable robotic entities functioning in real-world environments. This not only addresses the issue of developing from the conceptual notion of the Principle of Social Rationality, but also seeks to apply this to the real world domain.
5.5.1. The Conditions
The following set of conditions and pre-requisites are imposed on the system in order for a collective of autonomous robots to complete a global task:
The robots are altruistic in nature: it is assumed that all robots in the social group will undertake to perform whatever tasks are required of them so that the group will achieve an explicit social goal required of the group.
Good faith: it is assumed that agents commit only to what they believe they are capable of. This also means that once a robot commits to something, it will undertake to achieve it.
The robot knows its own attributes and correspondingly its identity, i.e. knowledge of its external (physical) and internal state.
Handshaking is initially performed by each robot with any other robots in a group and by any robot entering an existing group to all other robots in the group. It is a requirement of a new robot to “introduce” itself and its identity to all others.
The global task must be divided into realisable subtasks and allocated to appropriate robots, equipped both internally and externally, to achieve the task.
Each robot should have a degree of autonomy in performing its allocated task.
Communication between robots should be possible in order to update each robot's knowledge about other robots' contribution to the global task.
Each robot should communicate its completion of its subtasks.
An evaluation of the result is required to ascertain that the task (both at a global and sub level) has been completed correctly.
Following from Shoham's informal guidelines (Shoham, 1993) on the change and persistence of mental attitudes over time, it is also assumed that beliefs must be adopted or learned and that they persist by default. That is, once a belief is adopted, it remains until it is either explicitly dropped or a contradictory belief is adopted.
While some of the above appear trivial, the explicit clarification of these assumptions provides a foundation on which the system can be analysed. When one knows the way in which the system is required to function, one can assess whether it is performing correctly.
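The belief-persistence assumption can be sketched as follows; the class and method names are hypothetical illustrations, not part of the original system:

```python
# A minimal sketch (hypothetical names) of belief persistence following
# Shoham's guidelines: a belief, once adopted, persists until it is
# explicitly dropped or a contradictory belief about the same predicate
# is adopted.

class BeliefSet:
    def __init__(self):
        self._beliefs = {}  # predicate -> believed value

    def adopt(self, predicate, value):
        # A contradictory belief about the same predicate replaces the old one.
        self._beliefs[predicate] = value

    def drop(self, predicate):
        # Explicitly letting go of a belief.
        self._beliefs.pop(predicate, None)

    def holds(self, predicate):
        # Returns the believed value, or None when no belief is held.
        return self._beliefs.get(predicate)

beliefs = BeliefSet()
beliefs.adopt("hasStereotype", "scout_looker")
beliefs.adopt("hasStereotype", "scout_worker")  # contradictory belief replaces it
assert beliefs.holds("hasStereotype") == "scout_worker"
beliefs.drop("hasStereotype")
assert beliefs.holds("hasStereotype") is None
```

The default here is persistence: no belief ever expires on its own; only an explicit `drop` or a contradictory `adopt` changes the set.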
5.6 Entity-Relationship Diagram for the Social Robot
Fig 7 summarises the relationships among the concepts presented in this paper.

Fig 7. The entity-relationship diagram for the Social Robot.
The diagram shows the complete Social Robot entity relationships that allow the development of robust complex social groups of robots capable of undertaking the global social tasks required of them. This represents the generic components of a social robot, a task, and the resulting mapping between the two. This structure allows snapshots to be taken of the robot's state and the task state, thereby facilitating remote observation and analysis of the dynamic real-time system.
6. Realising the Social Robot
The realisation of a robot capable of explicit complex social behaviour, or indeed of any robot, is validated by the motivation to build a machine that can realise specific goals. It is the goal that defines the problem and, in so doing, defines the solutions required. The goal-oriented approach gives a criterion by which to assess the robot system: how effective the robot has been at achieving its goals. The use of a behaviour-based approach results in the predefinition of the goal to be achieved. See (Brooks 1986, 1991) for the Subsumption Architecture and Duffy (2000) for explicit social behaviours in robotics. In order to achieve rational behaviour, goals are required to ascertain the success of the robot in its function, i.e. “the principle aim of a situated agent is to take action appropriate to its circumstances and goals” (Beer, 2000). When this is extended to the social domain, individual robot goals logically extend to the goals of the social group and each robot's mutual objective in achieving the required social goal. In order to achieve this social functionality, a number of fundamental issues arise. The following elements are proposed as fundamental to the realisation of a social robot capable of realising explicit social goals:
Embodiment: Physical robots situated in a physical environment.
In order for a robot to survive the dynamic complexity of real world environments, a fast reactive process is required:
Reflexes: Fast reactive / reflexive nature to unforeseen or unanticipated events.
As each robot is required to process both a priori knowledge and dynamic information of both physical and social environments, it must also be aware of its own internal and external attributes, and a means to store knowledge and intentionally deliberate on that knowledge:
Deliberation: The computational machinery required to realise complex goals.
For a robot to function as part of a social group where explicit goals are required of that group, it must have the functionality to be social:
Social functionality: The ability of the robot to interact with other robots in its communication space.
Therefore, in order to support the development of social robots, an architecture with sufficient social and intentional functionality is required. As no research to date has dealt with the issues presented here in a coherent systematic whole, the Social Robot Architecture (Duffy, 2000; Duffy et al, 1999; Duffy et al, 1999) was developed to explicitly support strong social embodiment through the implementation of the concepts of identity, character, stereotypes, and roles in conjunction with language and virtual reality visualisation media towards achieving truly social robots. The architecture was developed using the agent development toolkit Agent Factory (O'Hare et al, 1998) and utilised research undertaken on the development of an agent communication language Teánga (Rooney, 2000; Rooney et al, 1999). Real-world experimentation was realised through a team of autonomous mobile Nomad Scout II robots.
The following section demonstrates how strong sociality has been realised through this framework.
7. Implementation
In order for these robots to engage in collaborative problem solving, each robot must develop knowledge about its own identity and build character representations of other robots via the use of stereotypes, thereby gaining an awareness of its social environment in conjunction with its physical environment.
7.1 The Robot's Identity
Each robot has a unique identity. Its name is unique and comprises the concatenation of the robot's text name (i.e. Aengus, Bodan, Bui or Caer), the IP address of the host computer on the robot, and the time it was created.
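The naming scheme just described might be sketched as follows; the function name and separator format are illustrative assumptions, not taken from the original implementation:

```python
# Hypothetical sketch of the unique-identity scheme: the robot's text
# name, the IP address of its host computer, and the creation time,
# concatenated into one identifier. The separator is an assumption.
import time

def make_identity(name, host_ip, created=None):
    created = created if created is not None else int(time.time())
    return f"{name}:{host_ip}:{created}"

# Two robots sharing a text name but on different hosts, or created at
# different times, still receive distinct identities.
print(make_identity("Aengus", "192.168.0.12", created=937512000))
```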
A commitment rule is implemented whereby the robot adopts a belief about having an associated particular stereotype, e.g. scout_looker.
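A minimal sketch of such a rule, in illustrative Python rather than Agent Factory's actual rule syntax, might be:

```python
# Illustrative stand-in for a start-up commitment rule: on creation the
# robot commits to adopting a belief about its own ascribed stereotype.
# Predicate and function names are hypothetical.

def ascribe_stereotype(belief_set, stereotype):
    # Commitment action: adoptBelief(hasStereotype(self, stereotype))
    belief_set["hasStereotype(self)"] = stereotype
    return belief_set

beliefs = ascribe_stereotype({}, "scout_looker")
assert beliefs["hasStereotype(self)"] == "scout_looker"
```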
Based on this ascription, a robot can infer via deductive reasoning which internal and external attributes are associated with itself from its knowledge of stereotypes (examples of which are given in Section 7.2 below). Each robot is thus initialised with this self-knowledge at creation time.
The implementation of a robot agent's identity is realised via the Agent Factory-developed Deliberative Level of the Social Robot Architecture.
7.2 Robot Stereotypes
This section details how the robot stereotypes presented in Section 5.3 are realised. As elaborated previously, each robot has an associated stereotype and each stereotype has associated internal and external attributes. In order that each robot can rapidly build its social model, it is equipped with a data file that acts as a “yellow pages” of the possible stereotypes that it may encounter. Each robot is thus equipped with this stereotype directory a priori.
The internal and external attributes associated with each stereotype are stored as persistent beliefs within each robot's belief set, i.e. they are independent of time. Each robot can access this “yellow pages” at any time and update it accordingly. The use of commitment rules provides the procedure for robots to introduce themselves and build their character representations (see Section 7.3 for the development of character representations). This approach greatly facilitates the development of social models for any collective of socially capable robotic entities.
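Such a “yellow pages” might be sketched as follows; the attribute lists loosely follow figures 8 and 9, but the data structure and naming are assumptions for illustration:

```python
# Hypothetical stereotype directory: each stereotype maps to its internal
# and external attributes, held as time-independent (persistent) beliefs.

STEREOTYPES = {
    "scout_looker": {   # Nomad Scout II with a vision system
        "internal": ["deliberative model", "communication language",
                     "perception: sonar", "perception: odometry",
                     "perception: vision"],
        "external": ["robot type: Nomad Scout II", "actuators: wheels",
                     "perceptors: bumper, sonar, odometry, vision"],
    },
    "scout_worker": {   # basic Nomad Scout II, no vision
        "internal": ["deliberative model", "communication language",
                     "perception: sonar", "perception: odometry"],
        "external": ["robot type: Nomad Scout II", "actuators: wheels",
                     "perceptors: bumper, sonar, odometry"],
    },
}

def attributes_of(stereotype):
    # Any robot can consult the directory at any time, and extend it when
    # a previously unknown stereotype is encountered.
    return STEREOTYPES[stereotype]

assert "perception: vision" in attributes_of("scout_looker")["internal"]
assert "perception: vision" not in attributes_of("scout_worker")["internal"]
```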
The internal and external attributes ascribed to the two stereotypes for the Nomad Scout II family of robots and used within this work are shown in figures 8 and 9. The first corresponds to a Nomad Scout II robot equipped with a vision system (Duffy, 2000), with the following fundamental internal and external features:

Fig 8. The ascribed internal and external attributes for a Nomad Scout II robot equipped with a vision system.

Fig 9. The ascribed internal and external attributes for a basic Nomad Scout II robot.
Internal:
- Deliberative model (i.e. Social Robot Architecture)
- Communication language
- Perception processing algorithms: sonar, odometry, vision

External:
- Robot type
- Processing capabilities
- Actuators: wheels
- Perceptors: bumper, sonar, odometry, vision
This information is stored in each robot's belief set as shown in fig. 8.
As a number of the Nomad Scouts used in this work are not currently equipped with vision systems, these robots have the associated scout_worker stereotype (see fig. 9). This methodology extends to any autonomous mobile robot equipped with the Social Robot Architecture and the necessary wireless communication hardware. Each robot has an ascribed stereotype which it communicates on “meeting” other robots in its social envelope. The issue of social embodiment is therefore explicitly addressed. As the architecture is designed to be hardware independent, the only requirement of a suitable hardware platform is adequate processing capability in the form of a PC, Apple Macintosh, or Unix-based processor. The corresponding stereotypes would subsume the functionality available on such platforms.
The general introduction of additional hardware or software components to a class of robots like the Nomad Scout II simply involves the addition of the corresponding internal or external attribute to the robot's stereotype. The use of stereotypes both facilitates social model building and provides a medium through which task allocation can be achieved. As the structure is in essence relatively simple, its inherent robustness provides a considerable degree of reliability in the development and maintenance of social relationships.
The process whereby the Deliberative Level of the Social Robot Architecture (Duffy et al, 1999; Duffy, 2000) develops these social relationships is through the use of commitment rules. These rules provide the deliberative machinery for the development of the social models.
7.3 Commitment Rules
The Commitment Rules are a fundamental part of the Deliberative Level developed by Agent Factory and constitute the rules through which the robot will commit to a certain action based on active beliefs in its belief set (Duffy, 2000). Examples of such actions are adoptBelief, dropBelief, inform, request, and commit.
As mentioned previously, each robot must introduce itself to any other robot in the group on joining a group. This introduction communicates the robot's associated stereotype.
In order for one robot to develop an internal representation of other robots, it requires each robot's stereotype. Conversely, it must communicate its own stereotype to others.
Each robot can interpret the stereotype it receives from other robots and associate it with that robot's name. Then, based on the commitment rules given above and each robot's knowledge of the internal and external attributes ascribed to each stereotype, a robot can develop beliefs about another's internal and external attributes.
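The introduction step can be sketched as follows; the function and predicate names are hypothetical, and the simplified attribute table stands in for the stereotype “yellow pages”:

```python
# On "meeting", a robot informs another of its stereotype; the receiver
# associates the stereotype with the sender's name and, via its shared
# stereotype knowledge, adopts beliefs about the sender's attributes.

STEREOTYPE_ATTRIBUTES = {        # simplified stand-in for the "yellow pages"
    "scout_looker": ["vision", "sonar", "odometry"],
    "scout_worker": ["sonar", "odometry"],
}

def on_inform_stereotype(beliefs, sender, stereotype):
    # Associate the stereotype with the sender's name...
    beliefs[f"hasStereotype({sender})"] = stereotype
    # ...then deduce the sender's attributes from the stereotype.
    for attr in STEREOTYPE_ATTRIBUTES[stereotype]:
        beliefs[f"hasAttribute({sender}, {attr})"] = True
    return beliefs

# RobotA receives RobotB's introduction:
model_a = on_inform_stereotype({}, "RobotB", "scout_worker")
assert model_a["hasStereotype(RobotB)"] == "scout_worker"
assert model_a["hasAttribute(RobotB, sonar)"] is True
```

The receiver never needs the sender's full attribute list on the wire: the stereotype name alone, combined with shared stereotype knowledge, is enough to populate its social model.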
Thus each robot has a concept of self, i.e. its identity, as well as representations of the other robots in its social environment.
Figures 10 and 11 show the generated social model for two socially communicating robots, RobotA and RobotB. Each robot has developed a notion of self as well as a representation of the other robot.

Fig 10. The belief set of RobotA showing its social model.

Fig 11. The belief set of RobotB showing its social model.
Based on each robot's knowledge about the attributes for particular stereotypes, each robot adopts beliefs about its own attributes by knowing its ascribed stereotype and also builds representations for all other robots in its social environment. Its identity therefore constitutes its complete knowledge about both itself and its social and physical environment. As a robot's knowledge of its environment (both physical and social) develops over time through interaction and communication, each robot's identity also develops and evolves. This evolution is firmly grounded on its associated stereotype in order to provide a consistent benchmark for both itself and how other robots develop character representations of its inherent internal and external attributes.
8. Discussion
The previous sections have addressed the basic issues presented in Section 5, that of embracing both physical and social embodiment within a collective of autonomous mobile robots undertaking explicit social tasks. The conceptual notions discussed in Sections 5 and 6, in the form of a socially empowered autonomous mobile robot, the Social Robot, have been realised and demonstrated through a series of experiments.
These experiments require that a number of cooperating robots explicitly collaborate to perform each social task. While there is the question as to whether many robots can complete a task more efficiently than a single robot, this issue is not addressed in this work; the benefits of a single robot over multiple robots are highly dependent on the tasks undertaken. The objective was rather to implement the Social Robot framework and demonstrate explicit social collaboration.
These experimental tasks demonstrate how the social features presented in Section 5 and illustrated in Section 7 were utilised for complex social tasks undertaken by a team of Social Robots.
9. Conclusion
Physical embodiment is a fundamental component of intelligence. When a physical robot is situated in an environment occupied by other physical robots, social embodiment exists; physical embodiment and social embodiment are inherently linked. While generally construed as extremely complex, the embodiment of such complex notions as social functionality, in conjunction with AI-based robot control strategies on physical autonomous mobile robots, may provide us with an alternative understanding of the fundamentals of an artificially intelligent entity. Returning to the issues raised in Section 2, being “social” implies the existence of interactive relationships. In order for multiple robots to exhibit and maintain robust behaviours in a mutual environment, a degree of social functionality is necessary. A robot with the capacity to interact socially with other robots in this social environment has the obvious advantage of being better equipped to coordinate its behaviour and collaborate with others, and in so doing reinforces its survival in complex, unpredictable real-world scenarios. The tools required to achieve such social interaction have been presented in this work. A “Social Robot” has been realised.
A new issue arises when robots are required to function as part of a social community. As language, in this case the ACL Teánga (Rooney, 2000), provides the medium for social communication, the argument for a more “natural” language-based cognitive process is presented, similar to that found in BDI-based approaches (Rao & Georgeff, 1991; Duffy, 1999; Duffy, 2000). Sensory information must be interpreted and understood at the deliberative level in explicit control methodologies. This enforces a degree of abstraction away from raw sensor voltages and the pixel maps of vision systems towards higher-level symbolic notions.
Similarly, when such a system's social and environmental knowledge is stored in the form of beliefs, a degree of system observability is obtained. Only limited decoding of this knowledge is required for interpretation by a human observer, which facilitates an understanding of the robot's data acquisition and learning processes for testing and analysis. The social and physical representations that each robot builds of its environment become more transparent. An interesting direction for future work is the development of a speech recognition system and voice synthesiser to enable direct social communication with people. The intentional architecture could greatly facilitate the human-robot communication process, as it already functions at an intentional level through its BDI functionality.
The Social Robot framework therefore constitutes a next stage in the evolution of explicit autonomous mobile robot control by extending research in the multi-robot task domain as well as providing a generic approach for physically and socially embodied robots.
