models_of_social_intelligence [2009-01-19 11:37] – davegriffiths
======Models of Social Intelligence======

Part of [[Project Lirec]]: Notes (and bits pasted) from Deliverable 5.1 (needs tidying up)

An understanding of social intelligence is needed for creating agents capable of long term companionship with humans.

=====Social Intelligence in Humans=====

What is social intelligence?

  * Begins in childhood with attachment and love for caregivers
  * Relationships grow and fade as time passes

A long term artificial companion needs to apply the same strategies for creating and maintaining relationships if it's to be effective.

=====Emotions and personality=====

Emotions and personality need to be expressed for people to form an empathic relationship with an agent.

  * Disney animation
  * The theory is that actions lead to feelings, which lead to personality

Companions need to have a consistent personality to maintain long term relationships.

=====Theory of mind=====

See also: [[Theory of Mind in Robotics]]

A theory of mind in a robot is a model of a user's (or another agent's) emotional state, gained by measuring affective state from sensor input. In this way an agent can form an empathic relationship with a human. Such a model must include a user's:
  * Emotional state
  * Personality
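The user model above can be sketched in code. This is a minimal illustration only — the class and field names are assumptions for this page, not part of the deliverable — showing an agent's estimate of a user's state being updated from affective sensor readings:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a theory-of-mind user model: the agent's own
# estimate of a user's emotional state and personality, refined over
# time from sensor-derived affect readings.
@dataclass
class UserModel:
    emotional_state: dict = field(default_factory=dict)  # e.g. {"joy": 0.4}
    personality: dict = field(default_factory=dict)      # e.g. {"extraversion": 0.7}

    def update_from_sensors(self, affect_readings: dict, rate: float = 0.3) -> None:
        """Blend new affect readings into the current emotional-state estimate."""
        for emotion, value in affect_readings.items():
            old = self.emotional_state.get(emotion, 0.0)
            self.emotional_state[emotion] = (1 - rate) * old + rate * value

model = UserModel()
model.update_from_sensors({"joy": 1.0})
print(model.emotional_state["joy"])  # 0.3
```

The blending rate is an arbitrary smoothing choice; a real system would weight readings by sensor confidence.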
=====Memory and Adaptation=====

A companion needs to adapt and evolve based on past experiences.
======Socially intelligent agents======

Human intelligence includes:

  * Efficient problem solving
  * Social and emotional intelligence

Socially intelligent agents need to have the appearance of social intelligence. This social information is part of a robot's …

**The expectation of social interaction with an agent depends on its form: a human form is more natural, but also creates very high expectations - expectations which cannot currently be met.**
=====Castelfranchi's model of social action=====
  - Goal oriented agents: the action the agent takes in the world (in other words, its behaviour) is aimed at producing some result.
  - Social structures and organisation: …

Establishing relationships might be easy - for instance, people can have relationships of a sort with inanimate objects - but maintaining and developing a relationship is harder.

Bickmore & Picard - relationship maintenance strategies, based on human-human relationship maintenance:
  * …
=====Social Power=====

  * The influence of a social agent on a person
  * A social agent could be another person, a social role, a norm, or a group
  * Social power can be resisted
  * The amount of influence of a social power can be controlled
=====Social Attraction=====

  * Affective ties of one person or agent with others
  * But they are often balanced

=====Heider's Balance Theory=====

http://
The theory states that people tend to avoid unstable cognitive configurations. For instance, if agent A knows and likes agent B, and both are aware of and have positive feelings towards object C, then there is balance; similarly if both have negative feelings towards object C. If they disagree, there is imbalance, which can be resolved from agent A's perspective by one of three steps:

  * Agent A switches to disliking agent B
  * Agent A changes its mind and agrees with agent B about object C
  * Agent A attempts to change agent B's mind, in order to make it agree about object C

The last option takes more work than the other two, and so there is also a concept of cost for maintaining social balance.
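The balance rule itself is tiny: a P-O-X triad is balanced when the product of the three sentiment signs is positive. A minimal sketch (illustrative only, not from the deliverable):

```python
# Heider balance check for an A-B-C triad. Each sentiment is +1
# (positive feeling) or -1 (negative feeling); the configuration is
# balanced when the product of the three signs is positive.
def is_balanced(a_likes_b: int, a_likes_c: int, b_likes_c: int) -> bool:
    return a_likes_b * a_likes_c * b_likes_c > 0

print(is_balanced(+1, +1, +1))  # True: A likes B, both like C
print(is_balanced(+1, +1, -1))  # False: A likes B, but they disagree about C
print(is_balanced(+1, -1, -1))  # True: A likes B, both dislike C
```

Each of the three resolution steps listed above corresponds to flipping one of the three signs until the product becomes positive.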
+ | |||
+ | ======Examples of Socially Intelligent Agents====== | ||
+ | |||
+ | Embodied conversational agents - these use face to face conversations in an attempt to simplify human/ | ||
+ | |||
+ | **REA**: The automated real estate agent: http:// | ||
+ | Attempts to sell people houses, keeps a track of //task talk// and //small talk//, and can interleave them. | ||
+ | |||
+ | **Laura & FitTrack**: An exercise advisor designed to explore long term relationships. Tested with 100 people, users with the relationship building features added were "more likely" | ||
+ | |||
+ | **Avatar Arena**: Multi characters interacting with each other on the user's behalf: | ||
+ | http:// | ||
+ | |||
+ | **2SGD Model**: Agents for interacting with groups: | ||
+ | http:// | ||
+ | |||
+ | =======Social Robots====== | ||
+ | |||
+ | Sociable physical robots | ||
+ | |||
+ | **Kismet**: Animatronic head which responds to people' | ||
+ | |||
+ | **Valerie the roboceptionist**: | ||
+ | |||
======Emotions and personality in Social Agents======

(Moffat, 1997) states, “personality is the name we give to (an agent’s) reaction tendencies that are consistent over situations and time”.

=====Computational models of emotions=====

**EMA**: http://

**FAtiMA**: Used in FearNot! (is this part of ION?)
+ | |||
+ | * Events + Emotions -> Appraisal | ||
+ | * Appraisal based on | ||
+ | * Goals | ||
+ | * Standards | ||
+ | * Attitudes | ||
+ | * Emotions have intensity - attenuated in time | ||
+ | * Mood - overall state of emotions | ||
+ | * Mood also affects emotions | ||
+ | |||
+ | * Emotional threshold - characters resistence to an emotion type | ||
+ | * Decay rate - How fast an emotion disappears | ||
+ | = personality | ||
+ | |||
+ | * Event or action -> Knowledge base update | ||
+ | * -> emotion -> autobiographic memory | ||
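The threshold/decay idea in the notes above can be sketched as follows. This is a loose illustration of the mechanism, not FAtiMA's actual code or API — the class names and the mood formula are assumptions:

```python
# Sketch of threshold + decay as "personality": appraisals only become
# emotions above a character's threshold, intensities attenuate each
# tick, and mood (the overall state of the emotions) feeds back into
# appraisal.
class Emotion:
    def __init__(self, kind: str, intensity: float):
        self.kind = kind
        self.intensity = intensity

class Character:
    def __init__(self, threshold: float, decay_rate: float):
        self.threshold = threshold    # resistance to an emotion type
        self.decay_rate = decay_rate  # how fast an emotion disappears
        self.active: list[Emotion] = []
        self.mood = 0.0               # overall state of the emotions

    def appraise(self, kind: str, raw_intensity: float) -> None:
        """Mood shifts the felt intensity; only values above threshold register."""
        felt = raw_intensity + self.mood
        if felt > self.threshold:
            self.active.append(Emotion(kind, felt))

    def tick(self) -> None:
        """Attenuate emotions over time and recompute the mood."""
        for e in self.active:
            e.intensity -= self.decay_rate
        self.active = [e for e in self.active if e.intensity > 0]
        self.mood = 0.1 * sum(e.intensity for e in self.active)

stoic = Character(threshold=0.5, decay_rate=0.2)
stoic.appraise("joy", 0.4)  # below threshold: no emotion registers
stoic.appraise("joy", 0.8)  # above threshold: an emotion is created
print(len(stoic.active))    # 1
```

Varying only `threshold` and `decay_rate` yields different "personalities" from identical events, which is the point the `= personality` note is making.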
**PSI**: http://

Needs/drives:
  * food
  * water
  * physical integrity
  * sexuality
  * affiliation (social need)
  * certainty and competence
Needs are given weights. Reactive level/ …

More info: http://
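The weighted-needs idea can be sketched as simple drive selection. This is an illustration of the principle only — the need names come from the list above, but the weights and urgency formula are assumptions, not PSI's actual equations:

```python
# PSI-style drive selection sketch: each need has a satisfaction level
# (0 = fully deprived, 1 = fully satisfied) and a weight; the most
# urgent need is the one with the largest weighted deficit.
def most_urgent_need(levels: dict, weights: dict) -> str:
    return max(levels, key=lambda n: weights[n] * (1.0 - levels[n]))

levels = {"food": 0.9, "water": 0.4, "affiliation": 0.6}
weights = {"food": 1.0, "water": 1.0, "affiliation": 0.8}
print(most_urgent_need(levels, weights))  # water
```

The winning need would then drive behaviour selection at the reactive level, with weights acting as one of the personality parameters.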