<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Embodied Affect for Real-World Human-Robot Interaction</style></title><secondary-title><style face="normal" font="default" size="100%">Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">HRI '20</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2020</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://dl.acm.org/doi/abs/10.1145/3319502.3374843</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Association for Computing Machinery</style></publisher><pub-location><style face="normal" font="default" size="100%">New York, NY, USA</style></pub-location><pages><style face="normal" font="default" size="100%">459–460</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The potential that robots offer to support humans in multiple aspects of our daily lives is increasingly acknowledged. Despite the clear progress in social robotics and human-robot interaction, the actual realization of this potential still faces numerous scientific and technical challenges, many of them linked to difficulties in dealing with the complexity of the real world. Achieving real-world human-robot interaction requires, on the one hand, taking into account and addressing real-world (e.g., stakeholder's) needs and application areas and, on the other hand, making our robots operational in the real world. 
In this talk, I will address some of the contributions that Embodied Artificial Intelligence can make towards this goal, illustrating my arguments with examples of my and my group's research on HRI using embodied autonomous affective robots in areas such as developmental robotics, healthcare, and computational psychiatry. So far little explored in HRI, Embodied AI, which started as an alternative to &quot;symbolic AI&quot; (a &quot;paradigm change&quot;) in the way to conceive and model the notion of &quot;intelligence&quot; and the interactions of embodied agents with the real world, is highly relevant towards achieving &quot;real-world HRI&quot;, with its emphasis on notions such as autonomy, adaptation, interaction with dynamic environments, sensorimotor loops and coordination, learning from interactions, and more generally, as Rodney Brooks put it, using and exploiting the real world as &quot;its own best model&quot;.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://dl.acm.org/doi/10.1145/3319502.3374843&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Hickton, Luke</style></author><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Abdelkhilick Mohammad</style></author><author><style face="normal" font="default" size="100%">Xin Dong</style></author><author><style face="normal" font="default" size="100%">Matteo Russo</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Expression of Grounded Affect: How Much Emotion Can Arousal Convey?</style></title><secondary-title><style 
face="normal" font="default" size="100%">Proc. 21st Towards Autonomous Robotic Systems Conference  (TAROS2020)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2020</style></year><pub-dates><date><style  face="normal" font="default" size="100%">09/2020</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007/978-3-030-63486-5_26</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><pub-location><style face="normal" font="default" size="100%">Nottingham, UK</style></pub-location><volume><style face="normal" font="default" size="100%">12228</style></volume><pages><style face="normal" font="default" size="100%">234–248</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In this paper we consider how non-humanoid robots can communicate their affective state via bodily forms of communication (kinesics), and the extent to which this influences how humans respond to them. We propose a simple model of grounded affect and kinesic expression before presenting the qualitative findings of an exploratory study (N=9), during which participants were interviewed after watching expressive and non-expressive hexapod robots perform different ‘scenes’. A summary of these interviews is presented and a number of emerging themes are identified and discussed. 
Whilst our findings suggest that the expressive robot did not evoke significantly greater empathy or altruistic intent in humans than the control robot, the expressive robot stimulated greater desire for interaction and was also more likely to be attributed with emotion.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://www.nottingham.ac.uk/conference/fac-eng/taros/proceedings/proceedings.aspx&quot;&gt;Download&lt;/a&gt; (the complete proceedings are available from the link on this page)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ana Tanevska</style></author><author><style face="normal" font="default" size="100%">Francesco Rea</style></author><author><style face="normal" font="default" size="100%">Giulio Sandini</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Alessandra Sciutti</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Eager to Learn vs. Quick to Complain? How a socially adaptive robot architecture performs with different robot personalities</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
2019 IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC 2019)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2019</style></year><pub-dates><date><style  face="normal" font="default" size="100%">10/2019</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://ieeexplore.ieee.org/document/8913903</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Bari, Italy</style></pub-location><pages><style face="normal" font="default" size="100%">365–371</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">A social robot that is aware of our needs and continuously adapts its behaviour to them has the potential to create a complex, personalized, human-like interaction of the kind we are used to having with our peers in our everyday lives. We are interested in exploring how an adaptive architecture would function and personalize to different users when given different initial values of its variables, i.e. when implementing the same adaptive framework with different robot personalities. Would an architecture that learns very quickly outperform a slower but steadier learning profile? 
To further explore this, we propose a cognitive architecture for the humanoid robot iCub supporting adaptability and we attempt to validate its functionality and test different robot profiles.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://ieeexplore.ieee.org/document/8913903&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Imran Khan</style></author><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">The Effects of Affective Social Bonds on the Interactions and Survival of Simulated Agents</style></title><secondary-title><style face="normal" font="default" size="100%">ACII2019 Workshop on Social Emotions, Theories and Models (SE-THEMO)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2019</style></year><pub-dates><date><style  face="normal" font="default" size="100%">09/2019</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://ieeexplore.ieee.org/abstract/document/8925031</style></url></web-urls></urls><pub-location><style face="normal" font="default" size="100%">Cambridge, UK</style></pub-location><pages><style face="normal" font="default" size="100%">374–380</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The formation and maintenance of affective social bonds plays a key role in the well-being of social agents. 
Oxytocin has been correlated with social partner preference, and it is hypothesised to influence prosocial behaviours. In this paper, we investigate the effects of modulating the preference of affective social bond partners through oxytocin during decisions related to food-sharing and grooming, in a society of simulated agents with different dominance ranks. Our results show survival benefits for agents with affective social bonds across a number of groups with different bond combinations. We observe a number of emergent social behaviours and suggest that our results bear some similarity to behaviours observed in biological agents.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://ieeexplore.ieee.org/abstract/document/8925031&quot;&gt;Download&lt;/a&gt; (or &lt;a href=&quot;http://www.emotion-modeling.info/sites/default/files/Khan_et_al_Affective_Social_Bonds_ACII2019_AcceptedVersion.pdf&quot;&gt;Download accepted version&lt;/a&gt;)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Embodied Robot Models for Interdisciplinary Emotion Research</style></title><secondary-title><style face="normal" font="default" size="100%">IEEE Transactions on Affective Computing</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2019</style></year><pub-dates><date><style  face="normal" font="default" size="100%">Early Access</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://ieeexplore.ieee.org/document/8700489/</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><language><style face="normal" font="default" 
size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Due to their complex nature, emotions cannot be properly understood from the perspective of a single discipline. In this paper, I discuss how the use of robots as models is beneficial for interdisciplinary emotion research. Addressing this issue through the lens of my own research, I focus on a critical analysis of embodied robot models of different aspects of emotion, relate them to theories in psychology and neuroscience, and provide representative examples. I discuss concrete ways in which embodied robot models can be used to carry out interdisciplinary emotion research, assessing their contributions: as hypothetical models, and as operational models of specific emotional phenomena, of general emotion principles, and of specific emotion &quot;dimensions&quot;. I conclude by discussing the advantages of using embodied robot models over other models.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://ieeexplore.ieee.org/document/8700489&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Oleari, Elettra</style></author><author><style face="normal" font="default" size="100%">Pozzi, Clara</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Tapus, Adriana</style></author><author><style face="normal" font="default" size="100%">André, Elisabeth</style></author><author><style face="normal" font="default" size="100%">Martin, Jean-Claude</style></author><author><style face="normal" font="default" size="100%">Ferland, 
François</style></author><author><style face="normal" font="default" size="100%">Ammi, Mehdi</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">An Embodied AI Approach to Individual Differences: Supporting Self-Efficacy in Diabetic Children with an Autonomous Robot</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 7th International Conference on Social Robotics (ICSR-2015)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2015</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007%2F978-3-319-25554-5_40</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer International Publishing</style></publisher><pub-location><style face="normal" font="default" size="100%">Paris</style></pub-location><pages><style face="normal" font="default" size="100%">401–410</style></pages><isbn><style face="normal" font="default" size="100%">978-3-319-25553-8</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In this paper we discuss how a motivationally autonomous robot, designed using the principles of embodied AI, provides a suitable approach to address individual differences of children interacting with a robot, without having to explicitly modify the system. 
We do this in the context of two pilot studies using Robin, a robot to support self-confidence in diabetic children.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://link.springer.com/chapter/10.1007%2F978-3-319-25554-5_40&quot;&gt;Download&lt;/a&gt; (or &lt;a href=&quot;http://www.emotion-modeling.info/sites/default/files/2015_Lewis_Canamero_ICSR.pdf&quot;&gt;Download authors' draft&lt;/a&gt;)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lones, John</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Epigenetic Adaptation in Action Selection Environments with Temporal Dynamics</style></title><secondary-title><style face="normal" font="default" size="100%">Advances in Artificial Life, ECAL 2013</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2013</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.mitpressjournals.org/doi/abs/10.1162/978-0-262-31709-2-ch073</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">MIT Press</style></publisher><pages><style face="normal" font="default" size="100%">505–512</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">To operate in dynamic environments, robots must be able to adapt their behaviour to meet the challenges that these pose while being constrained by their physical and computational limitations. 
In this paper we continue our study into using biologically inspired epigenetic adaptation through hormone modulation as a way to accommodate the needed flexibility in robots’ behaviour, focusing on problems of temporal dynamics. We have specifically framed our study in three variants of a dynamic three-resource action selection environment. The challenges posed by these environments include: moving resources, temporal and increasing unavailability of resources, and cyclic changes in type and availability of resources related to cyclic environmental changes.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://www.mitpressjournals.org/doi/abs/10.1162/978-0-262-31709-2-ch073&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lones, John</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Epigenetic Adaptation through Hormone Modulation in Autonomous Robots</style></title><secondary-title><style face="normal" font="default" size="100%">2013 IEEE 3rd Joint International Conference on Development and Learning and Epigenetic Robotics (ICDL-Epirob 2013)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2013</style></year></dates><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Osaka</style></pub-location><pages><style face="normal" font="default" size="100%">1–6</style></pages><isbn><style face="normal" font="default" size="100%">9781479910366</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" 
font="default" size="100%">Epigenetic adaptation provides biological organisms with the ability to adjust their physiology and/or morphology in order to meet some of the challenges posed by their environment. Recent research has suggested that this process may be controlled by hormones. In this paper, we present a model that allows an autonomous robot to develop its systems in accordance with the environment it is currently situated in. Experiments have been undertaken in multiple environments with different challenges and niches to negotiate. We have so far seen encouraging results and the emergence of unique behaviours tailored to exploiting the robot's current environment.</style></abstract><notes><style face="normal" font="default" size="100%">Winner: Best Student Paper
&lt;a href=&quot;https://ieeexplore.ieee.org/document/6652561&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Davila-Ross, Marina</style></author><author><style face="normal" font="default" size="100%">Kim A. Bard</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Eliciting Caregiving Behavior in Dyadic Human-robot Attachment-like Interactions</style></title><secondary-title><style face="normal" font="default" size="100%">ACM Transactions on Interactive Intelligent Systems</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://dl.acm.org/doi/10.1145/2133366.2133369</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">ACM</style></publisher><pub-location><style face="normal" font="default" size="100%">New York, NY</style></pub-location><volume><style face="normal" font="default" size="100%">2</style></volume><pages><style face="normal" font="default" size="100%">3:1–3:24</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We present here the design and applications of an arousal-based model controlling the behavior of a Sony AIBO robot during the exploration of a novel environment: a children's play mat. When the robot experiences too many new perceptions, the increase of arousal triggers calls for attention towards its human caregiver. 
The caregiver can choose either to calm the robot down by providing it with comfort, or to leave the robot to cope with the situation on its own. When the arousal of the robot has decreased, the robot moves on to further explore the play mat. We gathered results from two experiments using this arousal-driven control architecture. In the first setting, we show that such a robotic architecture allows the human caregiver to greatly influence the learning outcomes of the exploration episode, with some similarities to a primary caregiver during early childhood. In a second experiment, we tested how human adults behaved in a similar setup with two different robots: one “needy”, often demanding attention, and one more independent, requesting far less care or assistance. Our results show that human adults recognise each robot profile for what it was designed to be, and behave as would be expected, caring more for the needy robot than for the other. Additionally, the subjects exhibited a preference for the robot we designed as needy, and showed more positive affect whilst interacting with and rating it. This experiment leads us to the conclusion that our architecture and setup succeeded in eliciting positive and caregiving behavior from adults of different age groups and technological backgrounds. 
Finally, the consistency and reactivity of the robot during this dyadic interaction appeared crucial for the enjoyment and engagement of the human partner.</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://dl.acm.org/doi/10.1145/2133366.2133369&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Aryel Beck</style></author><author><style face="normal" font="default" size="100%">Stevens, Brett</style></author><author><style face="normal" font="default" size="100%">Kim A. Bard</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotional Body Language Displayed by Artificial Agents</style></title><secondary-title><style face="normal" font="default" size="100%">ACM Transactions on Interactive Intelligent Systems</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://dl.acm.org/doi/10.1145/2133366.2133368</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">ACM</style></publisher><pub-location><style face="normal" font="default" size="100%">New York, NY</style></pub-location><volume><style face="normal" font="default" size="100%">2</style></volume><pages><style face="normal" font="default" size="100%">2:1–2:29</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Complex and natural social interaction between artificial agents (computer-generated or 
robotic) and humans necessitates the display of rich emotions in order to be believable, socially relevant, and accepted, and to generate the natural emotional responses that humans show in the context of social interaction, such as engagement or empathy. Whereas some robots use faces to display (simplified) emotional expressions, for other robots such as Nao, body language is the best medium available given their inability to convey facial expressions. Displaying emotional body language that can be interpreted whilst interacting with the robot should significantly improve naturalness. This research investigates the creation of an affect space for the generation of emotional body language to be displayed by humanoid robots. To do so, three experiments investigating how emotional body language displayed by agents is interpreted were conducted. The first experiment compared the interpretation of emotional body language displayed by humans and agents. The results showed that emotional body language displayed by an agent or a human is interpreted in a similar way in terms of recognition. Following these results, emotional key poses were extracted from an actor's performances and implemented in a Nao robot. The interpretation of these key poses was validated in a second study where it was found that participants were better than chance at interpreting the key poses displayed. Finally, an affect space was generated by blending key poses and validated in a third study. 
Overall, these experiments confirmed that body language is an appropriate medium for robots to display emotions and suggest that an affect space for body expressions can be used to improve the expressiveness of humanoid robots.</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://dl.acm.org/doi/10.1145/2133366.2133368&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Philippe Gaussier</style></author><author><style face="normal" font="default" size="100%">C Hasson</style></author><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Catherine Pelachaud</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotion et cognition: les robots comme outils et modèles</style></title><secondary-title><style face="normal" font="default" size="100%">Systèmes d'interaction émotionnelle</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2010</style></year></dates><publisher><style face="normal" font="default" size="100%">Lavoisier Hermes Science</style></publisher><pub-location><style face="normal" font="default" size="100%">Paris, France</style></pub-location><isbn><style face="normal" font="default" size="100%">978-2-7462-2115-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><section><style face="normal" font="default" size="100%">9</style></section></record><record><source-app name="Biblio" 
version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">O'Bryne, Claire</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Harold Fellermann</style></author><author><style face="normal" font="default" size="100%">Mark Dörr</style></author><author><style face="normal" font="default" size="100%">Martin M. Hanczyc</style></author><author><style face="normal" font="default" size="100%">Lone Ladegaard Laursen</style></author><author><style face="normal" font="default" size="100%">Sarah Maurer</style></author><author><style face="normal" font="default" size="100%">Daniel Merkle</style></author><author><style face="normal" font="default" size="100%">Pierre-Alain Monnard</style></author><author><style face="normal" font="default" size="100%">Kasper Støy</style></author><author><style face="normal" font="default" size="100%">Steen Rasmussen</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotion in Decisions of Life and Death – Its Role in Brain-Body-Environment Interactions for Predator and Prey</style></title><secondary-title><style face="normal" font="default" size="100%">Artificial Life XII: Proc. 
of the 12th International Conference on the Synthesis and Simulation of Living Systems</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2010</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2010</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://mitpress-request.mit.edu/sites/default/files/titles/alife/0262290758chap141.pdf</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">MIT Press</style></publisher><pub-location><style face="normal" font="default" size="100%">Odense, Denmark</style></pub-location><pages><style face="normal" font="default" size="100%">812–822</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Taking inspiration from the biological world, in our work we are attempting to create and examine artificial predator-prey relationships using two LEGO robots. We do so to explore the possible adaptive value of emotion-like states for action selection in this context. However, we also aim to study and consider these concepts together at different levels of abstraction. For example, in terms of individual agents’ brain-body-environment interactions, as well as the (emergent) predator-prey relationships resulting from these. Here, we discuss some of the background concepts and motivations driving the design of our implementation and experiments. First, we explain why we think the predator-prey relationship is so interesting. Narrowing our focus to emotion-based architectures, we then review the existing literature, comparing different types and highlighting the novel aspects of our own. 
We conclude with our proposed contributions to the literature and thus, ultimately, the design and creation of artificial life.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://mitpress-request.mit.edu/sites/default/files/titles/alife/0262290758chap141.pdf&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Oros, Nicolas</style></author><author><style face="normal" font="default" size="100%">Volker Steuber</style></author><author><style face="normal" font="default" size="100%">Davey, Neil</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Roderick G Adams</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Evolution of Bistable Dynamics in Spiking Neural Controllers for Agents Performing Olfactory Attraction and Aversion</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
19th Annual Computational Neuroscience Meeting (CNS*2010)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2010</style></year><pub-dates><date><style  face="normal" font="default" size="100%">07/2010</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://bmcneurosci.biomedcentral.com/articles/10.1186/1471-2202-11-S1-P92</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">BioMed Central Ltd.</style></publisher><pub-location><style face="normal" font="default" size="100%">San Antonio, TX</style></pub-location><volume><style face="normal" font="default" size="100%">11(Suppl 1)</style></volume><pages><style face="normal" font="default" size="100%">92</style></pages><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lori Malatesta</style></author><author><style face="normal" font="default" size="100%">John C Murray</style></author><author><style face="normal" font="default" size="100%">Amaryllis Raouzaiou</style></author><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Kostas Karpouzis</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Mario I. 
Chacon-M.</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotion Modelling and Facial Affect Recognition in Human-Computer and Human-Robot Interaction</style></title><secondary-title><style face="normal" font="default" size="100%">Affective Computing, Emotion Modelling, Synthesis and Recognition</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2009</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.intechopen.com/books/state_of_the_art_in_face_recognition/emotion_modelling_and_facial_affect_recognition_in_human-computer_and_human-robot_interaction</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">InTechOpen Publishers</style></publisher><isbn><style face="normal" font="default" size="100%">978-3-902613-42-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><section><style face="normal" font="default" size="100%">12</style></section></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Oros, Nicolas</style></author><author><style face="normal" font="default" size="100%">Volker Steuber</style></author><author><style face="normal" font="default" size="100%">Davey, Neil</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Roderick G Adams</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Evolution of Bilateral Symmetry in Agents Controlled by Spiking Neural Networks</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
2009 IEEE Symposium on Artificial Life (ALIFE 2009)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2009</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2009</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://ieeexplore.ieee.org/document/4937702/</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">IEEE Press</style></publisher><pub-location><style face="normal" font="default" size="100%">Nashville, TN</style></pub-location><pages><style face="normal" font="default" size="100%">116–123</style></pages><isbn><style face="normal" font="default" size="100%">978-1-4244-2763-5</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We present in this paper three novel developmental models allowing information to be encoded in space and time, using spiking neurons placed on a 2D substrate. In two of these models, we introduce neural development that can use bilateral symmetry. We show that these models can create neural controllers for agents evolved to perform chemotaxis. Neural bilateral symmetry can be evolved and be beneficial for an agent. 
This work is the first, as far as we know, to present developmental models where spiking neurons are generated in space and where bilateral symmetry can be evolved and proved to be beneficial in this context.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pichler, Peter-Paul</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Seth Bullock</style></author><author><style face="normal" font="default" size="100%">Jason Noble</style></author><author><style face="normal" font="default" size="100%">Richard A. Watson</style></author><author><style face="normal" font="default" size="100%">Mark A Bedau</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Evolving Morphological and Behavioral Diversity Without Predefined Behavior Primitives</style></title><secondary-title><style face="normal" font="default" size="100%">Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2008</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://mitpress-request.mit.edu/sites/default/files/titles/alife/0262287196chap62.pdf</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">MIT Press</style></publisher><pub-location><style face="normal" font="default" size="100%">Winchester, UK</style></pub-location><pages><style face="normal" font="default" size="100%">474–481</style></pages><isbn><style face="normal" 
font="default" size="100%">978-0-262-75017-2</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Virtual ecosystems, where natural selection is used to evolve complex agent behavior, are often preferred to traditional genetic algorithms because the absence of an explicitly defined fitness allows for a less constrained evolutionary process. However, these model ecosystems typically pre-specify a discrete set of possible action primitives the agents can perform. We think that this also constrains the evolutionary process with the modellers’ preconceptions of what possible solutions could be. Therefore, we propose an ecosystem model to evolve complete agents where all higher-level behavior results strictly from the interplay between extremely simple components and where no ‘behavior primitives’ are defined. On the basis of four distinct survival strategies we show that such primitives are not necessary to evolve behavioral diversity even in a simple and homogeneous environment.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Pichler, Peter-Paul</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">An Evolving Ecosystems Approach to Generating Complex Agent Behaviour</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
IEEE Symposium on Artificial Life 2007, ALIFE'07</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2007</style></year><pub-dates><date><style  face="normal" font="default" size="100%">04/2007</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://ieeexplore.ieee.org/document/4218900/</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Honolulu, HI</style></pub-location><pages><style face="normal" font="default" size="100%">303–310</style></pages><isbn><style face="normal" font="default" size="100%">1-4244-0701-X</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We propose an evolving ecosystem approach to evolving complex agent behaviour based on the principle of natural selection. The agents start with very limited functional design and morphology, and neural controllers are concurrently evolved as functional wholes. The agents are ‘grounded’ in an increasingly complex environment by a complex model metabolism and interaction dynamics. Furthermore, we introduce a novel criterion for evaluating differential reproductive success aimed at maximising evolutionary freedom. 
We also present first experimental results suggesting that this approach may be conducive to widening the scope of artificial evolution for the generation of agents exhibiting non-trivial behaviours in a complex ecosystem.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Cos-Aguilera, Ignasi</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Gillian M Hayes</style></author><author><style face="normal" font="default" size="100%">Gillies, Andrew</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Joanna J Bryson</style></author><author><style face="normal" font="default" size="100%">Tony J Prescott</style></author><author><style face="normal" font="default" size="100%">Anil K Seth</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Ecological Integration of Affordances and Drives for Behaviour Selection</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
IJCAI 2005 Workshop on Modeling Natural Action Selection</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2005</style></year></dates><pub-location><style face="normal" font="default" size="100%">Edinburgh, Scotland</style></pub-location><pages><style face="normal" font="default" size="100%">225–228</style></pages><isbn><style face="normal" font="default" size="100%">1-902956-40-9</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This paper presents a study of the integration of physiology and perception in a biologically inspired robotic architecture that learns behavioural patterns by interaction with the environment. The architecture implements a hierarchical view of learning and behaviour selection which bases adaptation on a relationship between reinforcement and the agent’s inner motivations. This view brings together the basic principles necessary to explain the underlying processes of learning behavioural patterns and the way these change via interaction with the environment. 
These principles have been experimentally tested and the results are presented and discussed throughout the paper.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotion Understanding from the Perspective of Autonomous Robots Research</style></title><secondary-title><style face="normal" font="default" size="100%">Neural Networks</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2005</style></year><pub-dates><date><style  face="normal" font="default" size="100%">05/2005</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.sciencedirect.com/science/article/pii/S0893608005000365</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">18</style></volume><pages><style face="normal" font="default" size="100%">445–455</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In this paper, I discuss some of the contributions that modeling emotions in autonomous robots can make towards understanding human emotions ('as sited in the brain' and as used in our interactions with the environment) and emotions in general. Such contributions are linked, on the one hand, to the potential use of such robotic models as tools and 'virtual laboratories' to systematically test and explore theories and models of human emotions, and on the other hand to a modeling approach that fosters conceptual clarification and operationalization of the relevant aspects of theoretical notions and models. As illustrated by an overview of recent advances in the field, this area is still in its infancy. 
However, the work carried out already shows that we share many conceptual problems and interests with other disciplines in the affective sciences and that sound progress necessitates multidisciplinary efforts.</style></abstract><issue><style face="normal" font="default" size="100%">4</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Philippe Gaussier</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Jacqueline Nadel</style></author><author><style face="normal" font="default" size="100%">Darwin Muir</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotion Understanding: Robots as Tools and Models</style></title><secondary-title><style face="normal" font="default" size="100%">Emotional Development: Recent Research Advances</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2005</style></year></dates><publisher><style face="normal" font="default" size="100%">Oxford University Press</style></publisher><pages><style face="normal" font="default" size="100%">235–258</style></pages><isbn><style face="normal" font="default" size="100%">0-19-852883-3 (Hbk) 0-19-852884-1 (Pbk)</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><section><style face="normal" font="default" size="100%">9</style></section></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Robert Lowe</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" 
size="100%">Nehaniv, Chrystopher L</style></author><author><style face="normal" font="default" size="100%">Daniel Polani</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Jordan Pollack</style></author><author><style face="normal" font="default" size="100%">Mark A Bedau</style></author><author><style face="normal" font="default" size="100%">Phil Husbands</style></author><author><style face="normal" font="default" size="100%">Takashi Ikegami</style></author><author><style face="normal" font="default" size="100%">Richard A. Watson</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">The Evolution of Affect-Related Displays, Recognition and Related Strategies</style></title><secondary-title><style face="normal" font="default" size="100%">ALIFE IX: Proceedings of the 9th International Conference on the Simulation and Synthesis of Living Systems</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2004</style></year></dates><publisher><style face="normal" font="default" size="100%">MIT Press</style></publisher><pages><style face="normal" font="default" size="100%">176–181</style></pages><isbn><style face="normal" font="default" size="100%">9780262661836</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This paper presents an ecologically motivated, bottom-up approach to investigating the evolution of expression, perception and related behaviour of affective internal states that complements game-theoretic studies of the evolutionary success of animal display. Our results show that the perception of displays related to affect greatly influences both the types of display produced and also the survival prospects of agents. 
Relative to agents that do not perceive rival agent internal state, affect perceivers prosper if the initial environment in which they reside provides numerous opportunities for interaction with other agents and resources. Conversely, where the initial environment with sparse resources does not allow for regular interaction, the ability to perceive affect is not as facilitatory to survival. Furthermore, the agents evolve particular display strategies distorting the expression of affect and greatly influencing the proportion of affect perceiving to nonaffect perceiving agents over evolutionary time.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Cortés, Ulises</style></author><author><style face="normal" font="default" size="100%">Annicchiarico, Roberta</style></author><author><style face="normal" font="default" size="100%">Vázquez-Salceda, Javier</style></author><author><style face="normal" font="default" size="100%">Urdiales, Cristina</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Maite López</style></author><author><style face="normal" font="default" size="100%">Miquel Sànchez-Marrè</style></author><author><style face="normal" font="default" size="100%">Carlo Caltagirone</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">I Rudomín</style></author><author><style face="normal" font="default" size="100%">J Vázquez-Salceda</style></author><author><style face="normal" font="default" size="100%">J L Díaz de León Santiago</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">e-Tools: The use of Assistive Technologies to enhance disabled and senior citizens’ autonomy</style></title><secondary-title><style 
face="normal" font="default" size="100%">e-Health: Application of Computing Science in Medicine and Health Care</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2003</style></year></dates><publisher><style face="normal" font="default" size="100%">Instituto Politécnico National Press</style></publisher><pages><style face="normal" font="default" size="100%">119–132</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In this paper we present our preliminary ideas about the integration of several technologies to build specific e-tools for the disabled and for the new generation of senior citizens. ‘e-Tools’ stands for Embedded Tools, as we aim to embed intelligent assistive devices in homes and other facilities, creating ambient intelligence environments to give support to patients and caregivers. In particular, we aim to explore the benefits of the concept of situated intelligence to build intelligent artefacts that will enhance the autonomy of the target group during their daily life. We present here a multi-level architecture and our preliminary research on navigation schemes for a robotic wheelchair.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>10</ref-type><contributors><secondary-authors><author><style face="normal" font="default" size="100%">Cañamero, Lola D</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotional and Intelligent II: The Tangled Knot of Social Cognition. 
Papers from the 2001 AAAI Fall Symposium</style></title></titles><dates><year><style  face="normal" font="default" size="100%">2001</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.aaai.org/Press/Reports/Symposia/Fall/fs-01-02.php</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">AAAI Press</style></publisher><pub-location><style face="normal" font="default" size="100%">North Falmouth, Massachusetts</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-57735-136-8</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Cañamero, Lola D</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Cañamero, Lola D</style></author><author><style face="normal" font="default" size="100%">Paolo Petta</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotions and Adaptation in Autonomous Agents: A Design Perspective</style></title><secondary-title><style face="normal" font="default" size="100%">Cybernetics and Systems: An International Journal</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2001</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.tandfonline.com/doi/abs/10.1080/01969720120250</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Taylor &amp; Francis</style></publisher><volume><style face="normal" font="default" size="100%">32</style></volume><pages><style face="normal" font="default" size="100%">507–529</style></pages><language><style face="normal" font="default" 
size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Why would we want to endow artificial autonomous agents with emotions? The main answer to this question seems to rely on what has been called the functional view of emotions, arising from (analytic) studies of natural systems. In this paper, I examine to what extent this hypothesis can be applied to the (synthetic) investigation of artificial emotions and what its implications are for the design of emotional agents, the main approaches that can be appropriately used to model emotions in autonomous agents, and why situated autonomous agents provide a good framework to study the relation between emotion and adaptation.</style></abstract><issue><style face="normal" font="default" size="100%">5</style></issue></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">D Cañamero</style></author><author><style face="normal" font="default" size="100%">Walter Van de Velde</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Kerstin Dautenhahn</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotionally Grounded Social Interaction</style></title><secondary-title><style face="normal" font="default" size="100%">Human Cognition and Social Agent Technology</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Advances in Consciousness Research</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2000</style></year></dates><number><style face="normal" font="default" size="100%">19</style></number><publisher><style face="normal" font="default" size="100%">John Benjamins Publishing Co.</style></publisher><pages><style face="normal" font="default" 
size="100%">137–162</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><section><style face="normal" font="default" size="100%">6</style></section></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">D Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Alexis Drogoul</style></author><author><style face="normal" font="default" size="100%">Jean-Arcady Meyer</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotions pour les agents situés</style></title><secondary-title><style face="normal" font="default" size="100%">Intelligence Artificielle Située</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">1999</style></year></dates><publisher><style face="normal" font="default" size="100%">Hermès science publications</style></publisher><pub-location><style face="normal" font="default" size="100%">Paris</style></pub-location><isbn><style face="normal" font="default" size="100%">978-274620076-0</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Contrairement à l'intelligence artificielle (IA) symbolique, l'IA située, qui adopte une vision plus large de l'intelligence &quot;complète&quot; qui ne la détache pas de sa réalisation corporelle et qui s'intéresse à son rôle adaptatif, ouvre naturellement la porte à l'étude des rôles des émotions d'un point de vue évolutif et à leur intégration dans les agents autonomes ou animats comme des mécanismes favorisant l'adaptation. 
Cet article examine les raisons pour lesquelles il semble intéressant de doter d'émotions les agents situés, en établissant un lien avec les émotions naturelles, ainsi que les différentes approches envisageables permettant de modéliser les émotions dans le cadre de l'IA située, et les différents problèmes qui en découlent. 

The notion of intelligence underlying symbolic Artificial Intelligence (AI) is tightly coupled to the idea of rationality. By contrast, situated AI, with a wider view of intelligence that focuses on its embodiment and its adaptive value, makes it possible to study emotional phenomena in animats from an evolutionary point of view, and to investigate their adaptive roles. This paper examines the main reasons why it seems interesting to endow animats with emotions, establishing a parallel with natural emotions. It also considers the main approaches that can be used to model emotions within situated AI, and the problems they pose. 
</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>10</ref-type><contributors><secondary-authors><author><style face="normal" font="default" size="100%">D Cañamero</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Emotional and Intelligent: The Tangled Knot of Cognition. Papers from the 1998 AAAI Fall Symposium</style></title></titles><dates><year><style  face="normal" font="default" size="100%">1998</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.aaai.org/Press/Reports/Symposia/Fall/fs-98-03.php</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">AAAI Press</style></publisher><pub-location><style face="normal" font="default" size="100%">Orlando, Florida</style></pub-location><isbn><style face="normal" font="default" size="100%">978-1-57735-077-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language></record></records></xml>