<?xml version="1.0" encoding="UTF-8"?><xml><records><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ana Tanevska</style></author><author><style face="normal" font="default" size="100%">Francesco Rea</style></author><author><style face="normal" font="default" size="100%">Giulio Sandini</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Alessandra Sciutti</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A Socially Adaptable Framework for Human-Robot Interaction</style></title><secondary-title><style face="normal" font="default" size="100%">Frontiers in Robotics and AI</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2020</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.frontiersin.org/article/10.3389/frobt.2020.00121</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">7</style></volume><pages><style face="normal" font="default" size="100%">121</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In our everyday lives we regularly engage in complex, personalized, and adaptive interactions with our peers. To recreate the same kind of rich, human-like interactions, a social robot should be aware of our needs and affective states and continuously adapt its behavior to them. Our proposed solution is to have the robot learn how to select the behaviors that would maximize the pleasantness of the interaction for its peers. To make the robot autonomous in its decision making, this process could be guided by an internal motivation system. 
We wish to investigate how an adaptive robotic framework of this kind would function and personalize to different users. We also wish to explore whether the adaptability and personalization would bring any additional richness to the human-robot interaction (HRI), or whether it would instead bring uncertainty and unpredictability that would not be accepted by the robot's human peers. To this end, we designed a socially adaptive framework for the humanoid robot iCub, in which the robot perceives and reuses the affective and interactive signals from the person as input for adaptation driven by an internal social motivation. We investigate the value of the adaptation generated by our framework in the context of HRI. In particular, we compare how users experience interaction with an adaptive versus a non-adaptive social robot. To address these questions, we propose a comparative interaction study with iCub whereby users act as the robot's caretaker, and iCub's social adaptation is guided by an internal comfort level that varies with the stimuli that iCub receives from its caretaker. We investigate and compare how iCub's internal dynamics are perceived by people, both in a condition in which iCub does not personalize its behavior to the person and in one in which it adapts. 
Finally, we establish the potential benefits that an adaptive framework could bring to the context of repeated interactions with a humanoid robot.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://www.frontiersin.org/article/10.3389/frobt.2020.00121&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ana Tanevska</style></author><author><style face="normal" font="default" size="100%">Francesco Rea</style></author><author><style face="normal" font="default" size="100%">Giulio Sandini</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Alessandra Sciutti</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">A Cognitive Architecture for Socially Adaptable Robots</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
2019 Joint IEEE 9th International Conference on Development and Learning and Epigenetic Robotics (ICDL-EpiRob)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2019</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2019</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://ieeexplore.ieee.org/document/8850688</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Oslo, Norway</style></pub-location><pages><style face="normal" font="default" size="100%">195–200</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://ieeexplore.ieee.org/document/8850688&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Ana Tanevska</style></author><author><style face="normal" font="default" size="100%">Francesco Rea</style></author><author><style face="normal" font="default" size="100%">Giulio Sandini</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Alessandra Sciutti</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Eager to Learn vs. Quick to Complain? How a socially adaptive robot architecture performs with different robot personalities</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
2019 IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC 2019)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2019</style></year><pub-dates><date><style  face="normal" font="default" size="100%">10/2019</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://ieeexplore.ieee.org/document/8913903</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">IEEE</style></publisher><pub-location><style face="normal" font="default" size="100%">Bari, Italy</style></pub-location><pages><style face="normal" font="default" size="100%">365–371</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">A social robot that is aware of our needs and continuously adapts its behaviour to them has the potential of creating a complex, personalized, human-like interaction of the kind we are used to having with our peers in our everyday lives. We are interested in exploring how an adaptive architecture would function and personalize to different users when given different initial values of its variables, i.e. when implementing the same adaptive framework with different robot personalities. Would an architecture that learns very quickly outperform a slower but steadier learning profile? 
To further explore this, we propose a cognitive architecture for the humanoid robot iCub supporting adaptability and we attempt to validate its functionality and test different robot profiles.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://ieeexplore.ieee.org/document/8913903&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Coninx, Alexandre</style></author><author><style face="normal" font="default" size="100%">Paul E. Baxter</style></author><author><style face="normal" font="default" size="100%">Oleari, Elettra</style></author><author><style face="normal" font="default" size="100%">Bellini, Sara</style></author><author><style face="normal" font="default" size="100%">Bierman, Bert</style></author><author><style face="normal" font="default" size="100%">Henkemans, Olivier Blanson</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Valentin Enescu</style></author><author><style face="normal" font="default" size="100%">Espinoza, Raquel Ros</style></author><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author><author><style face="normal" font="default" size="100%">Remi Humbert</style></author><author><style face="normal" font="default" size="100%">Kiefer, Bernd</style></author><author><style face="normal" font="default" size="100%">Kruijff-Korbayová, Ivana</style></author><author><style face="normal" font="default" size="100%">Looije, Rosmarijn</style></author><author><style face="normal" font="default" size="100%">Mosconi, Marco</style></author><author><style face="normal" font="default" size="100%">Mark A. 
Neerincx</style></author><author><style face="normal" font="default" size="100%">Giulio Paci</style></author><author><style face="normal" font="default" size="100%">Patsis, Georgios</style></author><author><style face="normal" font="default" size="100%">Pozzi, Clara</style></author><author><style face="normal" font="default" size="100%">Sacchitelli, Francesca</style></author><author><style face="normal" font="default" size="100%">Hichem Sahli</style></author><author><style face="normal" font="default" size="100%">Alberto Sanna</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Yiannis Demiris</style></author><author><style face="normal" font="default" size="100%">Tony Belpaeme</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Towards Long-Term Social Child-Robot Interaction: Using Multi-Activity Switching to Engage Young Users</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Human-Robot Interaction</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2016</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://dl.acm.org/doi/abs/10.5898/JHRI.5.1.Coninx</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><pages><style face="normal" font="default" size="100%">32–67</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Social robots have the potential to provide support in a number of practical domains, such as learning and behaviour change. 
This potential is particularly relevant for children, who have proven receptive to interactions with social robots. To reach learning and therapeutic goals, a number of issues need to be investigated, notably the design of an effective child-robot interaction (cHRI) to ensure the child remains engaged in the relationship and that educational goals are met. Typically, current cHRI research experiments focus on a single type of interaction activity (e.g. a game). However, these can suffer from a lack of adaptation to the child, or from an increasingly repetitive nature of the activity and interaction. In this paper, we motivate and propose a practicable solution to this issue: an adaptive robot able to switch between multiple activities within single interactions. We describe a system that embodies this idea, and present a case study in which diabetic children collaboratively learn with the robot about various aspects of managing their condition. We demonstrate the ability of our system to induce a varied interaction and show the potential of this approach both as an educational tool and as a research method for long-term cHRI.</style></abstract><issue><style face="normal" font="default" size="100%">1</style></issue><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://dl.acm.org/doi/abs/10.5898/JHRI.5.1.Coninx&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Oleari, Elettra</style></author><author><style face="normal" font="default" size="100%">Pozzi, Clara</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Tapus, 
Adriana</style></author><author><style face="normal" font="default" size="100%">André, Elisabeth</style></author><author><style face="normal" font="default" size="100%">Martin, Jean-Claude</style></author><author><style face="normal" font="default" size="100%">Ferland, François</style></author><author><style face="normal" font="default" size="100%">Ammi, Mehdi</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">An Embodied AI Approach to Individual Differences: Supporting Self-Efficacy in Diabetic Children with an Autonomous Robot</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 7th International Conference on Social Robotics (ICSR-2015)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2015</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007%2F978-3-319-25554-5_40</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer International Publishing</style></publisher><pub-location><style face="normal" font="default" size="100%">Paris</style></pub-location><pages><style face="normal" font="default" size="100%">401–410</style></pages><isbn><style face="normal" font="default" size="100%">978-3-319-25553-8</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In this paper we discuss how a motivationally autonomous robot, designed using the principles of embodied AI, provides a suitable approach to address individual differences of children interacting with a robot, without having to explicitly modify the system. 
We do this in the context of two pilot studies using Robin, a robot to support self-confidence in diabetic children.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://link.springer.com/chapter/10.1007%2F978-3-319-25554-5_40&quot;&gt;Download&lt;/a&gt; (or &lt;a href=&quot;http://www.emotion-modeling.info/sites/default/files/2015_Lewis_Canamero_ICSR.pdf&quot;&gt;Download authors' draft&lt;/a&gt;)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Kruijff-Korbayová, Ivana</style></author><author><style face="normal" font="default" size="100%">Oleari, Elettra</style></author><author><style face="normal" font="default" size="100%">Pozzi, Clara</style></author><author><style face="normal" font="default" size="100%">Sacchitelli, Francesca</style></author><author><style face="normal" font="default" size="100%">Bagherzadhalimi, Anahita</style></author><author><style face="normal" font="default" size="100%">Bellini, Sara</style></author><author><style face="normal" font="default" size="100%">Kiefer, Bernd</style></author><author><style face="normal" font="default" size="100%">Racioppa, Stefania</style></author><author><style face="normal" font="default" size="100%">Coninx, Alexandre</style></author><author><style face="normal" font="default" size="100%">Paul E. Baxter</style></author><author><style face="normal" font="default" size="100%">Bierman, Bert</style></author><author><style face="normal" font="default" size="100%">Henkemans, Olivier Blanson</style></author><author><style face="normal" font="default" size="100%">Mark A. 
Neerincx</style></author><author><style face="normal" font="default" size="100%">Rosemarijn Looije</style></author><author><style face="normal" font="default" size="100%">Yiannis Demiris</style></author><author><style face="normal" font="default" size="100%">Espinoza, Raquel Ros</style></author><author><style face="normal" font="default" size="100%">Mosconi, Marco</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Remi Humbert</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Hichem Sahli</style></author><author><style face="normal" font="default" size="100%">Joachim de Greeff</style></author><author><style face="normal" font="default" size="100%">James Kennedy</style></author><author><style face="normal" font="default" size="100%">Robin Read</style></author><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author><author><style face="normal" font="default" size="100%">Giulio Paci</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Athanasopoulos, Georgios</style></author><author><style face="normal" font="default" size="100%">Patsis, Georgios</style></author><author><style face="normal" font="default" size="100%">Verhelst, Werner</style></author><author><style face="normal" font="default" size="100%">Alberto Sanna</style></author><author><style face="normal" font="default" size="100%">Tony Belpaeme</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Let’s Be Friends: Perception of a Social 
Robotic Companion for children with T1DM</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. New Friends 2015</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2015</style></year><pub-dates><date><style  face="normal" font="default" size="100%">10/2015</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://mheerink.home.xs4all.nl/pdf/ProceedingsNF2015-3.pdf</style></url></web-urls></urls><pub-location><style face="normal" font="default" size="100%">Almere, The Netherlands</style></pub-location><pages><style face="normal" font="default" size="100%">32–33</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We describe the social characteristics of a robot developed to support children with Type 1 Diabetes Mellitus (T1DM) in the process of education and care. We evaluated the perception of the robot at a summer camp where diabetic children aged 10-14 experienced the robot in group interactions. Children in the intervention condition additionally interacted with it individually, in one-to-one sessions featuring several game-like activities. These children perceived the robot significantly more as a friend than those in the control group. They also readily engaged with it in dialogues about their habits related to a healthy lifestyle as well as personal experiences concerning diabetes. 
This indicates that the one-on-one interactions added a special quality to the relationship of the children with the robot.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://mheerink.home.xs4all.nl/pdf/ProceedingsNF2015-3.pdf&quot;&gt;Download full proceedings&lt;/a&gt; (PDF)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Angel Fernandez, Julian M.</style></author><author><style face="normal" font="default" size="100%">Bonarini, Andrea</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Tapus, Adriana</style></author><author><style face="normal" font="default" size="100%">André, Elisabeth</style></author><author><style face="normal" font="default" size="100%">Martin, Jean-Claude</style></author><author><style face="normal" font="default" size="100%">Ferland, François</style></author><author><style face="normal" font="default" size="100%">Ammi, Mehdi</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">A Reactive Competitive Emotion Selection System</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
7th International Conference on Social Robotics (ICSR-2015)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">Emotion production</style></keyword><keyword><style  face="normal" font="default" size="100%">Emotional models</style></keyword><keyword><style  face="normal" font="default" size="100%">Human Robot Interaction</style></keyword><keyword><style  face="normal" font="default" size="100%">Social robotics</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2015</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007%2F978-3-319-25554-5_4</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer International Publishing</style></publisher><pub-location><style face="normal" font="default" size="100%">Paris</style></pub-location><pages><style face="normal" font="default" size="100%">31–40</style></pages><isbn><style face="normal" font="default" size="100%">978-3-319-25553-8</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We present a reactive emotion selection system designed to be used in a robot that needs to respond autonomously to relevant events. A variety of emotion selection models based on &quot;cognitive appraisal&quot; theories exist, but the complexity of the concepts used by most of these models limits their use in robotics. Robots have physical constraints that condition their understanding of the world and limit their capacity to build the complex concepts needed for such models. 
The system presented in this paper was conceived to respond to &quot;disturbances&quot; detected in the environment through a stream of images, and use this low-level information to update emotion intensities. These intensities are increased when specific patterns, based on Tomkins’ affect theory, are detected, and reduced when they are not. This system could also be used as part of (or as a first step in the incremental design of) a more cognitively complex emotional system for autonomous robots.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://link.springer.com/chapter/10.1007%2F978-3-319-25554-5_4&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Aryel Beck</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author><author><style face="normal" font="default" size="100%">Luisa Damiano</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Interpretation of Emotional Body Language Displayed by a Humanoid Robot: A Case Study with Children</style></title><secondary-title><style face="normal" font="default" size="100%">International Journal of Social Robotics</style></secondary-title></titles><keywords><keyword><style  face="normal" font="default" size="100%">emotion</style></keyword><keyword><style  face="normal" font="default" size="100%">emotional body language</style></keyword><keyword><style  face="normal" font="default" 
size="100%">perception</style></keyword><keyword><style  face="normal" font="default" size="100%">Social robotics</style></keyword></keywords><dates><year><style  face="normal" font="default" size="100%">2013</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/article/10.1007/s12369-013-0193-z</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">5</style></volume><pages><style face="normal" font="default" size="100%">325–334</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The work reported in this paper focuses on giving humanoid robots the capacity to express emotions with their body. Previous results show that adults are able to interpret different key poses displayed by a humanoid robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy) and valence (positive or negative emotion) whereas moving the head up produces an increase along these dimensions. Hence, changing the head position during an interaction should send intuitive signals. The study reported in this paper tested children’s ability to recognize the emotional body language displayed by a humanoid robot. 
The results suggest that body postures and head position can be used to convey emotions during child-robot interaction.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://link.springer.com/article/10.1007/s12369-013-0193-z&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Sue Attwood</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">René te Boekhorst</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Pietro Liò</style></author><author><style face="normal" font="default" size="100%">Orazio Miglino</style></author><author><style face="normal" font="default" size="100%">Giuseppe Nicosia</style></author><author><style face="normal" font="default" size="100%">Stefano Nolfi</style></author><author><style face="normal" font="default" size="100%">Mario Pavone</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">SimianWorld – A Study of Social Organisation Using an Artificial Life Model</style></title><secondary-title><style face="normal" font="default" size="100%">Advances in Artificial Life, ECAL 2013</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2013</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.mitpressjournals.org/doi/abs/10.1162/978-0-262-31709-2-ch090</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">MIT Press</style></publisher><pub-location><style face="normal" font="default" size="100%">Taormina, Italy</style></pub-location><pages><style face="normal" font="default" 
size="100%">633–640</style></pages><isbn><style face="normal" font="default" size="100%">9780262317092</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In studies of social behaviour it is commonly assumed that individual complexity is the origin of intricate social interactions. In primates, for example, social complexity is attributed to their intelligence, and many argue that the cognitive capacities of primates are especially manifest in the way they regulate their social relationships. Whereas the complex societies of non-human primates are considered to be a direct result of their cognitive abilities, this assumption is not made about social insects. In the absence of certain cognitive abilities, their complex societies and structurally sophisticated nests are thought to arise from self-organisation. Since it is unlikely that cognitive capacities are all-or-nothing, usually integrating a range of mechanisms, it is possible that different species use similar cognitive mechanisms resulting in different behavioural outcomes.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://www.mitpressjournals.org/doi/abs/10.1162/978-0-262-31709-2-ch090&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>17</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Tony Belpaeme</style></author><author><style face="normal" font="default" size="100%">Paul E. 
Baxter</style></author><author><style face="normal" font="default" size="100%">Robin Read</style></author><author><style face="normal" font="default" size="100%">Rachel Wood</style></author><author><style face="normal" font="default" size="100%">Cuayáhuitl, Heriberto</style></author><author><style face="normal" font="default" size="100%">Kiefer, Bernd</style></author><author><style face="normal" font="default" size="100%">Racioppa, Stefania</style></author><author><style face="normal" font="default" size="100%">Kruijff-Korbayová, Ivana</style></author><author><style face="normal" font="default" size="100%">Athanasopoulos, Georgios</style></author><author><style face="normal" font="default" size="100%">Valentin Enescu</style></author><author><style face="normal" font="default" size="100%">Rosemarijn Looije</style></author><author><style face="normal" font="default" size="100%">Mark A. Neerincx</style></author><author><style face="normal" font="default" size="100%">Yiannis Demiris</style></author><author><style face="normal" font="default" size="100%">Raquel Ros-Espinoza</style></author><author><style face="normal" font="default" size="100%">Aryel Beck</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Lewis, Matthew</style></author><author><style face="normal" font="default" size="100%">Baroni, Ilaria</style></author><author><style face="normal" font="default" size="100%">Nalin, Marco</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Giulio Paci</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Remi 
Humbert</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Multimodal Child-Robot Interaction: Building Social Bonds</style></title><secondary-title><style face="normal" font="default" size="100%">Journal of Human-Robot Interaction</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2012</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://dl.acm.org/doi/10.5555/3109688.3109691</style></url></web-urls></urls><volume><style face="normal" font="default" size="100%">1</style></volume><pages><style face="normal" font="default" size="100%">33–53</style></pages><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation “in the wild.” The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. 
This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.</style></abstract><issue><style face="normal" font="default" size="100%">2</style></issue><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://dl.acm.org/doi/10.5555/3109688.3109691&quot;&gt;Download&lt;/a&gt; (Open Access)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Aryel Beck</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Luisa Damiano</style></author><author><style face="normal" font="default" size="100%">Sommavilla, Giacomo</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Children Interpretation of Emotional Body Language Displayed by a Robot</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
3rd International Conference on Social Robotics (ICSR 2011)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2011</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007%2F978-3-642-25504-5_7</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><pub-location><style face="normal" font="default" size="100%">Amsterdam, The Netherlands</style></pub-location><pages><style face="normal" font="default" size="100%">62–70</style></pages><isbn><style face="normal" font="default" size="100%">978-3-642-25504-5</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Previous results show that adults are able to interpret different key poses displayed by the robot and also that changing the head position affects the expressiveness of the key poses in a consistent way. Moving the head down leads to decreased arousal (the level of energy), valence (positive or negative) and stance (approaching or avoiding) whereas moving the head up produces an increase along these dimensions [1]. Hence, changing the head position should send intuitive signals which could be used during an interaction. The ALIZ-E target group are children between the ages of 8 and 11. Existing results suggest that they would be able to interpret human emotional body language [2, 3].

Based on these results, an experiment was conducted to test whether the results of [1] can be applied to children. If so, body postures and head position could be used to convey emotions during an interaction.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://link.springer.com/chapter/10.1007%2F978-3-642-25504-5_7&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Paul E. Baxter</style></author><author><style face="normal" font="default" size="100%">Tony Belpaeme</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Cosi, Piero</style></author><author><style face="normal" font="default" size="100%">Yiannis Demiris</style></author><author><style face="normal" font="default" size="100%">Valentin Enescu</style></author><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author><author><style face="normal" font="default" size="100%">Kruijff-Korbayová, Ivana</style></author><author><style face="normal" font="default" size="100%">Rosemarijn Looije</style></author><author><style face="normal" font="default" size="100%">Nalin, Marco</style></author><author><style face="normal" font="default" size="100%">Mark A. 
Neerincx</style></author><author><style face="normal" font="default" size="100%">Hichem Sahli</style></author><author><style face="normal" font="default" size="100%">Giacomo Sommavilla</style></author><author><style face="normal" font="default" size="100%">Tesser, Fabio</style></author><author><style face="normal" font="default" size="100%">Rachel Wood</style></author></authors></contributors><titles><title><style face="normal" font="default" size="100%">Long-Term Human-Robot Interaction with Young Users</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. ACM/IEEE Human-Robot Interaction conference (HRI-2011) (Robots with Children Workshop)</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2011</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://www.researchgate.net/publication/228470784_Long-term_human-robot_interaction_with_young_users</style></url></web-urls></urls><pub-location><style face="normal" font="default" size="100%">Lausanne, Switzerland</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Artificial companion agents have the potential to combine novel means for effective health communication with support and entertainment for young patients. However, the theory and practice of long-term child-robot interaction is currently an underdeveloped area of research. This paper introduces an approach that integrates multiple functional aspects necessary to implement temporally extended human-robot interaction in the setting of a paediatric ward. We present our methodology for the implementation of a companion robot which will be used to support young patients in hospital as they learn to manage a lifelong metabolic disorder (diabetes). The robot will interact with patients over an extended period of time. 
The necessary functional aspects are identified and introduced, and a review of the technical challenges involved is presented.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://www.researchgate.net/publication/228470784_Long-term_human-robot_interaction_with_young_users&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Luisa Damiano</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Jackie Chappell</style></author><author><style face="normal" font="default" size="100%">Susannah Thorpe</style></author><author><style face="normal" font="default" size="100%">Nick Hawes</style></author><author><style face="normal" font="default" size="100%">Aaron Sloman</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Constructing Emotions: Epistemological Groundings and Applications in Robotics for a Synthetic Approach to Emotions</style></title><secondary-title><style face="normal" font="default" size="100%">International Symposium on AI-Inspired Biology</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2010</style></year></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.cs.bham.ac.uk/research/projects/cogaff/aiib/Symposium_6/Papers/Damiano.pdf</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">The Society for the Study of Artificial Intelligence and the Simulation of Behaviour</style></publisher><pub-location><style face="normal" font="default" size="100%">De Montfort University, Leicester, UK</style></pub-location><pages><style face="normal" 
font="default" size="100%">20–28</style></pages><isbn><style face="normal" font="default" size="100%">1902956923</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Can the sciences of the artificial positively contribute to the scientific exploration of life and cognition? Can they actually improve the scientific knowledge of natural living and cognitive processes, from biological metabolism to reproduction, from conceptual mapping of the environment to logical reasoning, language, or even emotional expression? Our article aims to answer these kinds of questions in the affirmative. Its main object is the scientific emergent methodology often called the “synthetic approach”, which promotes the programmatic production of embodied and situated models of living and cognitive systems in order to explore aspects of life and cognition not accessible in natural systems and scenarios. The first part of this article presents and discusses the synthetic approach, and proposes an epistemological framework which promises to warrant genuine transmission of knowledge from the sciences of the artificial to the sciences of the natural. The second part of this article looks at the research applying the synthetic approach to the psychological study of emotional development. 
It shows how robotics, through the synthetic methodology, can develop a particular perspective on emotions, coherent with current psychological theories of emotional development and fitting well with the recent “cognitive extension” approach proposed by cognitive sciences and philosophy of mind.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://www.cs.bham.ac.uk/research/projects/cogaff/aiib/Symposium_6/Papers/Damiano.pdf&quot;&gt;Download&lt;/a&gt; (PDF)</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Antoine Hiolle</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Pierre Andry</style></author><author><style face="normal" font="default" size="100%">Arnaud J Blanchard</style></author><author><style face="normal" font="default" size="100%">Philippe Gaussier</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Shuzhi Sam Ge</style></author><author><style face="normal" font="default" size="100%">Haizhou Li</style></author><author><style face="normal" font="default" size="100%">John-John Cabibihan</style></author><author><style face="normal" font="default" size="100%">Yeow Kee Tan</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Using the Interaction Rhythm as a Natural Reinforcement Signal for Social Robots: A Matter of Belief</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 
International Conference on Social Robotics, ICSR 2010</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2010</style></year></dates><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><pub-location><style face="normal" font="default" size="100%">Singapore</style></pub-location><volume><style face="normal" font="default" size="100%">6414</style></volume><pages><style face="normal" font="default" size="100%">81–89</style></pages><isbn><style face="normal" font="default" size="100%">978-3-642-17247-2</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In this paper, we present the results of a pilot study of a human-robot interaction experiment where the rhythm of the interaction is used as a reinforcement signal to learn sensorimotor associations. The algorithm uses breaks and variations in the rhythm at which the human is producing actions. The concept is based on the hypothesis that a constant rhythm is an intrinsic property of a positive interaction whereas a break reflects a negative event. Subjects from various backgrounds interacted with a NAO robot where they had to teach the robot to mirror their actions by learning the correct sensorimotor associations. The results show that in order for the rhythm to be a useful reinforcement signal, the subjects have to be convinced that the robot is an agent with which they can act naturally, using their voice and facial expressions as cues to help it understand the correct behaviour to learn. When the subjects do behave naturally, the rhythm and its variations truly reflect how well the interaction is going and help the robot learn efficiently. 
These results mean that non-expert users can interact naturally and fruitfully with an autonomous robot if the interaction is believed to be natural, without any technical knowledge of the cognitive capacities of the robot.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">John C Murray</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Kim A. Bard</style></author><author><style face="normal" font="default" size="100%">Ross, Marina Davila</style></author><author><style face="normal" font="default" size="100%">Thorsteinsson, Kate</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Kim, Jong-Hwan</style></author><author><style face="normal" font="default" size="100%">Ge, Shuzhi Sam</style></author><author><style face="normal" font="default" size="100%">Vadakkepat, Prahlad</style></author><author><style face="normal" font="default" size="100%">Jesse, Norbert</style></author><author><style face="normal" font="default" size="100%">Al Manum, Abdullah</style></author><author><style face="normal" font="default" size="100%">Puthusserypady K, Sadasivan</style></author><author><style face="normal" font="default" size="100%">Rückert, Ulrich</style></author><author><style face="normal" font="default" size="100%">Sitte, Joaquin</style></author><author><style face="normal" font="default" size="100%">Witkowski, Ulf</style></author><author><style face="normal" font="default" size="100%">Nakatsu, Ryohei</style></author><author><style face="normal" font="default" size="100%">Braunl, Thomas</style></author><author><style face="normal" font="default" size="100%">Baltes, Jacky</style></author><author><style face="normal" font="default" size="100%">Anderson, 
John</style></author><author><style face="normal" font="default" size="100%">Wong, Ching-Chang</style></author><author><style face="normal" font="default" size="100%">Verner, Igor</style></author><author><style face="normal" font="default" size="100%">Ahlgren, David</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">The Influence of Social Interaction on the Perception of Emotional Expression: A Case Study with a Robot Head</style></title><secondary-title><style face="normal" font="default" size="100%">Advances in Robotics: Proc. FIRA RoboWorld Congress 2009</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2009</style></year><pub-dates><date><style  face="normal" font="default" size="100%">08/2009</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007%2F978-3-642-03983-6_10</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer Berlin Heidelberg</style></publisher><pub-location><style face="normal" font="default" size="100%">Incheon, Korea</style></pub-location><volume><style face="normal" font="default" size="100%">5744</style></volume><pages><style face="normal" font="default" size="100%">63–72</style></pages><isbn><style face="normal" font="default" size="100%">978-3-642-03983-6</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">In this paper we focus primarily on the influence that socio-emotional interaction has on the perception of emotional expression by a robot. 
We also investigate and discuss the importance of emotion expression in socially interactive situations involving human-robot interaction (HRI), and show the importance of utilising emotion expression when dealing with interactive robots that are to learn and develop in socially situated environments. We discuss early expressional development and the function of emotion in communication in humans and how this can improve HRI communications. Finally, we provide experimental results showing how emotion-rich interaction via emotion expression can affect the HRI process by providing additional information.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Oros, Nicolas</style></author><author><style face="normal" font="default" size="100%">Volker Steuber</style></author><author><style face="normal" font="default" size="100%">Davey, Neil</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Roderick G Adams</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Asada, Minoru</style></author><author><style face="normal" font="default" size="100%">Hallam, John C T</style></author><author><style face="normal" font="default" size="100%">Jean-Arcady Meyer</style></author><author><style face="normal" font="default" size="100%">Tani, Jun</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Adaptive Olfactory Encoding in Agents Controlled by Spiking Neural Networks</style></title><secondary-title><style face="normal" font="default" size="100%">From Animals to Animats 10: Proc. 
10th International Conference on Simulation of Adaptive Behavior (SAB 2008)</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science (LNCS)</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">07/2008</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://link.springer.com/chapter/10.1007/978-3-540-69134-1_15</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Springer, Berlin, Heidelberg</style></publisher><pub-location><style face="normal" font="default" size="100%">Osaka, Japan</style></pub-location><volume><style face="normal" font="default" size="100%"> 5040</style></volume><pages><style face="normal" font="default" size="100%">148–158</style></pages><isbn><style face="normal" font="default" size="100%">978-3-540-69134-1</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We created a neural architecture that can use two different types of information encoding strategies depending on the environment. The goal of this research was to create a simulated agent that could react to two different overlapping chemicals having varying concentrations. The neural network controls the agent by encoding its sensory information as temporal coincidences in a low concentration environment, and as firing rates at high concentration. 
With such an architecture, we could study synchronization of firing in a simple manner and see its effect on the agent’s behaviour.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Oros, Nicolas</style></author><author><style face="normal" font="default" size="100%">Volker Steuber</style></author><author><style face="normal" font="default" size="100%">Davey, Neil</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">Roderick G Adams</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Trappl, R</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Optimal Receptor Response Functions for the Detection of Pheromones by Agents Driven by Spiking Neural Networks</style></title><secondary-title><style face="normal" font="default" size="100%">Proc. 9th European Meeting on Cybernetics and Systems Research, Vol. 
II</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2008</style></year><pub-dates><date><style  face="normal" font="default" size="100%">03/2008</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">http://www.cogsci.uci.edu/~noros/mypapers/OROS_2008_EMCSR.pdf</style></url></web-urls></urls><publisher><style face="normal" font="default" size="100%">Austrian Society for Cybernetic Studies</style></publisher><pub-location><style face="normal" font="default" size="100%">Vienna, Austria</style></pub-location><pages><style face="normal" font="default" size="100%">427–432</style></pages><isbn><style face="normal" font="default" size="100%">978-3-85206-175-7</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The goal of the work presented here is to find a model of a spiking sensory neuron that could cope with small variations in the concentration of simulated chemicals and also the whole range of concentrations. By using a biologically plausible sigmoid function in our model to map chemical concentration to current, we could produce agents able to detect the whole range of concentration of chemicals (pheromones) present in the environment as well as small variations of them. 
The sensory neurons used in our model are able to encode the stimulus intensity into appropriate firing rates.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Avila-García, Orlando</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">René te Boekhorst</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Banzhaf, Wolfgang</style></author><author><style face="normal" font="default" size="100%">Christaller, Thomas</style></author><author><style face="normal" font="default" size="100%">Dittrich, Peter</style></author><author><style face="normal" font="default" size="100%">Kim, Jan T</style></author><author><style face="normal" font="default" size="100%">Ziegler, Jens</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Analyzing the Performance of &quot;Winner-Take-All&quot; and &quot;Voting-Based&quot; Action Selection Policies within the Two-Resource Problem</style></title><secondary-title><style face="normal" font="default" size="100%">Advances in Artificial Life: 7th European Conference, ECAL 2003</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Artificial Intelligence</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2003</style></year><pub-dates><date><style  face="normal" font="default" size="100%">09/2003</style></date></pub-dates></dates><urls><web-urls><url><style face="normal" font="default" size="100%">https://link.springer.com/chapter/10.1007%2F978-3-540-39432-7_79</style></url></web-urls></urls><publisher><style face="normal" font="default" 
size="100%">Springer</style></publisher><pub-location><style face="normal" font="default" size="100%">Dortmund, Germany</style></pub-location><volume><style face="normal" font="default" size="100%">2801</style></volume><pages><style face="normal" font="default" size="100%">733–742</style></pages><isbn><style face="normal" font="default" size="100%">978-3-540-20057-4</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">The problem of action selection for an autonomous creature implies resolving conflicts between competing behavioral alternatives. These conflicts can be resolved either via competition, following a “winner-take-all” approach, or via cooperation in a “voting-based” approach. In this paper we present two robotic architectures implementing these approaches, and report on experiments we have performed to compare their underlying optimization policies. We have framed this study within the context of the “two-resource problem,” as it provides a widely used standard that favors systematic experimentation, analysis, and comparison of results.</style></abstract><notes><style face="normal" font="default" size="100%">&lt;a href=&quot;https://link.springer.com/chapter/10.1007%2F978-3-540-39432-7_79&quot;&gt;Download&lt;/a&gt;</style></notes></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>5</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Cañamero, Lola D</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Robert Trappl</style></author><author><style face="normal" font="default" size="100%">Paolo Petta</style></author><author><style face="normal" font="default" size="100%">Sabine Payr</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Designing emotions for activity 
selection in autonomous agents</style></title><secondary-title><style face="normal" font="default" size="100%">Emotions in Humans and Artifacts</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2003</style></year></dates><publisher><style face="normal" font="default" size="100%">MIT Press</style></publisher><pages><style face="normal" font="default" size="100%">115–148</style></pages><isbn><style face="normal" font="default" size="100%">9780262201421</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">This chapter advocates a &quot;bottom-up&quot; philosophy for the design of emotional systems for autonomous agents that is guided by functional concerns and considers the particular case of designing emotions as mechanisms for action selection. The concrete realization of these ideas implies that the design process must start with an analysis of the requirements that the features of the environment, the characteristics of the action-selection task, and the agent architecture impose on the emotional system. This is particularly important if we see emotions as mechanisms that aim at modifying or maintaining the relation of the agent with its (external and internal) environment (rather than modifying the environment itself) in order to preserve the agent's goals. Emotions can then be selected and designed according to the roles they play with respect to this relation. 
</style></abstract><section><style face="normal" font="default" size="100%">4</style></section></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Avila-García, Orlando</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author><author><style face="normal" font="default" size="100%">René te Boekhorst</style></author><author><style face="normal" font="default" size="100%">Davey, Neil</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">U Nehmzow</style></author><author><style face="normal" font="default" size="100%">C Melhuish</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Optimization Criteria Underlying &quot;Winner-Take-All&quot; and &quot;Voting-Based&quot; Action Selection Policies</style></title><secondary-title><style face="normal" font="default" size="100%">Towards Intelligent Mobile Robots, TIMR'03: 4th British Conference on Mobile Robotics</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2003</style></year></dates><pub-location><style face="normal" font="default" size="100%">University of the West of England, Bristol</style></pub-location><language><style face="normal" font="default" size="100%">eng</style></language></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Avila-García, Orlando</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Garijo, Francisco J</style></author><author><style face="normal" font="default" size="100%">Riquelme, José 
C</style></author><author><style face="normal" font="default" size="100%">Toro, Miguel</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Comparing a Voting-Based Policy with Winner-Takes-All to Perform Action Selection in Motivational Agents</style></title><secondary-title><style face="normal" font="default" size="100%">Advances in Artificial Intelligence – IBERAMIA 2002; Proc. 8th Ibero-American Conference on AI</style></secondary-title><tertiary-title><style face="normal" font="default" size="100%">Lecture Notes in Computer Science</style></tertiary-title></titles><dates><year><style  face="normal" font="default" size="100%">2002</style></year></dates><publisher><style face="normal" font="default" size="100%">Springer</style></publisher><pub-location><style face="normal" font="default" size="100%">Seville, Spain</style></pub-location><volume><style face="normal" font="default" size="100%">2527</style></volume><pages><style face="normal" font="default" size="100%">855–864</style></pages><isbn><style face="normal" font="default" size="100%">978-3-540-00131-7</style></isbn><language><style face="normal" font="default" size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">Embodied autonomous agents are systems that inhabit dynamic, unpredictable environments in which they try to satisfy a set of time-dependent goals or motivations in order to survive. One of the problems that this implies is action selection, the task of resolving conflicts between competing behavioral alternatives. We present an experimental comparison of two action selection mechanisms (ASM), implementing &quot;winner-takes-all&quot; (WTA) and &quot;voting-based&quot; (VB) policies respectively, modeled using a motivational behavior-based approach. 
This research shows the adequacy of these two ASMs with respect to different sources of environmental complexity and the tendency of each of them to show different behavioral phenomena.</style></abstract></record><record><source-app name="Biblio" version="7.x">Drupal-Biblio</source-app><ref-type>47</ref-type><contributors><authors><author><style face="normal" font="default" size="100%">Nehaniv, Chrystopher L</style></author><author><style face="normal" font="default" size="100%">Daniel Polani</style></author><author><style face="normal" font="default" size="100%">Kerstin Dautenhahn</style></author><author><style face="normal" font="default" size="100%">René te Boekhorst</style></author><author><style face="normal" font="default" size="100%">Lola Cañamero</style></author></authors><secondary-authors><author><style face="normal" font="default" size="100%">Russell Standish</style></author><author><style face="normal" font="default" size="100%">Mark A Bedau</style></author><author><style face="normal" font="default" size="100%">Hussein A Abbass</style></author></secondary-authors></contributors><titles><title><style face="normal" font="default" size="100%">Meaningful Information, Sensor Evolution, and the Temporal Horizon of Embodied Organisms</style></title><secondary-title><style face="normal" font="default" size="100%">Artificial Life VIII: Proceedings of the Eighth International Conference on Artificial Life</style></secondary-title></titles><dates><year><style  face="normal" font="default" size="100%">2002</style></year></dates><publisher><style face="normal" font="default" size="100%">MIT Press</style></publisher><pub-location><style face="normal" font="default" size="100%">Sydney, Australia</style></pub-location><pages><style face="normal" font="default" size="100%">345–349</style></pages><isbn><style face="normal" font="default" size="100%">9780262692816</style></isbn><language><style face="normal" font="default"
size="100%">eng</style></language><abstract><style face="normal" font="default" size="100%">We survey and outline how an agent-centered, information-theoretic approach to meaningful information extending classical Shannon information theory by means of utility measures relevant for the goals of particular agents can be applied to sensor evolution for real and constructed organisms. Furthermore, we discuss the relationship of this approach to the programme of freeing artificial life and robotic systems from reactivity, by describing useful types of information with broader temporal horizon, for signaling, communication, affective grounding, two-process learning, individual learning, imitation and social learning, and episodic experiential information (memories, narrative, and culturally transmitted information).</style></abstract></record></records></xml>