Revista Politécnica

On-line version ISSN 2477-8990; print version ISSN 1390-0129

Rev Politéc. (Quito) vol.42 n.2 Quito Nov./Jan. 2019

 


Cognitive social zones for improving the pedestrian collision avoidance with mobile robots


Daniel Herrera¹*, Javier Gimenez¹, Matias Monllor¹, Flavio Roberti¹, Ricardo Carelli¹

¹ Instituto de Automática-CONICET, Universidad Nacional de San Juan, San Juan, Argentina


Abstract:

Social behaviors are crucial to improve the acceptance of a robot in human-shared environments. One of the most important social cues is undoubtedly the social space. This human mechanism acts as a repulsive field that guarantees comfortable interactions. Its modeling has been widely studied in social robotics, but its experimental inference has received little attention. Therefore, this paper proposes a novel algorithm to infer the dimensions of an elliptical social zone from a points-cloud around the robot. The approach consists of identifying how humans avoid a robot during navigation in shared scenarios, and later using this experience to represent human obstacles as elliptical potential fields with the previously identified dimensions. Thus, the algorithm starts with a first learning stage in which the robot navigates without avoiding humans, i.e., the humans are in charge of avoiding the robot while carrying out their tasks. During this period, the robot generates a points-cloud of 2D laser measurements in its own framework to define the human-presence zones around itself, prioritizing its closest surroundings. Later, the inferred social zone is incorporated into a null-space-based (NSB) control for a non-holonomic mobile robot, which comprises both trajectory tracking and pedestrian collision avoidance. Finally, the performance of the learning algorithm and the motion control is verified through experimentation.

Keywords: Human-robot interaction; control of mobile robots; robot cognition; social robots; social zones; proxemics.


1. INTRODUCTION

Cognitive robots are endowed with intelligent behavior that allows them to learn and reason about how to respond to complex goals in a real environment. In general, their application focuses on creating long-term human-robot interactions. Consequently, this field of study is centered on developing robots capable of perceiving the environment and other individuals, learning from experience, and adapting their behavior in an appropriate manner (Aly, Griffiths, & Stramandinoli, 2017). As with human cognition, robot cognition is not only useful for interaction; interaction is in fact fundamental to building a robotic cognition system.

Inspired by the proxemics studies developed by Hall (1963), some conventions have been established in robotics to avoid the intimate and social spaces of humans depending on the task, the situation, and even on specific human behavioral reactions. For example, Chi-Pang, Chen-Tun, Kuo-Hung, and Li-Chen (2011) discuss different types of personal space according to the situation, e.g., they assume an egg-shaped personal space for a moving human, in order to leave a long, clear space to walk (giving a sense of safety); the length of the semi-major axis of the potential field is taken proportional to the human velocity. Scandolo and Fraichard (2011) use the personal space in their social cost-map model for path planning. Guzzi, Giusti, Gambardella, Theraulaz, and Di Caro (2013) incorporate a potential field that dynamically modifies its dimensions according to the relative distance to the human, in order to avoid occlusion events or “deadlocks”. Ratsamee, Mae, Ohara, Kojima, and Arai (2013) propose a human-friendly navigation scheme in which the concept of personal space or “hidden space” is used to prevent uncomfortable feelings when humans avoid or interact with robots, based on the analysis of human motion and behavior (face orientation and overlapping of personal spaces).

According to proxemics research, the actual shape of the social zones is subject to tuning, and the preferred distances between humans and robots depend on many contextual factors; most models assign growing costs as the distance to some area decreases (Kruse, Pandey, Alami, & Kirsch, 2013). Given the diversity of models for the social zone, finding the optimal selection of distances and shapes is also a key challenge. For example, Pacchierotti, Christensen, and Jensfelt (2006) experimentally evaluate the social distance for passage in a corridor environment to qualitatively determine the optimal path during the evasion; the hypothesis that people prefer robots to stay out of their intimate space when passing each other in a corridor is qualitatively verified. On the other hand, Kuderer, Kretzschmar, and Burgard (2013) propose an approach that allows a mobile robot to learn how to navigate in the presence of humans while being tele-operated in its designated environment; it is based on feature-based maximum-entropy learning to derive a navigation policy from the interactions with humans. Other approaches consider that the comfort of the individual is guaranteed not only by avoiding these social zones but also by the dynamics during meddling events, i.e., producing natural, smooth, and damped motion during the interaction by treating these zones as flexible potential zones (Herrera, Roberti, Toibero, & Carelli, 2017).

In this paper, the “hidden dimension” or social space of humans is characterized by the robot from previous experience. The idea is to take measurements in front of the robot to determine the zones of lowest human presence around it. Under the hypothesis that this zone can be modeled as an elliptical potential field, the best elliptical zone that fits the zone of lowest human presence is defined as the cognitive social field. The use of this inferred ellipse rests on the hypothesis that humans avoid other individuals in the way they would like to be avoided (Fong, Nourbakhsh, & Dautenhahn, 2003). With the inferred social zone, a null-space-based (NSB) control design for trajectory tracking and pedestrian collision avoidance is proposed. The first-priority task consists of keeping out of the social zones of humans, which are defined as elliptical potential fields that move with non-holonomic motion and whose dimensions are defined as mentioned. The secondary task is to follow a predefined trajectory. The good performance of the designed control is verified experimentally.

Section 2 presents a novel algorithm to estimate the social zone of the robot based on laser measurements, together with acquisition, storage, and update policies applied during direct interaction with humans. In Section 3, the estimated dimensions of the social zone are used to define a social potential field for human obstacles. In Section 4, the avoidance of this field and the trajectory-tracking task are included as part of a null-space-based control design. In Section 5, the performance of the algorithm is tested through experimentation with a Pioneer 3AT mobile robot navigating in a structured human-shared scenario. Finally, some conclusions are presented in Section 6.

2. COGNITION SYSTEM

Learning from experience is doubtless a human quality that robots should exploit to understand and reproduce human behaviors, thereby enhancing human-robot interaction (Fong et al., 2003).

Humans unconsciously define a social zone for the robot while avoiding it, represented by the non-meddling zone, i.e., the region near the robot that has not been invaded by humans. This section presents a cognitive system in which the robot learns how to avoid humans by analyzing how humans avoid the social space of the robot (see Fig. 1). This hypothesis is used to infer a social zone for collision avoidance with humans. Elliptical social zones have frequently been chosen by researchers to represent human obstacles, and they have proved to improve the social acceptance of robots navigating among humans (Rios-Martinez, Spalanzani, & Laugier, 2015). Accordingly, consider the equation of the ellipse in the robot framework, expressed in polar coordinates as

cos²θ/a² + sin²θ/b² = 1/r²,

where r > 0 and θ ∈ [0, 2π) are the radial and angular coordinates of the ellipse points, and a, b > 0 are the semi-major and semi-minor axes of the ellipse, respectively.
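As a quick sanity check of this polar form, the following sketch (with illustrative semi-axis values, not taken from the paper) confirms that points generated from it lie on the corresponding Cartesian ellipse:

```python
import numpy as np

# Numerical check of the polar form of the ellipse: a point at angle theta
# with radius r satisfying cos^2(theta)/a^2 + sin^2(theta)/b^2 = 1/r^2
# also satisfies the Cartesian equation (x/a)^2 + (y/b)^2 = 1.
a, b = 1.2, 0.8  # illustrative semi-axes

theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
r = 1.0 / np.sqrt(np.cos(theta)**2 / a**2 + np.sin(theta)**2 / b**2)
x, y = r * np.cos(theta), r * np.sin(theta)

# Every boundary point satisfies the Cartesian ellipse equation.
assert np.allclose((x / a)**2 + (y / b)**2, 1.0)
```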

2.1 Data acquisition

For each detection event, new laser measurements (rᵢ, θᵢ), i = 1, …, nₜ, are captured from the robot, where rᵢ and θᵢ ∈ [−π/2, π/2] are, respectively, the distance and orientation relative to the robot at which a human is observed, and nₜ is the number of human measurements in the detection event (see Fig. 1).

These measurements are organized in a vector of nₜ distances d_new = [r₁, …, r_nₜ]ᵀ and the corresponding ray angles θ_new = [θ₁, …, θ_nₜ]ᵀ. Owing to computational costs, a mechanism for eliminating non-informative data must be incorporated.

Figure 1.  Robotic Cognition System. 

In this manner, each set of new observations of humans, compactly expressed as D_new = [d_new, θ_new], must either be incorporated or not into the points-cloud D = [d, θ] acquired so far. The procedure is based on storage and update policies, which are described in the next subsections (see flowchart in Fig. 2).

Figure 2.  Flowchart of the cognition system. 

2.2 Storage and update algorithms

Let N = size(D) be the size of the points-cloud, which is limited by the need to reduce computational costs, outlier observations, and redundant data. These objectives are reached by using a subsampling procedure, i.e., the random process of reducing the sample size from K to k < K (subsampling performed in one step), where each observation remains in the subsample with probability k/K. If a storage quota of C observations is also imposed, then a subsampling must be performed every time the number of stored observations N exceeds the quota C, in which each observation has probability C/N of remaining in the subsample (recursive subsampling).

This task is accomplished by Algorithm 1, in which new observations are incorporated into D so that all observations (regardless of when they were acquired) have the same probability p of belonging to the points-cloud.

The procedure consists of initializing an empty points-cloud (N = 0) and an acceptance probability of new observations p = 1. The algorithm incorporates all acquired observations into the points-cloud until N + nₜ > C. When this happens for the first time, C observations are subsampled and the remaining observations are deleted.

Then, the probability of belonging to the points-cloud is updated to p = C/N, and N is set to C. From this moment on, every new observation must undergo an acceptance process in order to match its probability with the probability p of the previously acquired observations (steps 6-10 of Algorithm 1). Later, if the quota is still exceeded, a new subsampling is performed and the acceptance probability is decreased (steps 11-13 of Algorithm 1). In this way, all observations (original and new) have the same probability p of belonging to the points-cloud.

Algorithm 1 (D, p) = subsampling(D, D_new, p)
1: N = size(D)
2: nt = size(D_new)
3: if N + nt ≤ C then  ▷ the storage quota is not exceeded
4:  D = [D; D_new]
5: else  ▷ the storage quota is exceeded
6:  for i = 1:nt do  ▷ for each new observation
7:   u = rand(0,1)  ▷ uniform random number
8:   if u > p then  ▷ acceptance-rejection procedure
9:    delete the i-th new observation
10:    nt = nt − 1
11:  if N + nt > C then  ▷ the quota is still exceeded
12:   subsample C obs. without replacement
13:   p = pC/N  ▷ update the acceptance probability
14:   N = C
15:  else
16:   D = [D; D_new]
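As one concrete reading of Algorithm 1, the storage and update policy can be sketched in Python (NumPy); the quota C = 1710 is taken from Table 1, the row layout [r, θ] is an assumption, and the exact bookkeeping of N after the quota is hit is interpreted here:

```python
import numpy as np

C = 1710  # storage quota (value taken from Table 1)

def subsampling(D, D_new, p, rng=np.random.default_rng()):
    """Sketch of Algorithm 1: maintain a points-cloud in which every
    observation, old or new, belongs with the same probability p."""
    N, nt = len(D), len(D_new)
    if N + nt <= C:                       # quota not exceeded: store all
        return np.vstack([D, D_new]), p
    # acceptance-rejection: new points enter only with probability p
    D_new = D_new[rng.random(nt) <= p]
    D = np.vstack([D, D_new])
    if len(D) > C:                        # quota still exceeded: subsample
        p = p * C / len(D)                # decrease acceptance probability
        D = D[rng.choice(len(D), size=C, replace=False)]
    return D, p
```

Starting from an empty points-cloud with p = 1, repeated calls keep the stored sample at or below C while the acceptance probability shrinks accordingly.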

Subsequently, a process is incorporated to avoid the agglomeration of observations in specific angular directions of the robot framework, in order to unify the angular distribution of the points-cloud. To this end, the number of measurements per ray is limited by keeping only the C̃ measurements nearest to the robot; measurements in excess are discarded (see Algorithm 2).

Algorithm 2 D = RayMeasures(D)
1: for i = 1:180 do  ▷ for each angle
2:  #θᵢ = size(θ = θᵢ)  ▷ number of obs. in direction θᵢ
3:  if #θᵢ > C̃ then  ▷ agglomeration test
4:   keep only the C̃ nearest obs. and delete the others
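Algorithm 2 admits a short Python sketch; the per-ray cap C̃ = 10 is taken from Table 1, and the row layout [r, θ] is an assumption:

```python
import numpy as np

C_tilde = 10  # max measurements per ray (value taken from Table 1)

def ray_measures(D):
    """Sketch of Algorithm 2: for each laser angle keep only the C_tilde
    nearest observations, unifying the angular distribution of the cloud."""
    kept = []
    for theta_i in np.unique(D[:, 1]):
        rows = D[D[:, 1] == theta_i]
        if len(rows) > C_tilde:
            rows = rows[np.argsort(rows[:, 0])[:C_tilde]]  # nearest first
        kept.append(rows)
    return np.vstack(kept)
```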

2.3 Identification of the elliptical social zone

The identification procedure consists of estimating the parameters a and b that minimize the following functional

J(a, b | D) = Σᵢ₌₁ᴺ wᵢ (cos²θᵢ/a² + sin²θᵢ/b² − 1/rᵢ²)²,

where (rᵢ, θᵢ) is the i-th observation and the wᵢ are weight factors that set the influence of each observation on the estimate. This optimization is equivalent to finding the weighted least-squares solution of the linear system Y = XΘ, where

Y = [r₁⁻², r₂⁻², …, r_N⁻²]ᵀ,  X = [cos²θ₁ sin²θ₁; cos²θ₂ sin²θ₂; …; cos²θ_N sin²θ_N],  Θ = [a⁻², b⁻²]ᵀ,

and the weight matrix is the N×N diagonal matrix

W=diag({wi}).

In this case, the weight factors are defined as

wᵢ = ((δ + sin²θᵢ)/(δ + 1)) exp(−((rᵢ − r_min,i)/μ)²),

where δ, μ > 0 are design constants and r_min,i is the minimum distance observed in the direction θᵢ. This definition gives higher priority to points near the robot (to obtain an ellipse internal to the points-cloud), as well as to points on the front side (to compensate for the fact that these observations are less frequent and more distant from each other).

Then, the solution of the optimization problem is given by

Θ = (XᵀWX)⁻¹XᵀWY. (1)

Algorithm 3 performs this estimation.

Algorithm 3 [a; b] = EllipseWLS(D)
1: r = D(:,1), θ = D(:,2)
2: for i = 1:N do  ▷ for each observation
3:  r_min,i = min(r(θ = θᵢ))
4:  X(i,:) = [cos²(θᵢ), sin²(θᵢ)]
5:  Y(i) = 1/rᵢ²
6:  W(i,i) = wᵢ
7: Θ = (XᵀWX)⁻¹XᵀWY
8: a = Θ(1)^(−1/2), b = Θ(2)^(−1/2)
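Algorithm 3 can be sketched directly in Python; the design constants δ = μ = 0.01 are assumed from Table 1, and the row layout [r, θ] of the points-cloud is illustrative:

```python
import numpy as np

delta, mu = 0.01, 0.01  # design constants (values assumed from Table 1)

def ellipse_wls(D):
    """Sketch of Algorithm 3: weighted least-squares fit of the semi-axes
    [a, b] from a points-cloud D with rows [r_i, theta_i]."""
    r, theta = D[:, 0], D[:, 1]
    # minimum observed distance in each direction theta_i
    r_min = np.array([r[theta == t].min() for t in theta])
    w = (delta + np.sin(theta)**2) / (delta + 1.0) \
        * np.exp(-((r - r_min) / mu)**2)
    X = np.column_stack([np.cos(theta)**2, np.sin(theta)**2])
    Y = 1.0 / r**2
    W = np.diag(w)
    Theta = np.linalg.solve(X.T @ W @ X, X.T @ W @ Y)  # Eq. (1)
    return Theta**-0.5  # [a, b]
```

On synthetic points lying exactly on an ellipse, the fit recovers the semi-axes.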

2.4 Learning indicator

Inferring a social space requires defining a learning indicator for the cognition stage, which determines when the ellipse dimensions are well defined and, consequently, when a social zone for human collision-avoidance algorithms can be estimated.

Let γᵢ be a dispersion factor for the laser angle θᵢ, with i = 1, …, 180, defined as

γᵢ = (r̄ᵢ − r_min,i)/(r̄ᵢ − R) if r_min,i ≤ R,  and  γᵢ = r_min,i/R if r_min,i > R, (2)

where r̄ᵢ is the mean of the distances in the direction θᵢ, R is a radius range used as a design parameter within which the ellipse is expected to fit (it represents a radius of social interaction), and r_min,i is the minimum distance observed in the direction θᵢ.

In this manner, let γ be the learning indicator defined by,

γ = (1/180) Σᵢ₌₁¹⁸⁰ γᵢ. (3)

The points-cloud is used for estimating a social zone only when γ falls below a threshold.

3. SOCIAL POTENTIAL FIELD

As mentioned, social zones are interpreted as repulsive potential fields that guarantee human comfort during interactions. Thus, once the minor- and major-axis dimensions of the elliptical social zone have been inferred by the cognition system, an elliptical potential field is defined as follows.

Let x_h = [x_h, y_h]ᵀ be the Cartesian position of a human obstacle, φ_h its orientation, and x_r = [x_r, y_r]ᵀ the Cartesian position of the robot in the global framework x-y. Additionally, consider an alternative framework x*-y*, which corresponds to x-y rotated by φ_h (see Fig. 3). In this rotated framework, if the human position is x_h* = [x_h*, y_h*]ᵀ and the robot position is x_r* = [x_r*, y_r*]ᵀ, then the potential field V_h is expressed as

V_h = exp(−((x_r* − x_h*)/a)² − ((y_r* − y_h*)/b)²),

where a is the semi-major-axis length of the elliptical Gaussian form and b is the semi-minor-axis length (see Fig. 3). The time dependency of these variables is omitted to simplify the notation.
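The field can be evaluated directly from this definition; in the sketch below the function name and argument layout are illustrative, and the rotated-frame difference is computed as x_r* − x_h* = R(x_r − x_h):

```python
import numpy as np

def social_potential(x_r, x_h, phi_h, a, b):
    """Elliptical Gaussian social field of a human at x_h with heading
    phi_h, evaluated at the robot position x_r (sketch of Section 3)."""
    # rotate world coordinates into the human-aligned frame x*-y*
    R = np.array([[np.cos(phi_h), np.sin(phi_h)],
                  [-np.sin(phi_h), np.cos(phi_h)]])
    dx, dy = R @ (np.asarray(x_r) - np.asarray(x_h))
    return np.exp(-(dx / a)**2 - (dy / b)**2)
```

The field equals 1 at the human position and decays to exp(−1) one semi-axis away along the heading direction.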

Figure 3.  Schematic description of the social zone. 

In order to define the Jacobian that relates the potential variation to the motion of the robot, consider the time derivative of the potential:

V̇_h = [∂V_h/∂x_r*, ∂V_h/∂y_r*, ∂V_h/∂x_h*, ∂V_h/∂y_h*, ∂V_h/∂a, ∂V_h/∂b] [ẋ_r*, ẏ_r*, ẋ_h*, ẏ_h*, ȧ, ḃ]ᵀ.

Working through the algebra, this results in

V̇_h = J_s ẋ_r* − J_s ẋ_h* + J_a ȧ + J_b ḃ, (4)

where,

J_s = −2V_h [(x_r* − x_h*)/a², (y_r* − y_h*)/b²],

J_a = 2V_h (x_r* − x_h*)²/a³,  and  J_b = 2V_h (y_r* − y_h*)²/b³.
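These partial derivatives can be checked numerically against finite differences of V_h; the offsets dx = x_r* − x_h*, dy = y_r* − y_h* are treated as free inputs, and the numeric values are illustrative:

```python
import numpy as np

def V(dx, dy, a, b):
    # elliptical Gaussian social potential in the human-aligned frame
    return np.exp(-(dx / a)**2 - (dy / b)**2)

dx, dy, a, b = 0.7, -0.4, 1.2, 0.8  # illustrative values
eps = 1e-6
Vh = V(dx, dy, a, b)

Js = -2.0 * Vh * np.array([dx / a**2, dy / b**2])  # d V_h / d(x*_r, y*_r)
Ja = 2.0 * Vh * dx**2 / a**3                       # d V_h / d a
Jb = 2.0 * Vh * dy**2 / b**3                       # d V_h / d b

# forward finite differences agree with the analytic Jacobians
assert np.isclose(Js[0], (V(dx + eps, dy, a, b) - Vh) / eps, atol=1e-5)
assert np.isclose(Js[1], (V(dx, dy + eps, a, b) - Vh) / eps, atol=1e-5)
assert np.isclose(Ja, (V(dx, dy, a + eps, b) - Vh) / eps, atol=1e-5)
assert np.isclose(Jb, (V(dx, dy, a, b + eps) - Vh) / eps, atol=1e-5)
```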

If the rotation matrix is defined as,

R = [cos φ_h, sin φ_h; −sin φ_h, cos φ_h],

then (4) can be expressed in the global framework x-y through x_r* = R x_r and x_h* = R x_h. The time derivatives of these expressions result in

x˙r*=Rx˙r+pφ˙h, (5)

where,

p = [−sin φ_h x_r + cos φ_h y_r, −cos φ_h x_r − sin φ_h y_r]ᵀ.

Similarly for the human position,

ẋ_h* = J_h* [ẋ_h, ẏ_h, φ̇_h]ᵀ, (6)

where,

J_h* = [cos φ_h, sin φ_h, −sin φ_h x_h + cos φ_h y_h; −sin φ_h, cos φ_h, −cos φ_h x_h − sin φ_h y_h].

If, additionally, a non-holonomic motion is considered for the human gait (Arechavaleta, Laumond, Hicheur, & Berthoz, 2006; Leica, Toibero, Roberti, & Carelli, 2014),

[ẋ_h, ẏ_h, φ̇_h]ᵀ = [cos φ_h, 0; sin φ_h, 0; 0, 1] [V_h, w_h]ᵀ = J_h [V_h, w_h]ᵀ,

then, operating with (6), we obtain

ẋ_h* = J_h* J_h [V_h, w_h]ᵀ, (7)

where V_h and w_h are the linear and angular velocities of the human obstacle, respectively.

In this way, substituting (5) and (7) into (4) results in

V̇_h = J_s R ẋ_r + J_s p w_h − J_s J_h* J_h [V_h, w_h]ᵀ + J_a ȧ + J_b ḃ ≜ J_o ẋ_r + g, with J_o = J_s R.

Therefore, the total repulsive effect on the robot at position x_r = [x_r, y_r]ᵀ is calculated as the sum of all the repulsive effects V_hj generated by the n human obstacles in the shared scenario, i.e., V = Σⱼ₌₁ⁿ V_hj, and in consequence

V̇ = J₁ ẋ_r + g, (8)

where J₁ ≜ Σⱼ₌₁ⁿ J_oj and g ≜ Σⱼ₌₁ⁿ gⱼ are, respectively, the Jacobian and the motion-compensation factor under the presence of n human obstacles.

4. NULL-SPACE BASED CONTROL

Consider a robot that must develop a task in a human-shared environment fulfilling two objectives: trajectory tracking and collision avoidance with humans.

4.1 Principal task: Collision avoidance with humans

Avoiding meddling of the robot into human social zones is defined as the principal task. Thus, if V is taken as the task variable, then from (8) the minimal-norm solution for this task is expressed through the control action ẋ_r⁽¹⁾ as

ẋ_r⁽¹⁾ = J₁⁺ (V̇_d + K₁Ṽ − g), (9)

where the desired potential is V_d = 0, Ṽ ≜ V_d − V, and K₁ = diag(k₁, k₁), with k₁ > 0 a design parameter. The incorporation of g improves the performance of the human evasion compared with approaches for ordinary dynamic obstacles: this factor compensates for the linear motion of the obstacle and also accounts for the angular motion of its elliptical shape.

4.2 Secondary task: Trajectory tracking control

For the secondary task, consider a desired trajectory x_t = [x_t, y_t]ᵀ, which must be tracked by the robot. The control objective is thus to control the robot position x_r = [x_r, y_r]ᵀ. For this, consider an error-proportional solution for the secondary task, expressed through the Cartesian robot velocities as

ẋ_r⁽²⁾ = ẋ_t + K₂ x̃, (10)

where x̃ = [x_t − x_r, y_t − y_r]ᵀ and K₂ = diag(k₂, k₂), with k₂ > 0 a design parameter for the secondary task.

4.3 Solution for a differential drive mobile robot

Consider the non-holonomic kinematic model of the robot expressed as,

[ẋ_r, ẏ_r]ᵀ = [cos φ_r, −r sin φ_r; sin φ_r, r cos φ_r] [V_r, w_r]ᵀ ≜ J_r u_r, (11)

where x_r = [x_r, y_r]ᵀ is the position of the controlled point, located on the symmetry axis between the wheels at a distance r from the wheel axis, φ_r is the robot orientation, J_r is the Jacobian, and u_r = [V_r, w_r]ᵀ is the control input vector containing the linear and angular velocities, respectively.

Thereby, consider an inverse kinematic solution given by

u_r = J_r⁻¹ ẋ_rc, (12)

where ẋ_rc = [ẋ_rc, ẏ_rc]ᵀ is the control action in Cartesian coordinates, which must be defined to develop both tasks respecting the priority order. With this purpose, a null-space-based control law is defined by (Antonelli, Arrichiello, & Chiaverini, 2009; Chiaverini, 1997):

ẋ_rc = ẋ_r⁽¹⁾ + (I − J₁⁺J₁) ẋ_r⁽²⁾, (13)

where (I − J₁⁺J₁) is the projector onto the null space of J₁.
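Combining (9), (10), (12), and (13), one control step can be sketched as follows; the function names and the offset value r are illustrative, and V_d = V̇_d = 0 as in the paper:

```python
import numpy as np

def nsb_control(J1, V_tilde, g, xdot_t, x_tilde, k1=0.001, k2=0.1):
    """Sketch of Eqs. (9), (10), (13): collision avoidance as the primary
    task, trajectory tracking projected onto the null space of J1."""
    J1 = np.atleast_2d(np.asarray(J1, dtype=float))  # 1x2 potential Jacobian
    J1_pinv = np.linalg.pinv(J1)                     # 2x1 pseudoinverse
    xdot_1 = (J1_pinv * (k1 * V_tilde - g)).ravel()  # Eq. (9), primary task
    xdot_2 = np.asarray(xdot_t) + k2 * np.asarray(x_tilde)  # Eq. (10)
    N1 = np.eye(2) - J1_pinv @ J1                    # null-space projector
    return xdot_1 + N1 @ xdot_2                      # Eq. (13)

def to_wheel_commands(xdot_rc, phi_r, r=0.15):
    """Eq. (12): invert the differential-drive Jacobian (11) to obtain
    u_r = [Vr, wr]; the offset r = 0.15 is an assumed illustrative value."""
    Jr = np.array([[np.cos(phi_r), -r * np.sin(phi_r)],
                   [np.sin(phi_r),  r * np.cos(phi_r)]])
    return np.linalg.solve(Jr, xdot_rc)
```

When no human is nearby (J₁ = 0), the pseudoinverse vanishes, the projector is the identity, and the controller reduces to pure trajectory tracking, as the priority scheme intends.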

5. EXPERIMENTATION

5.1 Experimental setup

The experimentation is developed in a structured scenario scanned by a ceiling camera. The algorithm captures a color image at each sample time, and the posture of each experimental individual (the robot and the human obstacle) is characterized through the centroid positions of two easily distinguishable color markers (these colors appear neither elsewhere in the scenario nor on the experimental individuals).

Additionally, a Pioneer 3AT robot is equipped with a 180-degree LIDAR sensor (one-degree resolution), and the measurements and commands are exchanged with the robot through a client/server WiFi connection. A sample time of ts = 0.1 s is used to control the robot during the experiment. The parameters selected for the experiments are shown in Table 1.

Table 1.  Parameters used during the experimentation. 

System | Parameters
General | ts = 0.1 s; tmax = 560 s
Cognitive system | C = 1710, C̃ = 10, μ = 0.01, δ = 0.01, R = 2 m; if γ < 1 then avoid
Motion control | k₁ = 0.001, k₂ = 0.1; x_t = 2cos(2πt/tmax) − 0.5, y_t = sin(2πt/tmax)

5.2 Cognition system results

Fig. 4 presents the trajectory followed by the robot while it captures the information. Note that, at first, the robot takes measurements that are not representative for inferring a social zone, but these are discarded iteratively during the experiment, and only the points nearest to the robot are stored and updated according to the previously defined policies.

Figure 4.  Photo sequence during the experiment. 

At the beginning of the experiment, the robot tracks the trajectory without avoiding the human obstacles, i.e., while γ > 1 the humans are in charge of avoiding the robot, because the robot is not yet able to infer a social zone (see video clip at https://youtu.be/acpYu0I3XDI).

Over time, the inferred major- and minor-axis lengths reach a bounded and practically constant condition, even while the learning factor continues to decrease (see results before the dashed red line in Fig. 5).

Once the spread of the points-cloud around the social zone of the robot is well defined, i.e., when γ < 1, the robotic system starts using this learning to avoid the human obstacles while tracking the trajectory as intended. This instant depends on each experiment and on the degree of interaction with humans; in this case, it occurs at t = 133.3 s. Note that the cognitive system continues acquiring new data and improving its learning factor, but the variation of the inferred major- and minor-axis lengths remains practically bounded (see results after the dashed red line in Fig. 5).

Figure 5. Learning indicator and inferred ellipse dimensions over time. The dashed red lines mark the instant when γ = 1.

5.3 Motion control results

The control errors and the trajectories generated during a time lapse of the experiment are shown in Fig. 6. Some collision-avoidance events are marked with green boxes; the controller is able to guarantee the convergence of the trajectories after each collision is avoided. Note that the control objectives are fulfilled in priority order, as expected (see video clip at https://youtu.be/acpYu0I3XDI).

Figure 6.  Control errors of the robot. 

6. CONCLUSION

This paper has presented a novel cognition system to infer a social field from laser measurements. Acquisition, storage, and update policies have been proposed to build the social zone of the robot based on direct human-robot interactions. This learning then allows representing human obstacles as elliptical potential fields with non-holonomic motion, which, according to the literature, improves the social acceptance of mobile robots during human-robot interactions. In addition, a null-space-based (NSB) motion control has been proposed for a differential-drive mobile robot to track a trajectory while avoiding pedestrians. Finally, the algorithm has been tested through experimentation, and the good performance of the cognition system and the motion-control strategy has been verified. Based on the hypothesis that humans avoid other individuals in the way they would like to be avoided, the authors believe that the proposed algorithm improves the social acceptance of mobile robots during human-robot interactions, because it is capable of learning this behavior and reproducing it during pedestrian collision avoidance.

REFERENCES

Aly, A., Griffiths, S., & Stramandinoli, F. (2017). Metrics and benchmarks in human-robot interaction: Recent advances in cognitive robotics. Cognitive Systems Research, 43, 313-323.

Antonelli, G., Arrichiello, F., & Chiaverini, S. (2009). Experiments of formation control with multirobot systems using the null-space-based behavioral control. IEEE Transactions on Control Systems Technology, 17(5), 1173-1182.

Arechavaleta, G., Laumond, J. P., Hicheur, H., & Berthoz, A. (2006). The nonholonomic nature of human locomotion: a modeling study. In IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics (pp. 158-163).

Chiaverini, S. (1997). Singularity-robust task-priority redundancy resolution for real-time kinematic control of robot manipulators. IEEE Transactions on Robotics and Automation, 13(3), 398-410.

Chi-Pang, L., Chen-Tun, C., Kuo-Hung, C., & Li-Chen, F. (2011). Human-centered robot navigation towards a harmoniously human-robot coexisting environment. IEEE Transactions on Robotics, 27(1), 99-112.

Fong, T., Nourbakhsh, I., & Dautenhahn, K. (2003). A survey of socially interactive robots. Robotics and Autonomous Systems, 42(3-4), 143-166.

Guzzi, J., Giusti, A., Gambardella, L. M., Theraulaz, G., & Di Caro, G. A. (2013). Human-friendly robot navigation in dynamic environments. In IEEE International Conference on Robotics and Automation (ICRA) (pp. 423-430).

Hall, E. T. (1963). A system for the notation of proxemic behavior. American Anthropologist, 65(5), 1003-1026.

Herrera, D., Roberti, F., Toibero, M., & Carelli, R. (2017). Human interaction dynamics for its use in mobile robotics: Impedance control for leader-follower formation. IEEE/CAA Journal of Automatica Sinica, 4(4), 696-703.

Kruse, T., Pandey, A. K., Alami, R., & Kirsch, A. (2013). Human-aware robot navigation: A survey. Robotics and Autonomous Systems, 61(12), 1726-1743.

Kuderer, M., Kretzschmar, H., & Burgard, W. (2013). Teaching mobile robots to cooperatively navigate in populated environments. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 3138-3143).

Leica, P., Toibero, J. M., Roberti, F., & Carelli, R. (2014). Switched control to robot-human bilateral interaction for guiding people. Journal of Intelligent and Robotic Systems, 77(1), 73-93.

Pacchierotti, E., Christensen, H. I., & Jensfelt, P. (2006). Evaluation of passing distance for social robots. In 15th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) (pp. 315-320).

Ratsamee, P., Mae, Y., Ohara, K., Kojima, M., & Arai, T. (2013). Social navigation model based on human intention analysis using face orientation. In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 1682-1687).

Rios-Martinez, J., Spalanzani, A., & Laugier, C. (2015). From proxemics theory to socially-aware navigation: A survey. International Journal of Social Robotics, 7(2), 137-153.

Scandolo, L., & Fraichard, T. (2011). An anthropomorphic navigation scheme for dynamic scenarios. In IEEE International Conference on Robotics and Automation.

Published: 31/01/2019

Received: October 30, 2018; Accepted: October 31, 2018

* dherrera@inaut.unsj.edu.ar

Daniel Herrera was born in Riobamba, Ecuador, in 1989. He received his degree in Electronic and Control Engineering from the National Polytechnic School of Ecuador in 2012, where he was also Laboratory Assistant at the Department of Automation and Industrial Control. In March 2017, he obtained a Ph.D. degree in Control Systems Engineering at the Institute of Automatics (INAUT) of the National University of San Juan, Argentina. Currently, he is Postdoctoral Researcher at INAUT. His research interests are human-robot interactions, artificial intelligence and robot modeling and identification.

Javier Gimenez received the B. Sc. degree in mathematics from the National University of San Juan (UNSJ), Argentina in 2009, and the Ph. D. degree in mathematics from the National University of Córdoba (UNC), Argentina in 2014. Currently, he is an assistant researcher of the Argentinean National Council for Scientific Research (CONICET), and Professor in the Institute of Automatics, Argentina. His research interests include probabilistic and statistical implementations of robotics, such as simultaneous localization and mapping (SLAM) algorithms.

Matias Monllor received the B.S. degree in Electronics Engineering from the Facultad de Ingeniería, UNSJ, San Juan, Argentina, in 2014. He is currently a Ph.D. student at the Instituto de Automática, UNSJ-CONICET. His current research interests include human-robot interaction and control.

Flavio Roberti was born in Buenos Aires, Argentina in 1978. He graduated in Engineering from the National University of San Juan, Argentina in 2004, and obtained a Ph.D. degree in Control Systems Engineering from the National University of San Juan, Argentina in 2009. He is currently an Assistant Professor at the National University of San Juan and a Researcher of the National Council for Scientific and Technical Research (CONICET, Argentina). His research interests are robotics, wheeled mobile robots, mobile manipulators, visual servoing and passivity based visual control.

Ricardo Carelli was born in San Juan, Argentina. He graduated in Engineering from the National University of San Juan, Argentina, and obtained a Ph.D. degree in Electrical Engineering from the National University of Mexico (UNAM). He is a full professor at the National University of San Juan and a senior researcher of the National Council for Scientific and Technical Research (CONICET, Argentina). Prof. Carelli is the Director of the Instituto de Automática, National University of San Juan (Argentina). His research interests are in Robotics, Manufacturing Systems, Adaptive Control and Artificial Intelligence Applied to Automatic Control. Prof. Carelli is a senior member of IEEE and a member of AADECA-IFAC.

Creative Commons License This is an open-access article distributed under the terms of the Creative Commons Attribution License