Trust in Human-Robot Teams

Figure: Two sets of trajectories for the two-agent trust problem. In one case (solid blue lines), the trajectories converge and consensus is achieved; in the other (dashed red lines), finite escape time is exhibited, and both trajectories blow up in finite time.

Trust-based coordination arises when a team of robots, and possibly humans, must work together to achieve a goal, but each robot (or human) may not trust all of the other team members. If a robot does not trust one of its neighbors, it may not want to weight the information obtained from that neighbor heavily, so this trust should play a role in the dynamics of the agents. In addition, the trust levels should depend on whether a neighbor is doing what it is expected to do. For example, when the agents are trying to meet at a common location in the 2D plane, each agent would expect its neighboring agents to be moving towards it.
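As a minimal sketch of what such trust-weighted dynamics might look like, each agent can weight the usual consensus term by its current trust in the corresponding neighbor. The update law and names below are illustrative assumptions consistent with this description, not the exact model from this work:

    import numpy as np

    def trust_weighted_step(x, T, dt=0.01):
        # x: (n, d) array of agent positions in d-dimensional space.
        # T: (n, n) trust matrix, where T[i, j] is agent i's current
        #    trust in neighbor j; negative trust pushes i away from j.
        n = x.shape[0]
        dx = np.zeros_like(x)
        for i in range(n):
            for j in range(n):
                if i != j:
                    # The standard consensus term (x_j - x_i), scaled
                    # by how much agent i trusts neighbor j.
                    dx[i] += T[i, j] * (x[j] - x[i])
        return x + dt * dx  # one forward-Euler step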

This work introduces a trust model that couples the change in performance of a team of agents to how the agents perceive (or trust) each other. This combination of social dynamics and physical update laws not only changes the performance of the system, but also has the potential to make the performance deteriorate dramatically. In fact, in the two-agent case, the system is shown to exhibit finite escape time through an invariance result that also carries over to larger systems and more elaborate trust models. The invariance result states that an increase in performance must be accompanied by an increase in the total trust in the network (and, conversely, a deterioration in performance by a decrease in total trust).
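To make the invariance result concrete, consider one plausible instantiation in the two-agent case; the specific update laws below are an illustrative assumption consistent with the description above, not necessarily the paper's exact model. Let $e = x_1 - x_2$ be the disagreement, measure performance by $V = \tfrac{1}{2}\|e\|^2$ (smaller is better), and let each agent's trust grow exactly when performance improves:

$$\dot{x}_1 = T_1 (x_2 - x_1), \qquad \dot{x}_2 = T_2 (x_1 - x_2), \qquad \dot{T}_1 = \dot{T}_2 = -\dot{V}.$$

Then $\dot{e} = -(T_1 + T_2)\,e$, so $\dot{V} = -(T_1 + T_2)\|e\|^2$, and writing $S = T_1 + T_2$ gives $\dot{S} = -2\dot{V}$; that is, the quantity $S + 2V$ is invariant along trajectories. Performance can only improve ($\dot{V} < 0$) when the total trust increases ($\dot{S} > 0$), and vice versa.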

We can show that, in the two-agent case, the agents' states may actually blow up (go to infinity) in finite time if the sum of the agents' initial trust values is negative. This is shown in the figure above: the two trajectories that approach zero depict a case in which the sum of the initial trusts is positive, while the two trajectories that go off to infinity depict a case in which the sum of the initial trust values is negative.
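A short simulation of the illustrative two-agent model sketched above reproduces both regimes; the initial trust values here are made up for demonstration:

    import numpy as np

    def simulate_two_agents(x1, x2, T1, T2, dt=1e-4, t_max=5.0):
        # Forward-Euler simulation of the illustrative two-agent
        # trust model; states are scalars for simplicity.
        history = [(0.0, x1, x2)]
        t = 0.0
        while t < t_max:
            e = x1 - x2                 # disagreement between the agents
            S = T1 + T2                 # total trust in the network
            dV = -S * e**2              # dV/dt, with V = e^2 / 2
            dx1 = T1 * (x2 - x1)        # neighbor's pull, weighted by trust
            dx2 = T2 * (x1 - x2)
            x1 += dt * dx1
            x2 += dt * dx2
            T1 += dt * (-dV)            # trust rises when performance improves
            T2 += dt * (-dV)
            t += dt
            history.append((t, x1, x2))
            if abs(x1) > 1e6 or abs(x2) > 1e6:
                break                   # finite escape time: states blew up
        return history

    # Sum of initial trusts positive: trajectories converge (consensus).
    consensus = simulate_two_agents(1.0, -1.0, T1=0.6, T2=0.5)
    # Sum of initial trusts negative: finite escape time (blow-up).
    escape = simulate_two_agents(1.0, -1.0, T1=-0.6, T2=0.1)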

This model also connects to the belief- and group-polarization phenomena encountered in group processes driven by social interaction dynamics, and it allows us to start thinking about how humans should interact with teams of robots and what role trust plays in this interaction.
