In this research, multi-agent networks are modeled via their interaction graphs, where the nodes represent the agents and the edges denote direct interactions between the corresponding agents. Using this representation, the goal is to design local reconfiguration schemes for decentralized formation of interaction graphs that are robust to structural (e.g., node or edge failures) and functional (e.g., noisy interactions) perturbations. More specifically, we investigate how to obtain a well-connected interaction graph from any connected graph without a significant change in the number of edges.
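As a rough illustration of what "well-connected" means in this context, the algebraic connectivity (Fiedler value) of the interaction graph is one standard measure of robustness to node and edge failures. The sketch below is only illustrative and is not the proposed reconfiguration scheme itself.

```python
import numpy as np

def algebraic_connectivity(adj):
    """Second-smallest eigenvalue of the graph Laplacian (Fiedler value).

    Larger values indicate a better-connected interaction graph, one that
    is harder to disconnect by removing nodes or edges.
    """
    adj = np.asarray(adj, dtype=float)
    laplacian = np.diag(adj.sum(axis=1)) - adj
    eigenvalues = np.sort(np.linalg.eigvalsh(laplacian))
    return eigenvalues[1]

# A path graph on 4 nodes (connected, but poorly so)...
path = np.array([[0, 1, 0, 0],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [0, 0, 1, 0]])
# ...versus a cycle on 4 nodes: one extra edge, noticeably more robust.
cycle = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]])
assert algebraic_connectivity(cycle) > algebraic_connectivity(path)
```

Adding a single edge (turning the path into a cycle) raises the Fiedler value from roughly 0.59 to 2, which is the kind of "well-connectedness gain without a significant change in the number of edges" the reconfiguration schemes aim for.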
Investigators:
A. Yasin Yazıcıoğlu
Magnus Egerstedt
Jeff S. Shamma
Related Publications:
Consider the convoy protection scenario depicted above, and in particular the side objective of having a UAV team fly over and provide visual coverage of the surrounding area. It is of interest that the team of UAVs cover as much of the surveillance area as possible as they "sweep" across. Since the agents have a limited sensor footprint, they need rich movement patterns in order to increase their coverage. However, these patterns must be coordinated to avoid collisions, as the agents share the airspace. We approach the problem of having these robots mix, i.e., interact with each other consistently in the airspace, by borrowing concepts from algebraic topology, namely from the braid group.
As depicted in the figure, if two agents beginning at two corners of a square need to move across to the remaining corners, they can accomplish this either by moving straight across or by having their paths crisscross. These two movement patterns can be associated with elements of the braid group. Using this idea, we utilize the generators of the braid group together with the concatenation operation to form symbolic strings of movement patterns. We then interpret these symbolic strings and map them to braid controllers. The braid controllers take the multi-agent system dynamics into consideration to generate control laws so that the agents achieve these patterns while avoiding collisions.
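The symbolic side of this construction can be sketched by tracking only the permutation image of a braid word, i.e., which position each agent ends up in after a string of generators. This toy code deliberately ignores the over/under crossing information and the braid controllers themselves; all names are illustrative.

```python
def braid_generator(i, n):
    """Permutation for braid generator sigma_i on n strands:
    swap adjacent strands i and i+1 (0-indexed); all others go straight."""
    perm = list(range(n))
    perm[i], perm[i + 1] = perm[i + 1], perm[i]
    return perm

def apply_braid_word(word, n):
    """Concatenate generators left to right, tracking which strand
    (agent) occupies each position after the whole movement pattern."""
    strands = list(range(n))
    for i in word:
        perm = braid_generator(i, n)
        strands = [strands[p] for p in perm]
    return strands

# Two agents on a square: sigma_0 crisscrosses their paths,
# while the empty word sends them straight across.
assert apply_braid_word([0], 2) == [1, 0]
assert apply_braid_word([], 2) == [0, 1]

# The braid relation sigma_0 sigma_1 sigma_0 = sigma_1 sigma_0 sigma_1
# holds at the permutation level.
assert apply_braid_word([0, 1, 0], 3) == apply_braid_word([1, 0, 1], 3)
```

In the actual system, each generator in such a string is then mapped to a braid controller that realizes the crossing pattern in continuous space while respecting the agents' dynamics.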
The simulation result below demonstrates one of our proposed algorithms for a particular class of mobile agents performing a randomly generated braid. This seven-agent system of unicycle robots performs a braid of length 10. It can be observed that all the agents achieve the secondary objective of arriving at the intermediate waypoints (the bubbles on the paths) at the same time, and that collisions are avoided throughout their excursion.
Sim.I.am provided the core of the experience for students enrolled in ECE4555. As part of an inverted classroom experience, students would spend the beginning of the week learning a control theory concept through the MOOC course, Control of Mobile Robots. For example, students would learn the mathematical formulation of a go-to-goal controller, which can drive a differential-drive robot from point A to point B. The first step required students to implement the go-to-goal controller in the simulator. While the simulator is a somewhat idealized version of the real world, it provides the students with a sufficient tool to test whether their controllers are behaving correctly. If a controller did not work in the simulator, it almost assuredly would not work on the real robot. The second step required students to deploy their go-to-goal controller on a real mobile robot. Rather than port their controller from MATLAB to C, the simulator provides a network interface (TCP/IP) that simply links the inputs/outputs from the student’s controllers to the real robot instead of the simulated robot. This approach allows students to focus their attention on adapting their control design to the real robot, rather than worry about porting their controller to C.
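A minimal go-to-goal controller of the kind students implement first in the simulator might look like the following sketch. The unicycle model, gains, step size, and stopping threshold are illustrative choices, not the course's exact code.

```python
import math

def go_to_goal(x, y, theta, gx, gy, v=0.1, k_heading=2.0):
    """Proportional go-to-goal controller for a unicycle-model robot.

    Steers the heading toward the goal at a constant forward speed;
    returns the control pair (v, omega). Gains are illustrative.
    """
    desired = math.atan2(gy - y, gx - x)
    # Wrap the heading error to (-pi, pi] so the robot turns the short way.
    error = math.atan2(math.sin(desired - theta), math.cos(desired - theta))
    return v, k_heading * error

def unicycle_step(state, v, omega, dt=0.05):
    """Euler-integrate the unicycle dynamics for one time step."""
    x, y, theta = state
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + omega * dt)

# Drive from point A (the origin) to point B.
state = (0.0, 0.0, 0.0)
goal = (1.0, 1.0)
for _ in range(600):
    v, omega = go_to_goal(*state, *goal)
    state = unicycle_step(state, v, omega)
    if math.hypot(goal[0] - state[0], goal[1] - state[1]) < 0.05:
        break
assert math.hypot(goal[0] - state[0], goal[1] - state[1]) < 0.05
```

Because the simulator exposes the same input/output signature over its TCP/IP interface, a controller written this way can be pointed at the real robot without being rewritten in C.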
Investigators
Jean-Pierre de la Croix
Magnus Egerstedt
As a first stab, we consider the two-agent rendezvous problem, where one agent (the target) is equipped with no sensors and is stationary, while the other is equipped with a set-valued sensor.
The measurements returned by the set-valued sensor are measurable sets that are sampled from a probability distribution and contain the location of the target agent. A set-valued measurement can be viewed as a very coarse approximation of the target's location. This approximation can be improved by making multiple set-valued measurements and taking their intersection (represented by the blue area in the figure). The amount of uncertainty in the agent's measurements is then given by the volume of the measured set. We devise motion strategies that drive the uncertainty (i.e., the volume of the intersection of the measurements) to 0 as the number of measurements tends to infinity. The motion strategy was implemented on a Khepera III robot in a lab environment.
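The intersection-based uncertainty reduction can be sketched in one dimension. The interval measurement model and noise parameters below are illustrative assumptions, not the sensor model from the paper.

```python
import random

def measure(target, noise=0.5):
    """A set-valued measurement: a random interval guaranteed to
    contain the (scalar) target position. Widths are illustrative."""
    lo = target - random.uniform(0.0, noise)
    hi = target + random.uniform(0.0, noise)
    return lo, hi

def fuse(estimate, measurement):
    """Intersect the running set estimate with a new measurement.
    The target stays inside, and the set can only shrink."""
    return max(estimate[0], measurement[0]), min(estimate[1], measurement[1])

random.seed(0)
target = 3.0
estimate = (float("-inf"), float("inf"))
for _ in range(50):
    estimate = fuse(estimate, measure(target))

volume = estimate[1] - estimate[0]   # uncertainty = length of the set
assert estimate[0] <= target <= estimate[1]
assert volume < 0.1  # the uncertainty shrinks toward 0 as measurements accumulate
```

In the actual problem the sets live in the plane and the measurement distribution depends on the sensing agent's position, which is what the motion strategies exploit.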
Thiagarajan Ramachandran
Magnus Egerstedt
T. Ramachandran and M. Egerstedt, Pair-wise Agreement Using Set-Valued Sensors. IFAC Workshop on Estimation and Control of Networked Systems, Sept. 2013.
We use this idea of agents covering a density function to control a multi-agent system. More specifically, we specify a time-varying density function as an input to the multi-agent system. This time-varying density function can represent some time-evolving phenomenon: it may be the probability of a lost person being present in a region over time, or a description of an oil spill over time. Or it may simply represent where we want the agents to be at each time. The agents are guided by the time-varying density function to be where we want them at each point in time. To enable this idea, we developed an optimal coverage algorithm for general time-varying density functions. Other algorithms for optimal coverage with time-varying density functions existed before, but they relied on assumptions that do not hold in general. We proposed an algorithm that guarantees optimal coverage of the density function provided that the agents start from a favorable initial configuration. Such a configuration can easily be reached by holding the density function constant in the beginning and running well-known algorithms for time-invariant density functions (such as Lloyd's algorithm). Simulations and robot experiments were conducted to verify the approach.
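Lloyd's algorithm on a discretized domain, which could be used to reach such a favorable initial configuration for a momentarily frozen density, can be sketched as follows. The grid, the Gaussian density, and all parameters are illustrative.

```python
import numpy as np

def lloyd_step(agents, points, weights):
    """One Lloyd iteration: assign sample points to the nearest agent,
    then move each agent to the weighted centroid of its cell."""
    d = np.linalg.norm(points[:, None, :] - agents[None, :, :], axis=2)
    nearest = d.argmin(axis=1)
    new_agents = agents.copy()
    for i in range(len(agents)):
        mask = nearest == i
        if weights[mask].sum() > 0:   # leave an agent with an empty cell in place
            new_agents[i] = np.average(points[mask], axis=0,
                                       weights=weights[mask])
    return new_agents

# Discretized unit square with a Gaussian density peaked at (0.7, 0.7).
rng = np.random.default_rng(1)
xs, ys = np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30))
points = np.column_stack([xs.ravel(), ys.ravel()])
density = np.exp(-20 * ((points[:, 0] - 0.7) ** 2
                        + (points[:, 1] - 0.7) ** 2))

agents = rng.random((4, 2)) * 0.3    # start clustered in one corner
for _ in range(30):
    agents = lloyd_step(agents, points, density)

# The agents concentrate around the density peak.
assert np.all(np.linalg.norm(agents - [0.7, 0.7], axis=1) < 0.45)
```

For a time-varying density, the algorithm in this work additionally accounts for the motion of the centroids as the density evolves, which plain Lloyd iterations do not.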
Investigators:
Sung G. Lee
Magnus Egerstedt
Publications:
S.G. Lee and M. Egerstedt, Controlled Coverage Using Time-Varying Density Functions. IFAC Workshop on Estimation and Control of Networked Systems, Koblenz, Germany, Sept. 2013.
Static networks (consisting of agents with no mobility) are typically deployed to monitor critical areas for long periods of time, and comprise a large number of low-cost, low-power devices with limited sensing, processing, and communication capabilities. Owing to the low quality of the constituent devices and the harshness of the environments in which they are deployed, the batteries of these devices deteriorate, and as a result their available power decays with time. However, existing sensor scheduling schemes inherently assume that the performance of the sensing devices remains constant throughout the lifetime of the network, and this assumption is not always true.
In this work, we present power-aware scheduling schemes for sensor networks consisting of devices whose sensing range is a function of the transmitted power, with the aim of maintaining a desired event detection probability throughout the lifetime of the network. The lifetime of a network is the maximum time beyond which the desired performance cannot be guaranteed. The footprints of the sensors comprising these networks are dynamic in nature, because variations in available power have a direct impact on the performance of the sensing devices. We therefore select the area of a sensor's footprint as a performance metric and use an explicit relationship between footprint area and available power to quantify the effect of power variations on sensing performance.
To compensate for this power-induced variation in sensor performance, we propose power-aware scheduling schemes in which each sensor uses its available power to determine its performance metric at each decision time and then updates its control parameter accordingly, so that the desired event detection probability is maintained while consuming minimum power. The impact of variation in available power on sensor performance was a missing link in the existing literature and is addressed in this work for the first time.
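The dependence of the footprint area on available power can be illustrated with a toy propagation model. The path-loss exponent, the unit detection threshold, and the function names below are assumptions for illustration, not the model developed in this work.

```python
import math

def sensing_radius(power, alpha=2.0, c=1.0):
    """Illustrative power-to-range model: the largest radius r at which
    the received signal c * power / r**alpha stays above a unit threshold."""
    return (c * power) ** (1.0 / alpha)

def required_power(target_area, alpha=2.0, c=1.0):
    """Invert the model: the minimum power whose circular footprint
    area (pi * r**2) meets the desired coverage metric."""
    r = math.sqrt(target_area / math.pi)
    return r ** alpha / c

# As the battery decays, the footprint shrinks unless the sensor
# updates its control parameter at the next decision time.
full, decayed = 4.0, 2.0
area = math.pi * sensing_radius(full) ** 2
assert sensing_radius(decayed) < sensing_radius(full)

# Power needed to restore the original footprint area:
assert math.isclose(required_power(area), full)
```

This inversion is the flavor of update performed at each decision time: measure the achievable footprint from the remaining power and adjust so the desired detection probability is still met at minimum cost.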
This project is related to a more traditional aspect of power-awareness, i.e., efficient utilization of available energy resources to maximize system lifetime. Most existing schemes in the literature are designed to maintain complete coverage throughout the lifetime of a network by ensuring that switching a particular sensor off does not deteriorate the coverage profile of the network. Maintaining complete coverage is important, especially for time-critical events that must be detected immediately. However, complete coverage is typically achieved at the expense of considerable control and communication overhead, which makes this objective overly restrictive for applications that can tolerate some delay in the detection of an event. Thus, for such applications, power consumption can be reduced by relaxing the desired performance criterion, which in this case is coverage. We exploit this tradeoff between power consumption and the desired performance criterion and propose a probabilistic switching scheme that ensures a required level of partial coverage throughout the lifetime of the network while minimizing the overhead involved in making switching decisions.
In this project, a probabilistic power-efficient sensor scheduling scheme is proposed that is based on the concept of a hard-core point process from stochastic geometry to minimize communication among neighboring sensors when making switching decisions. Hard-core point processes are inhibition processes that maintain a certain minimum distance, called the inhibition distance, among the constituent points, and in this way limit the number of redundant sensors covering any area. The information communicated between sensors for coordination consists only of randomly generated numbers, which results in minimal communication overhead. To efficiently design and analyze this scheme, we developed an explicit relationship between the inhibition distance and the detection probability through extensive Monte Carlo simulations, and the developed model is shown to achieve the desired performance with an average error of less than 1%. The proposed scheme can extend the lifetime of a sensor network by 40–70% compared to a random switching scheme.
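A Matérn type-II hard-core thinning, the standard stochastic-geometry construction behind such schemes, can be sketched as follows. Note that sensors need to exchange only their random marks; the sensor layout and inhibition distance here are illustrative, and the sketch is centralized for clarity rather than the distributed protocol itself.

```python
import math
import random

def hardcore_schedule(sensors, inhibition_distance, seed=0):
    """Matern type-II hard-core thinning: each sensor draws a random mark
    and stays ON only if no neighbor within the inhibition distance drew
    a smaller mark. Only the marks need to be communicated."""
    rng = random.Random(seed)
    marks = [rng.random() for _ in sensors]
    active = []
    for i, (xi, yi) in enumerate(sensors):
        wins = all(
            marks[i] < marks[j]
            for j, (xj, yj) in enumerate(sensors)
            if j != i and math.hypot(xi - xj, yi - yj) < inhibition_distance
        )
        if wins:
            active.append(i)
    return active

rng = random.Random(42)
sensors = [(rng.random(), rng.random()) for _ in range(200)]
active = hardcore_schedule(sensors, inhibition_distance=0.1)

# Every pair of active sensors respects the inhibition distance,
# so redundant sensors covering the same area are switched off.
for a in active:
    for b in active:
        if a != b:
            assert math.hypot(sensors[a][0] - sensors[b][0],
                              sensors[a][1] - sensors[b][1]) >= 0.1
assert 0 < len(active) < len(sensors)
```

If two sensors within the inhibition distance were both retained, each would need the smaller mark, a contradiction; this is why the minimum-distance guarantee holds with no coordination beyond exchanging the marks.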