A. Vision-Based Positioning System: The vision-based positioning system [6–10] is the cornerstone of this multi-agent testbed, providing precise agent localization. It consists of overhead cameras positioned to cover the entire operational area. These cameras detect AprilTags affixed to the top of each ground robot. AprilTags are fiducial markers with high-contrast patterns that are easily discernible by the system's cameras; they serve as reference points, allowing the system to identify and accurately track every ground robot in the environment. Together, the overhead camera placement and the AprilTag markers form a reliable and versatile system for real-time agent tracking and localization.
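As a minimal sketch of this detection step, the following assumes the open-source pupil_apriltags Python bindings together with OpenCV; the camera index and the tag36h11 family are illustrative choices, not details given in the text:

```python
import cv2
from pupil_apriltags import Detector

# One detector per overhead camera; tag36h11 is a common AprilTag family.
detector = Detector(families="tag36h11")

cap = cv2.VideoCapture(0)  # camera index is a placeholder
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for det in detector.detect(gray):
        # det.tag_id identifies a robot; det.center is its pixel position.
        print(f"robot {det.tag_id} at pixel {det.center}")
cap.release()
```

Because each robot carries a unique tag ID, a single frame yields both the identity and the image-plane position of every agent in view.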
B. Hardware Setup: The hardware setup is a critical component of the system's architecture. It integrates the cameras with Raspberry Pi single-board computers, which process the visual data captured by the cameras. Specifically, they extract position information from the detected AprilTags, yielding the coordinates of each ground robot within the operational area. Once processed, the position information is transmitted to the ground agents over a Wi-Fi network. This hardware setup ensures that the system operates in a synchronized manner, providing agents with accurate and up-to-date localization information.
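The text does not specify how pixel detections are converted to operational-area coordinates or which transport is used over Wi-Fi, so the sketch below makes two assumptions: a pre-calibrated pixel-to-floor homography (loaded from a hypothetical floor_homography.npy file) and a simple JSON-over-UDP message to an example agent address:

```python
import json
import socket

import numpy as np

# 3x3 homography from image pixels to floor-plane coordinates (metres),
# obtained from an offline calibration step -- file name is a placeholder.
H = np.load("floor_homography.npy")

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
AGENT_ADDR = ("192.168.1.50", 5005)  # example Wi-Fi address of a ground agent

def pixel_to_world(u, v):
    """Project a tag centre from pixel to floor coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[0] / p[2], p[1] / p[2]

def send_position(tag_id, u, v):
    """Send one robot's position to an agent as a small JSON datagram."""
    x, y = pixel_to_world(u, v)
    sock.sendto(json.dumps({"id": tag_id, "x": x, "y": y}).encode(), AGENT_ADDR)
```

In the actual system, this transmission is handled through ROS (Section D); the socket version above only illustrates the data flow from camera to agent.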
C. Ground Agents: The ground agents, exemplified by NVIDIA's JetBot, are built around the Jetson Nano embedded computer. Figure 1 shows the assembled JetBot with its Jetson Nano. The Jetson Nano is the brain of each ground agent, responsible for interpreting and acting on the position information received from the vision-based positioning system. With this localization data, the ground agents navigate and operate within the environment with a high degree of accuracy, adjusting their movements, paths, and actions based on real-time position updates so that they can respond to changing circumstances and execute tasks with precision. Mobility is provided by the agents' motors, which respond to control signals generated by the system; the JetBot's ability to move in accordance with these signals makes it a versatile and dynamic component of the multi-agent system.
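To make the control-signal step concrete, here is a small sketch using NVIDIA's jetbot Python package, which exposes differential-drive motor commands via Robot.set_motors(); the proportional heading controller, its gains, and the pose convention are illustrative assumptions rather than the testbed's actual controller:

```python
import math
from jetbot import Robot  # NVIDIA's JetBot Python package

robot = Robot()

def drive_towards(pose, goal, speed=0.3, k_turn=0.5):
    """One differential-drive step from pose (x, y, theta) towards goal (x, y).

    speed and k_turn are illustrative values, not tuned parameters.
    """
    heading = math.atan2(goal[1] - pose[1], goal[0] - pose[0])
    # Wrap the heading error into (-pi, pi].
    err = math.atan2(math.sin(heading - pose[2]), math.cos(heading - pose[2]))
    robot.set_motors(speed - k_turn * err, speed + k_turn * err)

# Example: steer from the origin (facing +x) towards the point (1, 1).
drive_towards((0.0, 0.0, 0.0), (1.0, 1.0))
robot.stop()
```

Each new pose update from the positioning system would trigger another call to drive_towards, closing the perception-to-actuation loop.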
D. Communication: The communication framework within the system is established using the Robotic Operating System (ROS). ROS acts as the middleware that facilitates real-time message exchange and coordination between the various system components, including the cameras, Raspberry Pi microcontrollers, and ground agents. This communication framework is vital for the seamless operation of the multi-agent system. It enables the sharing of position information, control commands, and other relevant data between the different entities, ensuring that all agents are continuously aware of their positions and the positions of their peers. ROS provides a robust and extensible platform for inter-agent communication and coordination, allowing for complex multi-agent behaviors and applications to be executed effectively and in real-time.
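As a minimal sketch of this exchange, assuming ROS 1 (rospy) and an illustrative per-robot topic naming convention, a ground agent can subscribe to its own pose stream published by the vision pipeline:

```python
#!/usr/bin/env python
import rospy
from geometry_msgs.msg import PoseStamped

# Topic name is an assumed convention: one pose topic per tagged robot.
def on_pose(msg):
    # Callback fires on every pose update from the vision pipeline.
    rospy.loginfo("x=%.2f y=%.2f", msg.pose.position.x, msg.pose.position.y)

rospy.init_node("jetbot_pose_listener")
rospy.Subscriber("/robot_0/pose", PoseStamped, on_pose)
rospy.spin()
```

The Raspberry Pi side would publish PoseStamped messages on the same topics, and additional topics for control commands follow the same publish/subscribe pattern, which is what lets the framework scale to many agents without point-to-point wiring.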