This study addresses the challenges and research gaps in traffic monitoring and control, as well as traffic simulation, by proposing an integrated approach that utilizes Visible Light Communication (VLC) to optimize traffic signals and vehicle trajectories at urban intersections. The feasibility of implementing Vehicle-to-Vehicle (V2V) VLC in adaptive traffic control systems is examined through experimental results, and the impact of environmental conditions on real-world implementation is discussed. The system uses modulated light to transmit information between connected vehicles (CVs) and infrastructure, such as street lamps and traffic signals. Cooperative CVs exchange position and speed information via V2V communication within the control zone, enabling flexibility and adaptation to different traffic movements during signal phases. A Reinforcement Learning (RL) agent, coupled with the Simulation of Urban Mobility (SUMO) agent-based simulator, is employed to find the best policies for controlling the traffic lights. The simulation scenario was adapted from a real-world environment in Lisbon and accounts for surrounding roads that influence traffic flow at two connected intersections. A deep reinforcement learning algorithm dynamically controls traffic flows, minimizing bottlenecks during rush hour through V2V and Vehicle-to-Infrastructure (V2I) communications. Queue/request/response interactions are facilitated using VLC mechanisms and relative pose concepts. The system is integrated into an edge-cloud architecture, enabling daily analysis of the collected information in upper layers for a fast and adaptive response to local traffic conditions. Comparative analysis reveals the benefits of the proposed approach in terms of throughput, delay, and vehicle stops, uncovering optimal patterns for signal timing and trajectory optimization. Separate training and test sets are used to monitor and evaluate the model.
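To make the RL-SUMO coupling concrete, the sketch below shows a minimal control loop in which an agent interacts with SUMO through its TraCI Python API, observing per-lane queue lengths and deciding whether to keep or advance the current signal phase. It uses a simple tabular Q-learning update as a stand-in for the deep reinforcement learning algorithm described above; the configuration file, traffic-light ID, lane IDs, and hyperparameters are illustrative placeholders, not values from the paper.

```python
# Minimal sketch: a tabular Q-learning loop coupled to SUMO via TraCI.
# All scenario identifiers below are hypothetical placeholders.
import random
from collections import defaultdict

import traci  # SUMO's Python API (requires SUMO_HOME to be configured)

SUMO_CMD = ["sumo", "-c", "intersection.sumocfg"]            # hypothetical config file
TLS_ID = "tls_0"                                             # hypothetical traffic-light ID
INCOMING_LANES = ["north_0", "south_0", "east_0", "west_0"]  # hypothetical lane IDs

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
q_table = defaultdict(lambda: [0.0, 0.0])  # actions: 0 = keep phase, 1 = next phase


def get_state():
    # Discretized queue length (halting vehicles) per incoming lane plus the current phase.
    queues = tuple(min(traci.lane.getLastStepHaltingNumber(l) // 5, 3) for l in INCOMING_LANES)
    return queues + (traci.trafficlight.getPhase(TLS_ID),)


def get_reward():
    # Reward: negative total number of stopped vehicles (shorter queues are better).
    return -sum(traci.lane.getLastStepHaltingNumber(l) for l in INCOMING_LANES)


def run_episode(steps=3600):
    traci.start(SUMO_CMD)
    state = get_state()
    for _ in range(steps):
        # Epsilon-greedy action selection.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = max(range(2), key=lambda a: q_table[state][a])

        if action == 1:  # advance to the next signal phase
            n_phases = len(traci.trafficlight.getAllProgramLogics(TLS_ID)[0].phases)
            current = traci.trafficlight.getPhase(TLS_ID)
            traci.trafficlight.setPhase(TLS_ID, (current + 1) % n_phases)

        traci.simulationStep()
        next_state, reward = get_state(), get_reward()

        # Standard Q-learning update.
        q_table[state][action] += ALPHA * (
            reward + GAMMA * max(q_table[next_state]) - q_table[state][action]
        )
        state = next_state
    traci.close()


if __name__ == "__main__":
    run_episode()
```

In a fuller setup along the lines described in the abstract, the queue observations would come from V2V/V2I VLC messages rather than directly from the simulator, and the tabular policy would be replaced by a deep network trained over separate training and test scenarios.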