
Overview

Traffic data is an important input for many applications. Urban planners use it to allocate resources efficiently when planning new or improved infrastructure. Transportation engineers use it to set traffic light timings and to improve the safety of road users with traffic calming and other measures, evaluated through before-and-after studies.

The data can also be used to coordinate control of traffic lights across a road network. Additionally, third parties can use it, for example to value real estate or to plan routes that avoid congestion.

This data has come from many different sources, and new technologies have been developed and popularized over time. Before traffic lights existed, police officers monitored traffic visually and directed it as efficiently as they could, and manual counts were used for traffic studies. Eventually, inductive loops embedded in the pavement or pneumatic tubes laid across the road were used to measure traffic flows, but they cannot provide data on turning movements and are prone to error. Telephone surveys are still used, at great expense, to estimate origin-destination flows for large cities, while cell network data is used to estimate travel times and flows at the macro scale. More recently, advances in computer vision have allowed video camera data to be used for turning movement counts and vehicle classification. However, cameras can suffer from poor accuracy in harsh weather and in low or high lighting conditions, and can be fooled by reflections from wet surfaces, buildings, and trucks. Processing video also has large computational requirements, which forces everything beyond basic presence detection and flow counting to be performed offline rather than in real time.

Despite these issues, cameras have become the standard source of detailed micro-scale data for traffic studies. This data includes turning movement counts, classifications, and pedestrian and bicycle counts; recently, basic safety analysis such as near-miss detection has also become commercially available. Still, with reduced performance in poor lighting and weather and the inability to perform all of these tasks in real time, there is room in the market for a new technology.

In the past decade, lidar technology has advanced significantly in range, accuracy, and resolution, and has fallen dramatically in price. These advances have come largely from investment in autonomous vehicles, which need to perform reliably in all conditions. Lidar is desirable for traffic monitoring because its performance is unaffected by lighting conditions and is superior to cameras during precipitation.

Bluecity has developed a solution that uses lidar for monitoring traffic networks. Unlike cameras, our solution works in any lighting and weather conditions. Thanks to advances we have made, our machine vision algorithms run 10x faster than the state of the art, which allows us to process data on an inexpensive edge computer, eliminating the need for a high-bandwidth connection and reducing latency. Our solution provides real-time turning movement counts, classification of all road users, near-miss analysis, speed measurement, and more to come.

Case Study: Comparison Between Accuracy of Bluecity Lidar Sensor and Camera-based Systems

Camera-based traffic detection has become a viable data collection tool for traffic studies. By developing camera-based systems that provide accurate turning movement count data, leading suppliers have replaced the need for manual, labour-intensive counting.

In November 2020, a traffic study was performed in Repentigny, Quebec, to investigate the impact of a potential project to replace a bridge upstream of a highway interchange. For this study, turning movements at the intersection needed to be collected over two busy days. Two cameras, installed on opposite corners, were required to cover the medium-sized intersection. On Friday, November 20, and Monday, November 23, 10 hours of video were recorded covering the morning, noon, and evening peak hours.

Bluecity’s IndiGO 3D solution was also chosen to collect count data. At the same intersection, a single lidar sensor was installed that could cover the entire intersection, allowing a direct comparison to be made.

Figure 1: Comparison of Total of All Turning Movement Counts, Nov. 20, 2020

After we submitted our results to the engineering firm performing the study, camera-based counts were also provided. Figures 1 and 2 show a blind comparison between the two sets of data. Their similarity suggests that both systems performed very well and consistently.

Figure 2: Comparison of Total of All Turning Movement Counts, Nov. 20, 2020

Over the first day of counting, the camera-based system recorded a total of 23,709 vehicles and Bluecity a total of 23,685, a difference of only 0.10%. The sum of the absolute differences between the two technologies across all 15-minute intervals was 442, just 1.86%.

On the second day, the camera-based system counted 19,180, while Bluecity counted 19,309, a 0.67% difference. The total absolute difference across intervals was 1.43%.
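The metrics above can be reproduced from the two sets of per-interval counts. The sketch below uses made-up 15-minute interval values rather than the actual study data, and it normalizes the differences against the camera total, an assumption on our part; either total could serve as the baseline given how close they are.

```python
# Minimal sketch of the comparison metrics quoted above.
# The interval lists are hypothetical placeholders; the real study used
# counts for every turning movement in every 15-minute interval.

def compare_counts(camera_intervals, lidar_intervals):
    """Return both totals, the total percentage difference, and the summed
    per-interval absolute difference (count and percentage)."""
    camera_total = sum(camera_intervals)
    lidar_total = sum(lidar_intervals)
    total_diff_pct = abs(lidar_total - camera_total) / camera_total * 100
    abs_interval_diff = sum(
        abs(c - l) for c, l in zip(camera_intervals, lidar_intervals)
    )
    abs_diff_pct = abs_interval_diff / camera_total * 100
    return camera_total, lidar_total, total_diff_pct, abs_interval_diff, abs_diff_pct

# Example with hypothetical interval counts:
camera = [610, 585, 602, 597]
lidar = [605, 590, 600, 601]
print(compare_counts(camera, lidar))
```

Note that the per-interval absolute differences do not cancel each other out the way the totals can, which is why the 1.86% and 1.43% figures are larger than the 0.10% and 0.67% differences in the totals.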

Although numbers this close suggest that both systems are very accurate, ground truth data is needed to confirm their accuracy and determine which system performs better. To do this, the hour with the highest counts was chosen: 4 PM to 5 PM on November 20. Since this was done before we had access to the video data, the lidar data from that hour was replayed and each turning movement was counted individually by visual inspection.

Over that hour, 3,325 vehicles were counted and taken as the ground truth. The camera-based system counted 3,290, an error of -1.05%; Bluecity counted 3,331, an error of 0.18%. The sum of the absolute errors for each turning movement in each 15-minute interval was 129 (3.88%) for the camera-based system and 76 (2.29%) for Bluecity. Across the 15-minute intervals of each turning movement, Bluecity was more accurate more often and had less variability in its error. The raw data and comparison are shown in Figure 3.
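The ground-truth error figures follow the same pattern, sketched below with hypothetical per-bin counts (the real calculation used one bin per turning movement per 15-minute interval, a breakdown not reproduced here).

```python
# Minimal sketch of the ground-truth error metrics quoted above.
# Bin counts are hypothetical placeholders.

def error_vs_ground_truth(system_bins, truth_bins):
    """Return the signed total error (%) and the summed absolute per-bin
    error, as a count and as a percentage of the ground-truth total."""
    system_total = sum(system_bins)
    truth_total = sum(truth_bins)
    total_error_pct = (system_total - truth_total) / truth_total * 100
    abs_error = sum(abs(s - t) for s, t in zip(system_bins, truth_bins))
    abs_error_pct = abs_error / truth_total * 100
    return total_error_pct, abs_error, abs_error_pct

# Example with hypothetical bins whose totals match the study's totals:
truth = [820, 845, 830, 830]       # sums to 3,325, the ground truth
camera_sys = [810, 840, 825, 815]  # sums to 3,290; yields -1.05% total error
print(error_vs_ground_truth(camera_sys, truth))
```

The signed total error lets over- and under-counts cancel, while the summed absolute per-bin error does not, which is why the camera's 3.88% absolute error is much larger than its -1.05% total error.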

Figure 3: Comparison between Bluecity, the Camera-based System, and the Ground Truth

There were no adverse weather conditions during the periods for which data was processed by the camera-based system. However, there was precipitation the night before, and captures of the video data and the lidar data from exactly the same time can be compared in Figures 4 and 5.

These captures show the limitations of cameras, especially when data is required reliably in all conditions. Although the area is well lit at night, precipitation still significantly degraded the performance of the cameras. A small amount of noise can be seen in the lidar data; however, because only the strongest laser return is measured, this noise is minimal, and the detection algorithm has been trained to be robust in the presence of noise. The camera, on the other hand, suffers from water on the lens causing refraction and water on the road causing reflections, both of which are problematic when trying to detect road users.

Conclusion

Bluecity’s lidar-based system overcomes the issues inherent to camera-based solutions: it works in all weather and lighting conditions. Bluecity has developed a solution that provides all the data cameras can, including turning movement counts, speed estimation, safety analytics, and classifications, and it is fast enough to run in real time on an edge computer.

With recent reductions in the price of lidar, all of this is offered at a competitive cost. In addition, lidar outperforms the industry standard in accuracy.

While camera-based methods of collecting road network data have made significant advances in recent years, inherent limitations of the technology prevent them from being a reliable solution, especially for permanent installations. Poor weather and lighting conditions strongly affect performance. Multiple cameras are often required, leading to tedious calibration between cameras. And because of the high computational requirements, video data must be recorded and then uploaded to the solution provider’s system, which is expensive and time consuming.