AI-powered video telematics has revolutionized trucking operations in recent years. Forward-facing cameras capture video of road incidents, which often serves as important evidence during litigation. Driver-facing cameras monitor driver behavior and fatigue. Combining the two enables continuous assessment of how safely a driver is driving, thereby reducing safety incidents. Across Europe and North America, the installed base of video telematics systems stands at five million units and is expected to top eleven million by 2027.
Overall, truck drivers are highly professional and safety conscious. Dashcam systems provide a major safety backstop.
The major players in this sector are Lytx with 35% market share, followed by Samsara and Motive with similar shares of around 15% each. With many other contenders, competition is fierce and the jostling is becoming intense.
Recently, Motive announced the results of a Motive-funded study conducted by the Virginia Tech Transportation Institute (VTTI). This was a limited-scope evaluation conducted at VTTI’s test track. The results showed that Motive’s AI dashcam successfully generated driver alerts for six unsafe driving behaviors 86% of the time, compared to 32% for Lytx and 21% for Samsara.
Releasing the results, Shoaib Makani, co-founder and CEO of Motive, said, “There is an epidemic of road accidents and deaths that is getting worse, when with advances in AI it should be improving. What’s worse, the main causes of these accidents – which include distracted and unsafe driving behavior – are 100% preventable. VTTI’s research results are not just about comparing products. They show that not all of these technologies perform the same, which could have major implications for accident prevention.”
Motive’s AI dashcam capabilities include:
- Stop Sign Violation: Detects stop signs and flags drivers who fail to come to a complete stop. (According to Motive, rolling stops are frequently ticketed and are among the leading causes of accidents.)
- Driver Distraction: Detects and alerts drivers when they are looking down or away due to eating, drinking, smoking, drowsiness, cell phone use, or general inattention.
- Unsafe Lane Change: Alerts drivers when they are swerving, weaving, or changing lanes at high speed; lane tracking works regardless of lane-marking type.
The VTTI study specifically looked at the likelihood of each tested system generating alerts for these common unsafe behaviors: close following, rolling stops at stop signs, not using seat belts, phone calls, and texting. Motive’s press release highlighted three unsafe driving behaviors where performance differences were prominent:
- Phone call overall alert rates (Motive: 95%, Samsara: 38%, Lytx: 28%)
- Texting overall alert rates (Motive: 71%, Samsara: 30%, Lytx: 13%)
- Close following of the vehicle immediately ahead (Motive: 67%, Samsara: 18%, Lytx: 28%)
The study methodology included protocols designed to mimic real-world conditions and driver behavior on a closed test track. The tests were conducted with three in-cab placement locations and at three different times of day (day, evening, and night). The systems were installed by a certified, professional third-party installer to ensure that camera placement conformed to each technology provider’s installation standards. Factors affecting system performance, such as weather conditions, driver identity, and system placement, were controlled to maintain the integrity of the study. More information about the full methodology, data, and results can be found here.
What is bad driving behavior and what is not?
Assessing any type of human behavior requires specific criteria. Seat belt use is perhaps the most straightforward: as long as the video-processing algorithm is up to the perception task, this item is pass/fail. Detecting phone use by the driver is more complex but still relatively clear-cut.
But what is close following, or rolling through a stop sign? In response to a request, Motive provided its speed criteria for these two behaviors. For stop signs, the study required an event trigger if the vehicle’s speed did not drop below 6 mph within 7 seconds of the stop sign leaving the camera’s view (i.e., after passing the sign). The close-following event required a speed above 35 mph and was defined as a headway of 0.7 seconds or less sustained for at least 15 seconds.
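To make these criteria concrete, here is a minimal sketch of the two event triggers as stated by Motive. This is my paraphrase of their published thresholds; the function and parameter names are mine, not Motive’s, and real systems work on continuous sensor streams rather than tidy sample lists.

```python
# Event-trigger thresholds as described in the study (paraphrased).
STOP_SPEED_MPH = 6       # speed must drop below this...
STOP_WINDOW_S = 7        # ...within this many seconds of passing the sign
CF_MIN_SPEED_MPH = 35    # close following evaluated only above this speed
CF_MAX_HEADWAY_S = 0.7   # headway at or below this...
CF_MIN_DURATION_S = 15   # ...sustained for at least this long

def rolling_stop(speeds_mph, dt_s=0.1):
    """speeds_mph: speed samples starting when the stop sign leaves the
    camera's view. True means a rolling-stop event should trigger."""
    window = speeds_mph[: int(round(STOP_WINDOW_S / dt_s))]
    return min(window) >= STOP_SPEED_MPH

def close_following(samples, dt_s=0.1):
    """samples: (speed_mph, headway_s) pairs. True means a close-following
    event should trigger."""
    needed = int(round(CF_MIN_DURATION_S / dt_s))
    run = 0
    for speed, headway in samples:
        if speed > CF_MIN_SPEED_MPH and headway <= CF_MAX_HEADWAY_S:
            run += 1
            if run >= needed:
                return True
        else:
            run = 0  # any break in the condition resets the clock
    return False
```

Note how sensitive the verdict is to the exact numbers: a vehicle that slows to 8 mph is a violation under these thresholds but might not be under a competitor’s, which is precisely the standardization problem discussed next.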
There are no industry standards for detecting these behaviors. While the Motive kinematic parameters in the study appear reasonable, another company may use similar but different parameters. If a competing dashcam’s close-following speed threshold is 45 mph, a test run at 40 mph will be scored by the VTTI protocol as a missed detection for that system.
But upon further discussion with Motive, it became clear that VTTI empirically identified each vendor’s alert threshold for each unsafe behavior and then designed its experimental protocols so that the tests would exceed every vendor’s settings for each behavior observed. For example, the minimum speed for close following was set at 50 mph, which they claimed exceeded the alert threshold for all three vendors. A nice move, although “discovering” the thresholds and other factors is not the same as getting engineering specifications directly from the other two companies. Needless to say, this leaves room for debate and rebuttal.
The VTTI report acknowledges this and other factors, stating that “due to the specific tasks evaluated, the number of experimental runs, and other study design features, the analysis results may not apply to conditions outside those tested in the current study.”
Not so fast
Another important dimension is detection latency. Since seconds matter at highway speed, it would be ideal to detect the instant a driver takes his eyes off the road so that an alert can be issued immediately. But the sooner a detection is declared, the greater the chance of false positives. There is always a challenging engineering trade-off between accurate detection and early warning.
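A toy illustration of that trade-off (not any vendor’s actual algorithm): suppose a per-frame classifier flags “eyes off road,” and the system alerts only after the flag has persisted for a dwell time. A longer dwell suppresses harmless glances (fewer false alarms) but delays every genuine warning by at least that long.

```python
def first_alert_time(offroad_frames, dwell_s, fps=10):
    """offroad_frames: per-frame booleans from a hypothetical eyes-off-road
    classifier. Returns the time (seconds) of the first alert, or None."""
    needed = int(round(dwell_s * fps))  # frames the flag must persist
    run = 0
    for i, off in enumerate(offroad_frames):
        run = run + 1 if off else 0     # a single on-road frame resets it
        if run >= needed:
            return (i + 1) / fps
    return None

# A half-second mirror glance, then a long look down at a phone:
frames = [True] * 5 + [False] * 10 + [True] * 40
# With a 0.3 s dwell, the system alerts on the harmless glance;
# with a 1.0 s dwell, it alerts only on the sustained look, but later.
```

The dwell time is exactly the kind of tuning knob each vendor sets differently, which makes raw alert-rate comparisons tricky.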
False positives on driver behavior cause grief for both the driver and the fleet safety manager.
As the VTTI report states, “For the phone-in-lap task, Motive had a statistically significant, higher probability of successfully issuing an in-cab alert for phone-in-lap compared to Lytx and Samsara … [and] took much less time than Lytx to alert across all study conditions.” The word “successfully” raises eyebrows. If Lytx and Samsara intentionally used longer detection times to reduce false positives, should that be counted as failure?
Motive noted that the behaviors were staged in a manner that allowed each system ample time to provide an alert. For example, each close-following test lasted 30 seconds, which the researchers considered well above the threshold for all three dashcam systems. Even so, VTTI’s results showed that under this protocol the two competitors’ systems often failed to alert on unsafe behaviors that Motive’s system caught.
Because the testing protocol ensured that an unsafe event occurred during every test, a false positive was impossible by design. The false positive rate, an important factor, was therefore never measured.
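Put another way, a protocol in which every trial contains a real event can only measure the hit rate (recall); precision additionally requires counting alerts raised when nothing happened, which this design cannot produce. A minimal sketch with hypothetical numbers (not from the study):

```python
def recall(hits, events):
    """Fraction of true unsafe events that triggered an alert."""
    return hits / events

def precision(true_alerts, false_alerts):
    """Fraction of all alerts that were correct."""
    return true_alerts / (true_alerts + false_alerts)

# Hypothetical aggressively tuned system:
r = recall(hits=19, events=20)                      # high hit rate
p = precision(true_alerts=19, false_alerts=19)      # but half its alerts are noise
```

A system can look excellent on recall alone while burying drivers and safety managers in spurious alerts, which is the gap the competitors point to below.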
Competitors weigh in
To put it mildly, comparing the performance of the Motive system with the Lytx and Samsara systems did not sit well with those competitors.
Jim Brady, Lytx’s vice president of product management, says: “This study, like a previous study funded by Motive, does not compare accuracy rates. It simply reports how many times a behavior was captured. It does not take into account precision, or how often alerts are accurate. In other words, it is missing important information about false positives. It is entirely possible for a device to capture more events at the expense of what matters: more noise.” Noting that the report evaluated only a small selection of the alert and risk types in Lytx’s portfolio, Mr. Brady also questioned the study’s design relative to the real world: “The study was an isolated test, set up under artificial conditions, in which one or two devices were run fewer than 40 times. Lytx, by contrast, analyzes thousands of incidents per week and currently has over 221 billion miles of driving data that continuously informs and refines our systems. When properly configured and deployed, Lytx customers enjoy 95% or greater accuracy in real-world environments, the highest in the industry across a broad portfolio of alert types.”
A representative of Samsara raised similar issues.
Fleet operators are the customers. How do they choose which dashcam technology to adopt? Rather than doing deep data dives, fleet safety managers typically evaluate several systems with their own drivers on the road. In that setting, false positives and excessive alerts stick out like a sore thumb. The system must be practical and effective, and each vendor has metrics and data showing its system’s potential.
The dashcam landscape is complicated; I’m certainly not attempting a complete treatment here. In fact, I felt in over my head while working through the details above!
But this much is clear: there is value in comparative studies, and since resources are always limited, decisions about scope must be made. Yet the results must be broad enough to be meaningful. That tension is the origin of the external criticisms of Motive’s study.
Still, Motive’s initiative potentially provides a new set of questions customers can ask any dashcam vendor.
The conversation continues.