
THE INTERNATIONAL

REVIEW OF AUTONOMOUS
VEHICLE TECHNOLOGIES:
FROM CONCEPTION
TO MANUFACTURE TO
IMPLEMENTATION

January 2024

Face value
Driver monitoring systems continue to mature and grow in sophistication

Mapping
HD maps have long been synonymous with autonomous driving. Could the latest REM technology mark a fundamental shift?

Nissan: evolvAD
A UK project seeks to boost autonomous driving on rural and urban residential roads, partly by training AVs to act more assertively

Large language models
Breaking down the language barrier between humans and computers could accelerate and improve AV development
CONTENTS

04 Tech insider: ZF Group
Alexander Makowski, head of the autonomous mobility product line at ZF Group, discusses the increasing popularity of self-driving shuttles

08 Tech insider: EZTow
EasyMile's Marion Ferré on the latest from the company's autonomous baggage tractor deployment at Toulouse-Blagnac Airport in France

10 Project brief: WHILL
An autonomous wheelchair is just the ticket for those flying with a disability or reduced mobility

14 COVER STORY: Driver monitoring
Robust driver monitoring is coming under ever sharper scrutiny as more capable assistance features reach the market – and the most advanced systems could offer advantages beyond occupant safety

22 Data operations
Mastering the ADAS/AV data pipeline is crucial to the successful deployment of self-driving vehicles

28 Mapping
HD maps have long been synonymous with autonomous driving. Could the latest technology mark a fundamental shift?

34 V2X
V2X is often cited as a cornerstone of autonomous driving, but with misaligned standards and delayed 5G rollout, is it also becoming a bottleneck?

40 evolvAD
A UK project seeks to boost autonomous driving on rural and urban residential roads, partly by training AVs to act more assertively

46 AI: LLMs
Large language models are coming to self-driving cars
53 Supplier interview: Dewesoft
Bojan Contala, business development manager, describes Dewesoft's recently released Obsidian autonomous data recording product line

54 Family affair
Simulation software and real-time hardware from IPG Automotive offer support for all stages of testing

56 Team effort
How the Sim4CAMSens project aims to forge a robust perception sensor ecosystem

57 Park better
Advanced automated parking systems can save time, effort and lives, says Applied Intuition

58 On reflection
Gentex offers mirror-integrated driver monitoring as a quick, cost-effective way to meet impending regulatory requirements

60 Sound and vision
VBOX's new test tool enables engineers to check the safety of L3+ driving controls and features

61 Mind the gap
How ASAM seeks to close the gap between standard specifications and their implementation, through expert advice and a range of tools

62 Ready for a drive?
Xylon's all-in-one logging and simulation device supports all stages of development and validation of in-cabin monitoring systems

63 Plug-and-play data solution
A modular, removable, rugged datalogger with hot plugging capability to ensure a smooth changeover

64 Have you met...?
Florens Gressner, CEO and co-founder, Neurocat

Welcome

I really enjoyed talking to the team behind Nissan's evolvAD project (see page 40). Launched in September 2023, this test program will see Nissan Leaf EVs, kitted out with autonomous driving software and hardware, take to select London roads to find out how they cope with the demands of an urban residential environment.

The self-driving cars will have a safety driver on board at all times, and will benefit from added V2I data captured by cameras integrated into the test zone, which is run by TRL. Nissan's safety pilots/drivers have all taken part in Nissan A2 training (the highest level of vehicle handling training for Nissan vehicle development in Europe), as well as roadcraft training (defensive driver training and road awareness in the UK) and system failure mode training.

The project builds on the OEM's previous projects, HumanDrive and ServCity, and will help advance AD technologies at Nissan as it accelerates toward its Nissan Ambition 2030 long-term vision. At the project's launch, the manufacturer was particularly keen to emphasize just how 'long term' its vision is: "There are still challenges to overcome in terms of technology and reliability, particularly when it comes to creating fully automated vehicles that can carry out complete door-to-door journeys in all driving scenarios," it said in a company statement. "Even when the technology is deemed ready, there may still be potential legal hurdles, as well as questions over insurance liability, cybersecurity and public acceptance."

Although the evolvAD vehicles are equipped with 100% autonomous drive capability, Nissan says this should not be misinterpreted, and that it has "no intention to launch a fully autonomous vehicle in the UK and Europe in the near future". Instead, evolvAD fits into the OEM's wider autonomous driving research and development program that is taking place across its R&D facilities worldwide. As such, the findings will help inform future Nissan AD systems for passenger vehicles, with a focus on how Nissan can ensure its systems integrate into urban environments.

Such a methodical, patient and cautious approach may be frustrating for some investors and tech enthusiasts, but recent events in the US suggest it is the correct way to proceed. As this issue went to press, Cruise had been forced to recall 950 of its self-driving vehicles from cities across the USA for a software update to the collision detection subsystem of its automated driving system.

The recall is the latest development in the ongoing fallout from an impact between a driverless Cruise robotaxi and a pedestrian in San Francisco in early October 2023. The accident led the California Department of Motor Vehicles to subsequently revoke Cruise's permits to test and operate fully driverless vehicles on the state's highways and urban roads.

The damage to the image of self-driving vehicles caused by recent events in California is hard to calculate. Many argue that a safety driver should always be on board to guarantee safety, at least until there is enough data to suggest otherwise. We'd love to hear your thoughts on whether you agree and how best the industry can move forward. In the meantime, I hope you enjoy the issue!

Anthony James
Editor, ADAS & Autonomous Vehicle International
anthony.james@ukimediaevents.com

LEARN MORE ABOUT OUR ADVERTISERS NOW! FREE SERVICE!
Scan the QR code or visit www.ukimediaevents.com/info/avi to request exclusive and rapid information about the latest technologies and services featured in this issue

Editor Anthony James
Web editor Izzy Wood
Production editor Alex Bradley
Sub editors Sarah Lee, Alasdair Morton, Mary Russell
Art director Craig Marshall
Art editor Nicola Turner
Production assistants Dylan Botting, Amy Moreland
Divisional sales director, magazines Rob Knight (rob.knight@ukimediaevents.com)
Publication manager Sam Figg
Head of data Lauren Floyd
CEO Tony Robinson
Managing director Colette Tebbutt
Managing director, magazines Anthony James
General manager Ross Easterbrook

Published by UKi Media & Events, a division of UKIP Media & Events Ltd
Printed by Jamm Print & Production sro, Prague, Czech Republic
ISSN 2753-6483 (print); ISSN 2753-6491 (online)

Contact us at: ADAS & Autonomous Vehicle International, Abinger House, Church Street, Dorking, Surrey, RH4 1DF, UK
Tel: +44 1306 743744
Email: avi@ukimediaevents.com
Web: www.ukimediaevents.com / autonomousvehicleinternational.com

The views expressed in the articles and technical papers are those of the authors and are not necessarily endorsed by the publisher. While every care has been taken during production, the publisher does not accept any liability for errors that may have occurred. This publication is protected by copyright ©2024.

ADAS & Autonomous Vehicle International is brought to you by the publisher of Automotive Testing Technology International, Crash Test Technology International, Automotive Powertrain Technology International and Tire Technology International. The company also organizes Automotive Interiors Expo, Automotive Testing Expo, ADAS & Autonomous Vehicle Technology Expo and Tire Technology Expo. Please visit www.ukimediaevents.com to find out more.

Moving on? To amend your details, or to be removed from our circulation list, please email datachanges@ukimediaevents.com. For more information about our GDPR-compliant privacy policy, please visit www.ukimediaevents.com/policies.php#privacy. You can also write to UKi Media & Events, Abinger House, Church Street, Dorking, RH4 1DF, UK, to be removed from our circulation list or request a copy of our privacy policy.

Search for ADAS & Autonomous Vehicle International to find and follow us on LinkedIn!


VISIONARY CABIN MONITORING

Gentex's new multi-modal, AI-based camera technology combines machine vision, depth perception and micro-vibration detection to provide an entire suite of cabin monitoring features.

The ideal cross-platform solution. Discreetly integrated into an interior rearview mirror (or nearby location) to optimize performance, minimize occlusions, enhance styling, and share electronics.

Comprehensive and scalable:
+ Driver monitoring – distraction, weariness, sudden sickness, return of manual control
+ Cabin monitoring – occupants, behavior, objects
+ Social and mobile communications – video phone calls, meetings, in-cabin selfies
+ Cabin air quality monitoring – smoke, vape, chemical sensing

Visit Gentex.com to learn more.


TECH INSIDER: ZF

All aboard

Alexander Makowski, head of the autonomous mobility product line at ZF Group, discusses the increasing popularity of self-driving shuttles
By Anthony James

Last October saw leading autonomous transportation system provider ZF announce that it was expanding its relationship with Houston-based vehicle manufacturer Oceaneering International to deliver a "four-digit number" of ZF's Group Rapid Transport (GRT) shuttles to customers in the coming years.

The company started 2023 by debuting at CES a brand-new shuttle developed with US mobility services provider Beep, designed specifically for autonomous driving in urban environments and mixed traffic. The company says this next-generation model complements its established GRT model, which is primarily designed for use in segregated lanes. Beep and ZF plan to deliver "several thousand shuttles" to customers over the coming years, combining ZF's autonomous transport system (ATS) with Beep's mobility services and service management platform into a single-source autonomous mobility solution.

The new shuttles are equipped with lidar, radar, camera and audio systems to provide precise environmental detection. This is complemented by the ZF ProConnect connectivity platform, which enables communication with any V2X infrastructure and the cloud. Meanwhile the Virtual Driver – ZF's AD software – processes the huge volumes of information, derives safe driving strategies using AI, and passes them on as input to the onboard actuators, replacing the human driver and rendering the steering wheel and brake pedal redundant.

ZF's original GRT shuttle (right) and next-generation shuttle (left)

"AUTONOMOUS TRANSPORTATION IS NOT SO MUCH ABOUT THE VEHICLE ITSELF, BUT ALSO THE NEEDS OF AN ENTIRE SERVICE CONCEPT"

The software stack consists of two major parts: the performance path and the safety path. The performance path enables smooth driving in complex scenarios, while the safety path continuously monitors the overall situation from a safety standpoint, defines virtual guardrails for the performance path, and intervenes if necessary to help mitigate critical situations.
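ZF has not published the Virtual Driver's internals, but the pattern it describes – a performance path that proposes the driving strategy, and a safety path that checks it against virtual guardrails and substitutes a safe fallback when a check fails – can be shown in a minimal sketch. All names, limits and the clearance check below are illustrative assumptions, not ZF code.

# Minimal sketch of a performance-path/safety-path split.
# Structures and thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class Trajectory:
    speeds_mps: list[float]          # planned speed profile
    min_obstacle_clearance_m: float  # closest approach to any tracked object

MAX_SPEED_MPS = 11.0   # roughly the 40km/h shuttle limit (assumed)
MIN_CLEARANCE_M = 0.5  # virtual guardrail (assumed)

def safety_path(candidate: Trajectory, safe_stop: Trajectory) -> Trajectory:
    """The performance path plans smooth driving; the safety path only
    monitors and, when a guardrail is crossed, substitutes a
    pre-computed safe-stop trajectory."""
    within_speed = all(v <= MAX_SPEED_MPS for v in candidate.speeds_mps)
    within_clearance = candidate.min_obstacle_clearance_m >= MIN_CLEARANCE_M
    return candidate if (within_speed and within_clearance) else safe_stop

The design point illustrated here is that the safety path never plans comfort-optimized maneuvers itself; it only vetoes, which keeps the component that must be argued safe as small as possible.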


ZF has teamed with UK-based Oxa (previously Oxbotica), a global leader in autonomous vehicle software for businesses, to help develop the Level 4 self-driving system integrated into its latest shuttle range. As part of the collaboration, ZF took the decision to invest in Oxa, with a seat on the company's advisory board and a 5% share.

The two companies started working together in 2019, with Oxa integrating its autonomous vehicle software with ZF's ProAI – an AI-capable automotive-grade computational platform – and ZF's full-range radar.

Both companies expect the autonomous shuttle market to grow substantially over the next 10 years, offering the best means to increase access to urban mobility, improve road safety, reduce congestion and boost productivity.

ZF itself has more than 25 years' experience working on ATS technology, with its autonomous and fully electric shuttles covering over 100 million kilometers and carrying more than 14 million passengers since going operational in 1997. ADAS & Autonomous Vehicle International recently caught up with Alexander Makowski, head of the autonomous mobility product line at ZF Group, to find out more.

ZF has partnered with Oceaneering to produce its shuttles in the US

THE NEXT-GENERATION SHUTTLE CAN COVER UP TO 130KM

Please can you describe the first-ever ZF GRT shuttle delivered?
The first Group Rapid Transport vehicles were installed in the long-term parking lot of Schiphol Amsterdam Airport in 1997 to improve the service for passengers. The vehicles transported passengers from shuttle stops near their cars to the main stop at the passenger lounge. The track consisted of two loops of one kilometer each. Each loop had several intersections with car traffic (equipped with barriers and traffic lights) and pedestrians (equipped with audible alarms only).

What have been some of the major lessons learned since its launch?
The third generation of the GRT has been in use in Rivium business park near Rotterdam since 2022. A predecessor system has been in operation there since 1999 and has been continuously developed since then. Six GRT 3 vehicles carry up to 3,000 passengers per day over a 1.8km route. The route is segregated but has several intersections with crossing traffic of pedestrians, bicycles and cars. The GRT 3 can carry up to 22 passengers (8 seated, 14 standing), has a maximum speed of 40km/h and can operate bidirectionally. It is the world's only autonomous system operating without a safety driver. With over 100 million autonomously driven kilometers in real traffic, more than 14 million passengers transported and an uptime performance of over 99%, it is considered the world's most experienced autonomous transportation system. One thing we've learned: autonomous transportation is not so much about the vehicle itself, but also the needs of an entire service concept with whole-life support from planning to implementation, deployment, operation and maintenance.

How would you describe the self-driving shuttle market?
Driverless, all-electric shuttles can relieve congested city streets, reduce emissions and better connect rural areas. They have the potential to drive the mobility transformation needed to meet ambitious climate change targets while providing a solution to the severe shortage of public transportation drivers. A network of autonomous shuttles can complement existing public transportation services in cities and also better connect rural areas to cities.

Could your first-generation GRT shuttles ever be integrated into mixed urban traffic or rural highways?
The GRT shuttles are designed for use in segregated lanes. For mixed traffic, ZF has developed its next-generation shuttle, launched at CES 2023. It is equipped with state-of-the-art sensor technology consisting of lidar, radar, camera and audio systems that provide precise environmental detection. This is complemented by other technology such as the ZF ProConnect connectivity platform, the ZF ProAI supercomputer and the Virtual Driver – ZF's AD software. The next-generation shuttle can cover up to 80 miles [128km] in pure-electric mode, initially at a maximum speed of 25mph [40km/h], and later [in the development program] at 50mph [80km/h].



ZF's GRT boasts over 100 million autonomously driven kilometers in real traffic

Can you tell us more about the feasibility study in Solihull?
The study [see UK feasibility study, below] will consider traditional rail-based requirements, and the associated costs that can be removed compared with the new requirements and costs that will be required for an automated system. The East Birmingham North Solihull Metro segregated transit corridor would link the commercial centers of East Birmingham and 'The Hub' – a 1,300ha site in Solihull that comprises Birmingham Airport, Birmingham Business Park, Jaguar Land Rover and the National Exhibition Centre (NEC), as well as Arden Cross, the location of the HS2 Interchange Station – connecting communities.
The study hasn't been finalized yet. The number of passengers is part of the output of the study, but a very high-level estimate is 1,000 passengers per hour, per day, by 2031. Each of the partners is an industry expert in their field and has delivered public transportation projects in the past. The cancellation of parts of HS2 (the UK's high-speed rail project) doesn't affect the study, which is focused on providing connections beyond high-speed rail.

How is your partnership with Oxa helping to further your capabilities?
ZF and Oxa are partnering to develop the ZF Virtual Driver, a Level 4 self-driving system. The software stack consists of two major parts – the performance path and the safety path. Together, both enable operation of Level 4 shuttles or other transportation carriers in a safe and reliable way.

Some AVs have been making headlines for the wrong reasons. What is your view?
Yes, it's true. The market for autonomous vehicles is currently a difficult one. However, we believe in the future of autonomous driving in general and the market for autonomous shuttles specifically. ‹

"WE BELIEVE IN THE FUTURE OF AUTONOMOUS DRIVING IN GENERAL AND THE MARKET FOR AUTONOMOUS SHUTTLES SPECIFICALLY"

UK feasibility study

ZF has taken the first steps toward introducing its autonomous transport system (ATS) to the UK, after winning a Centre for Connected Autonomous Vehicles (CCAV) grant to conduct a feasibility study for projects in Solihull, near Birmingham.
The company will work closely with industry leaders, including Transport for West Midlands, Solihull Metropolitan Borough Council, Syselek (UK) Ltd and Ove Arup and Partners, to develop driverless shuttles in the area, with the project aiming to establish passenger routes.
The project, which started in the second quarter of 2023, builds on an existing business case for a metro route in the West Midlands, while developing an independently verified case for a segregated transit corridor. This will be achieved by replacing the initial light-rail solution with a driverless, remotely supervised, rail-less service, using autonomous, emission-free shuttles on segregated lanes.
Crucially, the study will consider the traditional rail-based requirements and associated capital and operational costs that can be removed, as well as the new requirements and costs that will be required for an automated system, alongside demonstrating the feasibility of delivery.
The study looks to address two fundamental transportation needs. First, growth as a result of the HS2 Curzon Street station in Birmingham, as well as the arrival of the HS2 Interchange in Solihull. Second, the lack of high-quality public transportation connectivity in the communities along the proposed route, which has left these areas isolated from employment opportunities and local services.
The routes identified, linking the commercial centers of east Birmingham and 'The Hub' in North Solihull, connect some of the UK's most deprived communities. These routes have been identified as needed for over two decades but to date, based on traditional transportation technologies, have been considered too expensive to deliver.
The possibility of running shuttles between Birmingham International Station, Birmingham Business Park and the NEC is also being considered.


TECH INSIDER: EZTOW

Handle with care

Marion Ferré, deployment project manager at EasyMile, on the EZTow autonomous baggage tractor deployment at Toulouse-Blagnac Airport
By Anthony James

An autonomous tow tractor in service at Toulouse-Blagnac Airport since 2022 has recently progressed to full L4 autonomous driving (no human on board) along an extended route, allowing for more complex use. The distance covered by the vehicle has increased from 800m to 2km, and functionality has grown to include studying baggage tracks and optimizing trajectories and maneuvers. The new route also adds challenges like narrow trajectories in the indoor area, also known as the luggage gallery, and more interactions with other traffic.

Similar solutions are in service at Narita International in Japan and in a fleet at Changi Airport in Singapore. They also operate in several major automotive manufacturing plants and logistics centers in Europe and the USA, including the BMW Group Plant Dingolfing and Daimler Truck AG in Germany.

The recent upgrade is a key part of ground handling service provider Alyzia's goal to serve more flights and optimize baggage handling while guaranteeing safety. Partners in the deployment are Alvest Group, TLD and Smart Airport Systems for the vehicle, EasyMile for the driverless technology, and Alyzia. The goal is to demonstrate how autonomous vehicles can optimize luggage and freight logistics. By removing any human intervention on board, cost and time efficiency, scalability and flexibility are all unlocked.

The EZTow is remotely monitored by tablet

AAVI recently caught up with EZTow's deployment project manager, Marion Ferré, to find out more.

What was the original brief/ODD and how has this changed?
Toulouse-Blagnac wanted to find a solution that proved they could optimize their flow between the baggage gallery and the aircraft parking positions. Now, following the move to L4 operation, the worker who was previously the onboard attendant can instead monitor missions from a tablet. Other workers can use a rear panel on the vehicle to send it anywhere on the programmed line. On this line, each station is named like an aircraft gate so they are easily recognizable to workers.
This is part of the beauty of autonomous solutions – they are easily adaptable and integrable with existing systems and layouts, and can run with human-driven vehicles. It also means that other vehicles could easily be added for fleet expansion or peak times.

Can you describe some of the key tasks and challenges?
The key task is to unload baggage from aircraft. The challenge comes in the luggage gallery, as it is narrow and crowded, so the vehicle needs to maneuver very carefully.
Another challenge is running around aircraft and behind them as they are parking. The ADS-B system needs to work well, and special insurance policies have been taken out for these positions.

What about safety?
This service is only monitored from a tablet. However, remote operation is a possibility depending on the use case. In the event of a potential hazard, it will always take the safest measure, which can include stopping. In this case it usually restarts automatically. There are also alerts on the tablet to indicate any issue. There have been no accidents to date.


"EASYMILE'S OWN PROGRAMMERS WRITE THE ALGORITHMS THAT INTERPRET SENSOR DATA AND APPLY DEEP LEARNING TECHNIQUES TO THEM"

EZTow has begun following a new 2km route

EZTOW HAS COMPLETED 700KM AT TOULOUSE-BLAGNAC IN ONE YEAR WITH ZERO INCIDENTS

Please describe the vehicle. What's on board?
The vehicle body itself is manufactured by TLD, a leading manufacturer of baggage tractors for airports, with nine factories around the world and over 1,800 employees. It is a robust and known platform that can tow up to 25 metric tons and has a 4.25m turning radius. The vehicle is fully electric with lead-acid or Li-ion batteries.
EasyMile has taken a conservative approach to the sensor suite, using devices from a number of market-leading suppliers, but is not committed to any particular technology or supplier and regularly implements updates. We use what we think gives us the best information on every part of our environment, both close to the vehicle and at longer ranges.
Complementing the lidars, the stereo cameras provide input to EasyMile's deep learning effort, and a separate team adds another redundancy layer in terms of software development. EasyMile's own programmers write the algorithms that interpret sensor data and apply deep learning techniques to them.
The high-level decision-making and path planning software runs on a dedicated computer, while the dedicated safety software runs separately to ensure high confidence in the vehicle's safe operations.
Using all the available data from the different sensors in a fusion algorithm, the vehicle knows its position, to an accuracy of within 2-3cm, at all times. Any potential deviation will safely slow down or stop the vehicle. In addition, the vehicle is in constant communication with both its environment and the EasyMile supervision tablet.
EasyMile's satellite navigation technology is a multi-GNSS system that processes GLONASS as well as the original GPS. The system's precision is enhanced by real-time kinematic (RTK) processing. The GNSS position is also used in conjunction with information from the 3G or 4G grid, with 5G testing underway.
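EasyMile has not published how the 2-3cm bound is enforced. The sketch below, with invented names and thresholds, only illustrates the behavior described: compare the fused estimate against the RTK-GNSS fix and its own uncertainty, and degrade to slowing or stopping when either drifts.

# Illustrative localization health monitor (not EasyMile's code).
import math

POS_TOLERANCE_M = 0.03   # the 2-3cm operating bound quoted above

def localization_action(fused_xy, rtk_xy, fused_std_m):
    # Disagreement between the fused estimate and the RTK-GNSS fix
    deviation = math.dist(fused_xy, rtk_xy)
    if fused_std_m <= POS_TOLERANCE_M and deviation <= POS_TOLERANCE_M:
        return "CONTINUE"
    if deviation <= 5 * POS_TOLERANCE_M:
        return "SLOW"    # safely reduce speed while the estimate recovers
    return "STOP"        # the safest measure, as described for the EZTow

print(localization_action((10.00, 5.00), (10.10, 5.00), 0.02))  # -> SLOW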
The vehicle was developed by Alvest Group, TLD and Smart Airport Systems, with AD tech from EasyMile

What are the key goals of the project?
The goal is to demonstrate the readiness of driverless solutions for commercial operations at airports, phasing them in with the range of manually driven vehicles still in operation. In Toulouse-Blagnac's case, this means specifically the towing of luggage from landing positions to the baggage hall. The service is also an opportunity to test the EZTow on various airport infrastructure elements such as intersections, roundabouts and turning circles, and in different weather conditions like rain, fog and snow. ‹


PROJECT BRIEF: WHILL

Flight control

An autonomous wheelchair is just the ticket for those flying with a disability or reduced mobility
By Anthony James

Front, side and rear sensors ensure collision avoidance

THE CONCEPT
The number of people seeking wheelchair services, specifically in airports, is growing rapidly. Airport operators are struggling to keep up with the demands of maintaining a reliable workforce for push-chair services. Poor wheelchair availability can lead to delayed flights, embarrassing headlines and frustrated passengers and family members. WHILL provides airports and passengers with a one-to-one service that gives passengers the independence and freedom to determine where they want to go within the airport, reliably and efficiently.

THE TEAM
Satoshi Sugie, the founder and CEO of WHILL, was living in Tokyo and working for Nissan as an automotive engineer and designer when a friend of his lost his mobility. His friend mentioned he was unable to get to the grocery store due to his condition, so they went together to find a suitable mobility device. However, they couldn't find something that could operate in small spaces and fit his needs. Sugie decided to design and build something himself, and asked two friends who worked at Panasonic and Samsung to lead on the tech side. The resulting prototype was a huge success, encouraging Sugie to start a business. He and his two friends founded WHILL in 2012.
The three friends moved from Tokyo to Silicon Valley, California, where they received some early funds from tech investors. In 2019, WHILL was presented at CES, where it received a lot of interest from airports keen to arrange trials. This led to the first commercial application at Haneda Airport in Tokyo, Japan, in 2020.

THE VEHICLE
WHILL works on-site with an airport to generate a digital map of the terminal, complete with specific paths for the devices to follow. "We create an invisible rail line that our devices will follow and stick close to that line. If any obstacles get in the device's way, we can take a small detour and continue our path," explains Justin Gagnon, WHILL's VP of business development.
"WHILL works alongside the airport to set specific stations where our devices will be available for passengers to use. We set goal destinations throughout the airport, including airport lounges, gates, restaurants and coffee shops, and program those into the device. The completely autonomous service then takes the passengers to their designated locations."
The device has full 360° object-sensing coverage, enabling it to stop and move around any objects blocking its path, or prompting an audible alert. "If there's a large crowd of people in the way and the device can't get through, it will audibly ask people to step aside so the passenger could continue to their destination," explains Gagnon.
In most cases, WHILL is used in a departure scenario to take passengers to the gate. Other services include connecting passengers within the airport, or international arrival scenarios where there is a long walk from the gate to customs.
"Once the device is secured, the passenger can view a small screen in the arm of the device," continues Gagnon. "Here, a few prompts will come up. They can select their designated language and it explains how the device operates. Then, once a destination is confirmed, there's a five-second countdown and the device departs."
During the ride, the passenger can hit a pause button on the screen or call for assistance, which will route them to an operator in case they have any questions. An operations team monitors the use, communicates with passengers and, if needed, can dispatch the device to another location. Once the passenger arrives at the destination, the device will return to its original location to pick up another passenger.
"Currently, the devices are getting roughly seven- to eight-hour run times on a single charge," says Gagnon. "Each device comes with two batteries that you can very easily swap out when needed. A battery takes roughly four to five hours for a complete charge, so if managed properly the devices can run 24/7 with no downtime."

Users can select destinations or pause the service with a simple click

PROJECT STATUS
WHILL is up and running at three major airports in Japan. In North America, the company has one permanent installation – in Winnipeg, Canada.
"We've done a lot of trials and pilot programs at major US airports, and expect by the end of this year to have several other programs up and running at airports in major metropolitan areas," concludes Gagnon. "One of our biggest challenges in the US market is that all stakeholders need to get on the same page for the program to kick off, and this can take some time. We have learned so much along the way and will continue to face challenges, especially with the difficulty of airport infrastructure. The device needs to handle multiple levels and trains/shuttles, and we are implementing solutions to close these gaps. In terms of the passenger experience, there's a survey available for them to complete when the ride is over. So far, it's been overwhelmingly positive." ‹


JUNE 4, 5 & 6, 2024 – MESSE STUTTGART, GERMANY

SAVE THE DATES! SCAN THE QR CODE NOW!
The latest ADAS technologies + full autonomy solutions + simulation, test and development
TESTING TOOLS | SENSING AND AI | SIMULATION SOFTWARE

GET YOUR FREE EXHIBITION ENTRY PASS – REGISTER NOW!
See all the latest development, testing, validation and next-generation enabling technologies specifically for your ADAS and AV programs
150+ ADAS AND AUTONOMOUS TECH EXHIBITORS

CONFERENCE
The 2024 ADAS & AV conference will be held alongside the expo, bringing together world-leading experts in the field of autonomous vehicle research

For information on exhibiting, contact Chris Richardson, sales director
Tel: +44 1306 743744 Email: chris.richardson@ukimediaevents.com
www.adas-avtexpo.com/stuttgart | #avtexpostuttgart – Get involved online!
DRIVER MONITORING

EYES ON

Robust driver monitoring is coming under ever sharper scrutiny as more capable assistance features reach the market – and the most advanced systems could offer advantages beyond occupant safety
By Alex Grant

HANDS OFF: With an attentive driver, and under the right conditions, GM's Super Cruise is designed to permit hands-free operation


Within the development arc of autonomous vehicles, the last two years might well be remembered for making driving more cooperative. Toyota, Ford and BMW have joined General Motors in adding 'hands-off, eyes-on' Level 2 highway assistance to new models, while Mercedes-Benz offers conditional Level 3 features in selected markets. It's a step forward, but one that risks blurring the lines for end users – drivers still have to be attentive, and monitoring this effectively is crucial.

Of course, distractions are also a challenge for manual driving, and regulators are focusing on driver monitoring systems (DMS) as a solution. Governments in China and the USA have signaled plans to research and mandate their use in future vehicles, and the European Union's General Safety Regulation 2 (GSR2) has already done so. A DMS will be mandatory for type approvals in Europe from July 2024 and in all new registrations two years later, and their importance in autonomous vehicle development is cited in the regulation itself.

AUTOMATED DRIVING SYSTEMS COULD PREVENT 47,000 SERIOUS ACCIDENTS
Source: SMMT

Today's systems fall into two categories: they either monitor attention indirectly (typically based on steering inputs) or directly (with a camera tracking eye gaze and/or head posture); testing by AAA suggests the latter is much more effective. On a California highway, the organization's engineers (with a safety spotter on board) examined three hands-off driving scenarios: staring into their laps, looking at the center console, and a third giving free rein to trick the system. Vehicles were timed to see how long they took to issue warnings or take other action.

In the first two scenarios, camera-based systems from Cadillac and Subaru alerted drivers after four to eight seconds, compared with between 38 and 79 seconds for indirect monitoring technology from Tesla and Hyundai.

To prevent driver distraction, Ford monitors eye and head movement using in-built cameras

Although direct systems performed better in all scenarios, both could be tricked by engineers and neither resulted in the assistance feature being disabled after multiple warnings. Matthew Lum, senior automotive engineer at AAA, says this is a particular challenge when drivers delegate more of the task to the vehicle.

"We haven't conducted direct research [looking at] driver distraction when using these systems. But, based on human psychology, we are very poor at periodically observing a process which is – for lack of a better term – largely automated," he explains.

"That would lead me to believe that driver distraction is probably something that occurs more often when using these systems. Not that drivers will intentionally misuse them, it's just somebody getting lulled into a false sense of security. [That] is why robust driver monitoring is very important."

Public confusion
Consumer awareness is an added challenge. In 2022, the Insurance Institute for Highway Safety (IIHS) surveyed 600 drivers, split equally between regular users of Cadillac, Tesla and Nissan (and Infiniti) assistance features. Among them, 53%, 42% and 12% respectively treated their vehicles as fully self-driving, and there was an increased tendency to eat, drink or send text messages while the systems were active.


"BASED ON HUMAN PSYCHOLOGY, WE ARE VERY POOR AT PERIODICALLY OBSERVING A PROCESS WHICH IS, FOR LACK OF A BETTER TERM, LARGELY AUTOMATED"
Matthew Lum, senior automotive engineer, AAA

Getting to know you

Similar to eCall being a catalyst for connected features, DMS cameras could be the foundation for other in-vehicle functionality. According to Pontus Jägemalm, CTO at biometrics specialist Fingerprint Cards (Fingerprints), high-resolution infrared cameras could also bring the company's iris detection software into future vehicles.
"The more traditional DMS are focusing on more generic properties of humans – we are adding the dimension of reliably detecting who is in the driver's seat," he says. "That can be applied to things like adjusting the settings in the vehicle, making payments and authentication. These things are getting more and more interesting as the vehicle will be driving itself more and more."
Automotive interest is picking up, he says. In July 2023, Fingerprints signed an agreement with a Tier 1 to integrate its software within a DMS. To ensure privacy, iris data is vectorized and encrypted on board, and recent developments have enabled more accurate identification from low-resolution, wide-angle cameras and in noisy environments.
"With these infrared-optimized cameras, the resolution has gone up and the cost penalty for higher resolution is not as high," continues Jägemalm. "The common use cases are also getting better and better; you have more properties that you can detect, and therefore you can justify having a slightly higher-quality camera in a car."

BlueCruise uses blue lighting on the digital instrument cluster to indicate when the vehicle is in a hands-free zone

Mercedes-Benz is the first OEM to use Visa's Delegated Authentication and Visa Cloud Token Framework technology to enable secure native in-car payment, enabling customers to pay for digital services and on-demand hardware upgrades using a fingerprint sensor in the car


READ ANYWHERE – WITHOUT WI-FI!
ADAS & Autonomous Vehicle International: FREE APP – download now!
Receive the latest issue weeks before the hard copy is available. Includes the latest news and a free magazine archive.
SEARCH 'ADAS AND AUTONOMOUS VEHICLE INTERNATIONAL' IN YOUR APP STORE NOW
PUBLISHED BY UKi MEDIA & EVENTS
The Gentex mirror-integrated driver and cabin monitoring system

Almost half of Tesla Autopilot and GM Super Cruise users had experienced a temporary suspension of assisted driving features having repeatedly ignored warnings.

IIHS introduced new DMS ratings last year, rewarding systems that combine gaze and head posture monitoring with multiple alert types, and that do not pull away from a full stop if the driver isn't paying attention. David Zuby, the organization's executive vice president and chief research officer, adds that even the most advanced systems have limitations.

"Most Level 2-type features don't have adequate monitoring of drivers' attention," he explains. "Only a few companies are using driver-monitoring cameras to detect whether the driver is looking at the road ahead. Many systems use only hands-on-wheel detection.

"Among camera monitors, some are able to detect where the eyes are looking while others can detect only which direction the face is pointing. A problem with the latter is that the eyes may be looking somewhere – for example, at a mobile device – different from where the nose is pointing. So far, no implementations of camera monitors look to see whether the hands are free to take the wheel if needed – i.e. not occupied with a burger and a Big Gulp."

Euro NCAP has similar concerns and added direct driver monitoring to its ratings system (shared with ANCAP) in January 2023, noting that the technology has become mature and accurate enough to be effective. Impairment and cognitive distractions will be added to the ratings from 2026, and technical director Richard Schram says it's important that these systems be designed with the driver experience in mind.

"We don't want the system constantly telling you that you're not paying attention, [because] that will really reduce acceptance of the system," he explains. "We are pushing for clever systems. Ideally, the car knows you are distracted [but] if there's nothing critical happening in front of you then it isn't really a problem per se. If you look away for half a second, the car might say, 'Oh, there's more pressure on me because the driver isn't seeing anything'. That's the interaction we want."
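Zuby's and Schram's points suggest what such 'clever' logic might look like in code. The toy sketch below is not any supplier's implementation: attention is judged from eye gaze (which, as Zuby notes, is more informative than face direction alone), and a warning fires only when an off-road glance outlasts a budget that shrinks as the forward scene gets riskier. Every name and threshold is an assumption.

# Illustrative risk-gated distraction warning (invented thresholds).
GAZE_ON_ROAD_DEG = 15.0   # eye-gaze cone treated as "eyes on road"

def glance_budget_s(scene_risk: float) -> float:
    """More forward risk (0..1) -> less tolerated off-road glance time."""
    return max(0.5, 2.5 * (1.0 - scene_risk))

def dms_alert(gaze_angle_deg: float, off_road_time_s: float,
              scene_risk: float) -> bool:
    eyes_off_road = abs(gaze_angle_deg) > GAZE_ON_ROAD_DEG
    return eyes_off_road and off_road_time_s > glance_budget_s(scene_risk)

# A quiet road: a 1.5s glance at the center console passes without nagging...
print(dms_alert(40.0, 1.5, scene_risk=0.1))   # -> False
# ...but the same glance with a slowing lead vehicle triggers an alert.
print(dms_alert(40.0, 1.5, scene_risk=0.8))   # -> True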

“AMONG CAMERA MONITORS, SOME


ARE ABLE TO DETECT WHERE
THE EYES ARE LOOKING WHILE
OTHERS CAN DETECT ONLY WHICH
DIRECTION THE FACE IS POINTING”
David Zuby, EVP & chief research officer, IIHS

GM has launched a public education campaign to help customers understand the capabilities of Super Cruise

Privacy concerns

With connected and software-based vehicles, Tesla has monitored driver interactions with Autopilot since the system launched in 2015. Other OEMs are collecting similar data to steer ongoing development. Jeff Miller, Super Cruise lead solution manager at General Motors, says customers must agree to terms and conditions before using the system, including collecting "operational and safety-related information", but stresses that privacy is a priority.
"In order to operate Super Cruise, you must have an active and eligible Connected Services Plan," he comments. "Vehicles are connected to OnStar Emergency Services, so advisors can assist drivers should they become non-responsive while Super Cruise is active.
"When active, Super Cruise uses an in-vehicle driver attention system with proprietary head pose and eye gaze software that helps make sure your eyes are on the road. The system does not record or share images, audio or video."
In Europe, GSR2 forbids DMS data being accessible or made available to third parties, and event data recorders must not store any personally identifiable information after a collision. By comparison, AAA's Matthew Lum points out that in NHTSA's recent Notice of Proposed Rulemaking for Advanced Emergency Braking, the organization is considering requiring "targeted data recording" of AEB activations – including camera data. It doesn't say whether this information could be released.
"We feel that consumers should be entitled to determine what happens with their data. With driver monitoring systems, I would say that it's probably the time to have a conversation about that," says Lum.
"If there are no robust privacy protections, you might start to see significant instances of people just putting a piece of tape over the cameras. Even if that causes certain safety features to be disengaged, some people will consider that a fair trade-off. Disabling some of these ADAS features, which took a lot of R&D, is counterproductive."


Full cabin monitoring

Camera-based systems are developing quickly. Brian Brackenbury, director of product line development at Gentex Corporation, says the automotive industry is benefitting from innovations from the cellphone industry and considering what can be added on top.
Sensors mounted near the rearview mirror are less prone to occlusions than those on the steering column, he explains, while near-infrared technology can track gaze through sunglasses, and higher-resolution cameras could monitor the entire cabin. This would provide opportunities for additional safety features, such as enabling vehicles to pre-tension seatbelts and prime airbags more accurately before a collision, as well as offering new features for customers.
"Not only are [the latest cameras] higher resolution, they're a combination of infrared and color. We can start doing things like selfie cam [and] video telephony and start using that camera for more than just regulatory stuff to bring value to the end consumer – we see that as a big step," Brackenbury comments.

Whole interior environments can also be monitored

"We're also really excited about 3D depth perception. Once you've already got [our] camera pointed in the cabin, you can add structured light, [and] the camera is actually looking at those [invisible, near-infrared] dots in the field. We can model the occupants in the vehicle in the 3D world as a means to better enhance secure safety systems. [This means we can] more accurately identify if they truly are holding onto the steering wheel, or maybe they're floating over it, [whereas] 2D can still get a little confused."
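Gentex has not published how its depth model works; the toy sketch below only illustrates why 3D helps with the hands-on-wheel case Brackenbury describes. With a depth-derived hand position, contact with the rim can be tested geometrically rather than inferred from 2D image overlap. The wheel geometry and contact tolerance are invented.

# Toy hands-on-wheel test from a 3D hand position (illustrative only).
import math

WHEEL_CENTER = (0.0, 0.0, 0.0)   # rim modeled as a circle in cabin coords
WHEEL_RADIUS_M = 0.19
CONTACT_TOL_M = 0.03

def hand_on_wheel(hand_xyz) -> bool:
    x, y, z = (h - c for h, c in zip(hand_xyz, WHEEL_CENTER))
    # Distance from the hand to the rim circle (rim assumed in the z=0 plane)
    radial = math.hypot(x, y) - WHEEL_RADIUS_M
    return math.hypot(radial, z) <= CONTACT_TOL_M

print(hand_on_wheel((0.19, 0.0, 0.01)))  # True: gripping the rim
print(hand_on_wheel((0.19, 0.0, 0.06)))  # False: hovering 6cm above it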
Paul McGlone, CEO of Seeing Machines, is similarly optimistic about the potential for full cabin monitoring. The company has recently partnered with Magna to integrate a DMS into a rearview mirror, which offers a suitable viewpoint, and further developed its algorithm to support more features – for example, closer interactions between the DMS and ADAS functions, and enabling smarter, AI-driven digital assistants.
"As driver adoption increases over time, accelerated by regulatory tailwinds, the optical and processing systems put in place can then be harnessed to offer a range of value-adding features to the manufacturer, the driver and indeed passengers," he explains.
"This includes features based around convenience, such as simulating augmented reality using eye position, comfort (for example, adapting the cockpit to the driver's profile) and also extending the safety of other occupants in the vehicle. Moreover, over-the-air updates mean these features can be sold to models already on the road."

Seeing Machines is working with Qualcomm on cabin monitoring

Cabin monitoring is also a development focus for Smart Eye, and company CEO Martin Krantz notes a significantly higher level of complexity involved. While a DMS would typically have 20 features, cabin monitoring requires higher-resolution cameras with a wider field of vision, supporting up to 100 features – including the state of occupants, forgotten object detection and even the position of child seats.
For suppliers, the next stages include an increased need to understand and interpret human physiology – such as intoxication, drowsiness and ill health – and the automotive industry is benefiting from research in other areas. Smart Eye technology is supporting qualitative studies of human behaviors, and monitoring responses to entertainment and advertising.

"THE SMOOTH AND SEAMLESS HUMAN-MACHINE INTERACTION ENABLED BY CABIN MONITORING TECHNOLOGY WILL BE ESSENTIAL IN FUTURE CARS"
Martin Krantz, CEO, Smart Eye

Krantz says the company is already working with OEMs and Tier 1s to explore applications in future vehicles. "As new regulation requires vehicle manufacturers to adopt DMS, technology that monitors the state of the driver will become as common as safety belts and airbags in cars. The next step is for the software to understand drivers and passengers on a deeper level. By tracking the facial expressions, gaze direction and gestures of each person in a car, you gain another layer of insight into what they are doing and feeling. This enables much more advanced user interfaces than what we have today," he explains.
"The smooth and seamless human-machine interaction (HMI) enabled by this technology will be essential in future cars. DMS is already improving road safety in millions of cars on the road, allowing us to explore what this technology could mean for advanced HMI down the line. That's the next level." ‹


STANDARDIZATION
FOR AUTOMOTIVE
DEVELOPMENT

ACCELERATE ENGINEERING
FOR MOBILITY
www.asam.net
DATA OPERATIONS

Mastering the ADAS/AV data pipeline is crucial to the successful deployment of self-driving vehicles
By Ben Dickson
Illustration: Sean Rodwell

SELF-DRIVING VEHICLES OFFLOAD UP TO 5,000GB OF DATA PER HOUR
Source: Automotive Edge Computing Consortium (AECC)

The rapid evolution of ADAS and autonomous vehicles is creating a growing need to manage the burgeoning volume of data. This data is essential for training and refining the machine learning models that are becoming the 'brains' of our cars. Data operations, or DataOps, has emerged as a critical component of the ADAS/AV technology stack. It spans from the edge to the cloud, orchestrating the collection, processing and storage of petabytes of data, and providing engineers with the right tools to annotate the data for training and testing machine learning models.

Tim Wong, director of technical marketing, automotive at Nvidia, underscores the magnitude of this task: "It is a lot of data, and DataOps is probably almost as hard a problem as autonomous driving. It's a huge piece of the puzzle."

The importance of data extends beyond just volume. Rajat Sagar, senior director of product management at Qualcomm Technologies, emphasizes the role of data as the primary driver for machine learning training and the models themselves. "As vehicle complexity grows to the next level with either higher autonomy levels or added connected services and experiences within the car, having access to a data lake and the right data is becoming the most important precursor to machine learning itself," he says.

As we navigate the future of ADAS/AV, the pieces are coming together to tackle this great data challenge, and DataOps is at the heart of the endeavor.

"HAVING ACCESS TO A DATA LAKE AND THE RIGHT DATA IS BECOMING THE MOST IMPORTANT PRECURSOR TO MACHINE LEARNING ITSELF"
Rajat Sagar, senior director of product management, Qualcomm Technologies

Tracking disengagements
The journey toward perfecting machine learning models for ADAS/AV begins with training models on extensive volumes of labeled data. Once these models are deployed on cars and roads, they are regularly fine-tuned with fresh data collected from real-world driving sessions.

However, there comes a point when the data collected by the vehicle offers diminishing returns for model improvement. This is when engineers need a mechanism to filter out data that no longer contributes significantly to the enhancement of the models.

For Nvidia, a leading player in this field, the behavior of the backup driver is a key indicator for data collection.

"If you are testing and the car disengages from autonomous mode, usually one of two things has happened," says Wong. "Either the safety driver decided the technology wasn't aggressive enough and decided to do something, or the car did something unsafe and the driver decided to take over. I care about the safety issues, so if the safety driver thought something was unsafe, I want to go back and look at that situation."

Nvidia's Drive platform is designed to collect a wealth of information, including telemetry data such as throttle, brake, steering angle and turn signal indicators. It also captures raw data from cameras, lidars, radars and other sensors.

"We can play back drives in our internal tools to look at what was happening when, and go back to when the disengagement happened and see what was around the car and what the driver reacted to," Wong explains.

In Nvidia's development cars, a test operator seated in the back has a more detailed view of the model's operation. They can monitor how the system is detecting lane markings, vehicles and pedestrians, and the confidence level of the machine learning models running behind the scenes. If they detect errors or inconsistencies, they can flag them for immediate action or later review. In production vehicles, a graphical user interface presents a 3D reconstruction of the car's perception of the road and the environment.

This approach has proved invaluable for the Nvidia automotive team, allowing them to gradually identify the types of data that their models have not been trained on, such as specific types of lane markings or unique arrangements of pedestrians or objects.
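Nvidia's internal tooling isn't public, but a conventional way to implement what Wong describes is a rolling buffer of telemetry that is snapshotted the moment the safety driver takes over, so the scene can be replayed later. The field names, 10Hz rate and 30-second window below are assumptions for the sketch.

# Sketch of disengagement capture (illustrative, not Nvidia's code).
from collections import deque

RATE_HZ, WINDOW_S = 10, 30
buffer = deque(maxlen=RATE_HZ * WINDOW_S)   # last 30s of frames
incidents = []

def on_frame(frame: dict):
    """frame: throttle, brake, steering_angle, turn_signal, sensor blobs..."""
    buffer.append(frame)
    if frame.get("driver_took_over"):
        # Freeze the window around the disengagement for later playback.
        incidents.append({"trigger": "disengagement", "frames": list(buffer)})

for t in range(400):
    on_frame({"t": t / RATE_HZ, "steering_angle": 0.0,
              "driver_took_over": (t == 399)})
print(len(incidents), len(incidents[0]["frames"]))  # -> 1 300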
Qualcomm's Data Factory can crunch huge amounts of data

Catching corner cases
For Qualcomm Technologies' automotive team, corner cases – unusual situations that occur infrequently – serve as crucial indicators for the need to gather new data for the company's ADAS/AV platform. As Sagar says, "At some point, the vehicle starts hitting corner cases – if there's one in every 10,000km, you won't see those corner cases every mile. Collecting enough data on those corner cases becomes impractical and cost-prohibitive if you collect the entire data."


DATA O P E R AT I O N S

Catching corner cases
For Qualcomm Technologies’ automotive team, corner cases – unusual situations that occur infrequently – serve as crucial indicators of the need to gather new data for the company’s ADAS/AV platform. As Sagar says, “At some point, the vehicle starts hitting corner cases – if there’s one in every 10,000km, you won’t see those corner cases every mile. Collecting enough data on those corner cases becomes impractical and cost-prohibitive if you collect the entire data.”
To address this challenge, Qualcomm Technologies has developed a ‘shadow mode’ monitoring system that observes the behavior of the machine learning stack within the vehicle and identifies the scenarios that require recording and subsequent transmission to the cloud. “These scenarios are tagged using a machine learning model that is trained and adapted to learn what new data needs to be collected over time, based on how the stack is performing,” Sagar explains.
For instance, if the confidence level of the main control stack drops below a certain threshold, or if the human driver intervenes, the shadow mode stack tags the scenario and gathers the data for future analysis.
This system also allows engineers to focus on specific scenarios where more data might be required, such as a specific combination of road, traffic, pedestrian and weather conditions. In such cases, the shadow mode can be programmed to actively collect data from the scenario, even if the AV successfully navigates the situation.
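A shadow-mode trigger of the kind Sagar describes reduces to a handful of conditions. The following is a simplified sketch under assumed inputs: the field names, threshold value and scenario tags are hypothetical, not Qualcomm Technologies’ implementation.

CONFIDENCE_THRESHOLD = 0.7  # illustrative value, not a published figure

def should_record(frame):
    """Decide whether a shadow-mode frame is worth uploading.

    `frame` is a dict with hypothetical keys: 'confidence' (the main stack's
    self-reported confidence), 'driver_intervened' (bool) and 'scenario_tags'
    (a set of strings produced by the in-car tagging model).
    """
    if frame["driver_intervened"]:
        return True                      # takeovers are always interesting
    if frame["confidence"] < CONFIDENCE_THRESHOLD:
        return True                      # low confidence flags a corner case
    # Actively collect pre-programmed scenario combinations, even when
    # the AV handles them successfully
    wanted = {"rain", "pedestrian_crossing", "unprotected_turn"}
    return wanted.issubset(frame["scenario_tags"])

frame = {"confidence": 0.92, "driver_intervened": False,
         "scenario_tags": {"rain", "pedestrian_crossing", "unprotected_turn"}}
print(should_record(frame))  # True: a targeted scenario, despite high confidence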
Tagging and mining
Once fresh data is harvested from the car and transmitted to the cloud, it needs to be amalgamated with other kinds of information before data scientists can use it to refine and enhance the performance of machine learning models.
The next step in the process is what Qualcomm Technologies engineers refer to as ‘metadata tagging’, an annotation process that enriches the collected data with useful information for future analytics and mining tasks. Metadata includes data such as weather, road and lighting conditions, as well as the type of scenario and behavior that the vehicle and other agents are exhibiting.

[Callout: CRUISE SAYS LESS THAN 1% OF THE RAW DATA IT COLLECTS IN SAN FRANCISCO CONTAINS USEFUL INFORMATION]

The annotated data becomes a treasure trove for data scientists, who can mine it for novel insights and findings. For instance, they might uncover that driver takeovers in a specific city occur more frequently under certain visibility conditions at intersections. These new discoveries can then inform new rules for data collection within the cars, ensuring that the data gathered is as relevant and useful as possible.
Sriram Palghat, director of product management at Qualcomm Technologies, explains the process: “After collecting the data, we tag additional metadata so it’s easily searchable. For example, we can add information such as ‘car with roof and no bike racks on top’. If our developers want to improve that kind of scenario, they can just query our data from our data lake. They’ll be able to pull that specific type of data, build a quick data set out of it and they can train their model.”
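In miniature, the workflow Palghat outlines looks something like the sketch below: each clip carries a metadata dictionary, and a query over tags assembles a quick data set. The clip records and tag names are invented for illustration; a production data lake would use a proper query engine rather than a Python loop.

clips = [
    {"id": "clip_001", "weather": "rain", "lighting": "night",
     "tags": {"car_with_roof_rack"}},
    {"id": "clip_002", "weather": "clear", "lighting": "day",
     "tags": {"car_with_roof_rack", "bike_rack"}},
    {"id": "clip_003", "weather": "clear", "lighting": "day", "tags": set()},
]

def query(clips, exclude_tags=frozenset(), **conditions):
    """Return IDs of clips matching all metadata conditions."""
    results = []
    for clip in clips:
        if exclude_tags & clip["tags"]:
            continue  # drop clips carrying any excluded tag
        if all(clip.get(k) == v if not isinstance(v, set) else v <= clip["tags"]
               for k, v in conditions.items()):
            results.append(clip["id"])
    return results

# 'Car with roof rack and no bike rack on top' style query
print(query(clips, exclude_tags={"bike_rack"}, tags={"car_with_roof_rack"}))
# -> ['clip_001']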
To streamline the annotation process, engineers employ automated labeling tools, incorporating machine learning both in the car and in the cloud. Automated labeling is complemented by human oversight to ensure annotations are accurate.
Nvidia also places great importance on data annotation and mining. When data collected from the car is uploaded into the cloud, it is tagged with information such as road conditions, GPS locations, time of day, location of the sun, weather conditions and urban density.
“We can look throughout the training data and say, how much do we have of rainy days, snowy days, water on the road, potholes? And we can kind of figure out at a very macro view, do we need more night drives? Do we need more night drives when it’s foggy? Do we need more night drives when it’s heavy rain on unlit streets?” says Wong. “We’re providing a lot of meta tags into our data that help our data scientists figure out, as a whole, how much coverage we have and how much we need to collect.”
[Caption: Nvidia’s Drive Sim can generate data missing from real-world data sets]

Updating models
As companies continue to amass more data from real-world driving scenarios, they face the challenge of deciding how frequently to update their models. These updates are crucial for enhancing safety measures and adapting to changes in driving environments.
“Our highest priority is to improve the model on edge cases. And depending on the frequency they occur, we prioritize those edge cases and make sure they’re improved in our product,” says Palghat.
The process of model updating is continuous and dynamic. “We generally conduct continuous updates, so the stack is always evolving. The regular cadence is important because it gives predictability on offering updates to our customers or over-the-air (OTA) update planning,” says Sagar.
However, he also notes that Qualcomm Technologies’ automotive team can implement targeted updates and training based on specific new feature additions or requests.
Simultaneously, the machine learning team is tasked with the maintenance and enhancement of the training data set. This involves deciding which new data should be incorporated and which old data should be removed.
“Our computer vision stack is now in its fifth generation. We started with a single-megapixel camera and have reached multiple eight-megapixel cameras. At every generation, the data collection is a massive activity and we have to make sure the efforts are not wasteful,” Sagar says.
To ensure the efficiency of this process, Qualcomm Technologies has developed mechanisms to mine the data and specify the training examples required for each generation of its machine learning stack. For instance, if the previous data set contains some particularly challenging corner case scenarios, they will be retained in the new version of the data set and only phased out when equivalent or better data is available.
The data preparation, mining and annotation activities undertaken in the previous stages play a critical role in making sure the training data set accurately represents the variety of conditions and scenarios that a car will encounter in the real world.
“Today, with machine learning, you can easily run the car for 1,000km and show that it is detecting cars. But is that representative of the countries and the hundreds of thousands of corner cases?


Does it prove the robustness of the stack? Probably not. That’s where DataOps comes into the picture – the time spent on preparing it and our years of experience really matter,” explains Sagar.

[Pull quote: “TO ME, IT’S NO LONGER ABOUT MILES ON THE ROAD ANYMORE. IT’S ABOUT THE NUMBER OF SCENARIOS YOU CAN TEST” – Tim Wong, director of technical marketing, automotive, Nvidia]

Managing a growing data set
Nvidia has adopted a strategy of continuous data collection as its vehicles traverse the roads and encounter new conditions and scenarios.
“We have our own data center for AV application development and it is continuously growing. It is petabytes and petabytes of data. As we drive more and have more data, the data scientists are getting very selective on what they want to add because of the risk of overfitting,” says Wong.
Overfitting is a common pitfall in machine learning, where a model is excessively trained on a specific type of data. While it performs well in similar instances, it struggles to generalize to broader situations. To mitigate this, data scientists at Nvidia monitor the data set to ensure it remains balanced as more training examples are incorporated.
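One hedged way to picture that balance monitoring: gate each new clip on the share its scenario type would hold after insertion. The cap and scenario labels below are invented for the example and are not Nvidia’s actual criteria.

import random
from collections import Counter

def add_balanced(dataset, candidates, key, max_share=0.3, seed=0):
    """Add new clips only while no scenario type exceeds max_share.

    A crude stand-in for balance monitoring: overrepresented scenario
    types are turned away at the door instead of added wholesale.
    """
    rng = random.Random(seed)
    counts = Counter(key(c) for c in dataset)
    for clip in rng.sample(candidates, len(candidates)):
        total = sum(counts.values()) + 1
        if (counts[key(clip)] + 1) / total <= max_share:
            dataset.append(clip)
            counts[key(clip)] += 1
    return dataset

data = [{"scenario": "highway_clear"}] * 25 + [{"scenario": "urban_rain"}] * 10
new = [{"scenario": "highway_clear"}] * 50 + [{"scenario": "night_fog"}] * 5
add_balanced(data, new, key=lambda c: c["scenario"])
print(Counter(c["scenario"] for c in data))
# highway_clear additions are rejected (already over the cap); night_fog gets in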
However, in addition to managing the growing data set, the machine learning team must make sure model training and performance remain efficient. Nvidia’s engineering team employs various methods, such as fine-tuning pre-trained models and adjusting model architectures for enhanced efficiency.
“Sometimes, more data is not the answer. We need to optimize the model, such as flatten or prune the layers to decrease the response time,” Wong says.

A holistic ecosystem of software and hardware
To fully comprehend the future of the data pipeline for ADAS/AV, it’s essential to take a zoomed-out view of the entire ecosystem of hardware and software deployed both on the edge and in the cloud.
A key element in this ecosystem is the system-on-chip (SoC) deployed in vehicles, which determines the volume of data that can be collected and processed.
“If you don’t have processing capacity and headroom, you won’t be able to run shadow modes and complex algorithms to select data at the edge,” says Qualcomm Technologies’ Sagar.
Advancements in connectivity modules will also significantly influence the future of data harvesting and transmission to the cloud. The cloud data ingestion pipeline, too, is continually evolving to accommodate the increasing influx of data from diverse sources.
However, the mere act of storing data in the cloud is not sufficient. The ability to effectively use this data through data mining is crucial, which is why the development of tooling is equally important.
“You can record everything on the cloud, but if you’re not able to use it and conduct data mining, then it becomes impractical to just store data coming from the global sources in an uncontrolled manner,” explains Sagar. “Vehicles need a holistic platform to get the best out of the data and the next level of systems that are being put out there.”

Synthetic data and generative AI
Sometimes, data is not readily available or is hard to obtain. Nvidia has made significant investments in synthetic data, leveraging its Drive Sim platform to create a realistic virtual environment. This digital twin of real roads and cities serves as a testing ground for ADAS/AV hardware and algorithms.
“The beautiful thing about simulation is that it can create data on situations that are tough to find in the real world, such as tow trucks and emergency vehicles,” says Wong. “It’s a lot easier to do that in Drive Sim because you obviously don’t want to train around real emergency vehicles.”
In many cases, the engineering team can use Drive Sim to generate data that is missing in the data sets. This allows them to create highly specific scenarios, such as a nine-year-old girl, dressed in dark clothes, running in the middle of the street on a rainy night.
“I want to know what happens in those situations, but I don’t want to necessarily stage that and put someone in harm’s way,” comments Wong.

[Caption: Drive Sim can also generate specific lidar point cloud data]

Nvidia is also using generative AI to create variants of recorded drives, altering factors like weather, lighting and traffic conditions. This approach not only enriches the data set but also allows testing of machine learning models to be scaled across a multitude of scenarios.
“Whereas in a day I may be able to do a few scenarios staged on road, overnight I can do a couple of million scenarios on the digital twin, and the performance of those nightly tests tells us whether we are converging toward better behavior or not,” Wong explains.
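Conceptually, variant generation is a sweep over scenario parameters. The toy sketch below fans one recorded drive out across weather, lighting and traffic axes and scores the overnight run; the parameter names are illustrative and bear no relation to Drive Sim’s actual interface.

from itertools import product

base_drive = {"route": "downtown_loop_17", "ego_speed_kph": 40}

weathers  = ["clear", "rain", "heavy_rain", "fog", "snow"]
lightings = ["day", "dusk", "night", "night_unlit"]
traffic   = ["light", "dense", "emergency_vehicle_present"]

# Fan one recorded drive out into 60 parameter combinations
scenarios = [dict(base_drive, weather=w, lighting=l, traffic=t)
             for w, l, t in product(weathers, lightings, traffic)]

def nightly_pass_rate(results):
    """results: (scenario, passed) pairs from the overnight batch run."""
    results = list(results)
    return sum(passed for _, passed in results) / len(results)

# Fake results: pretend the stack still fails on unlit night scenarios
results = [(s, s["lighting"] != "night_unlit") for s in scenarios]
print(len(scenarios), f"variants, pass rate {nightly_pass_rate(results):.0%}")

Tracked night after night, that single pass-rate number is what tells the team whether the stack is converging toward better behavior.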
This shift in focus from road miles to test scenarios represents a significant evolution in ADAS/AV development. “Generative AI can really scale your test cases. To me, it’s no longer about miles on the road anymore. It’s about the number of scenarios you can test, because if you tell me this AV car has driven 10 million miles, I know that 99% of that, nothing happened. But you tell me that that car has gone through 100 million different test scenarios. I’d feel even more confident about it being on the road,” concludes Wong. ‹

MAPPING

Out of time
[Caption: A Here 3D map of San Francisco: offering a simplified view of the real world, 3D maps can enhance navigation applications and help ensure better decisions in complex situations]


HD maps have long been synonymous with autonomous driving, but could the latest technology mark a fundamental shift?
By Nick Gibbs

As we move toward autonomous driving, a high-definition map is either “the most important sensor in the car”, as Dutch mapping giant Here claims, or an expensive, unneeded add-on that runs counter to the definition of ‘autonomous’, as boundary-pushing EV companies such as Tesla and China’s Xpeng say.
The debate is raging right now as car companies
seek to increase the levels of autonomy from hands-on,
eyes-on (defined as Level 1 and 2 by the Society of
Automotive Engineers), to hands off, eyes on (Level 2+,
2++) or hands off, eyes off (Levels 3-5).
The prize is a share of what US smart-chip maker
Qualcomm reckons will be a US$59bn annual market
in assisted driving systems by 2030.


The road to autonomous driving is increasingly being led by ADAS specialists such as Intel’s Mobileye, as they add more competence to what started out as simple lane-keep and adaptive cruise control systems.

Automatic for the people
Mobileye has changed the conversation on what HD mapping means. What started out as a laborious process of running a small fleet of specially adapted vehicles to map a defined area has become an exercise in using millions of regular cars to grab low-data snapshots to build a comprehensive picture of the roadways of entire countries.
“The conventional way of creating an HD map is to drive a lidar- and camera-equipped specialized vehicle around a city, collecting cloud points everywhere,” says Amnon Shashua, founder and CEO of Mobileye. “Then someone does some manual work to build it into a map. It costs about US$10m per city, which is huge. And then you need to update them because things change.”
Mobileye instead uses its EyeQ camera/sensor, which has become an ADAS standard, to create Road Experience Management (REM) maps for what it claims is a much lower cost. “It’s not the raw data that you send to the cloud. It’s processed data – about 10KB per kilometer,” continues Shashua. “We have deals with nine car makers who send us data, and we have millions of cars every day mapping the entire world at a very, very low cost.”

[Pull quote: “WE HAVE DEALS WITH NINE CAR MAKERS WHO SEND US DATA, AND WE HAVE MILLIONS OF CARS EVERY DAY MAPPING THE ENTIRE WORLD AT A VERY LOW COST” – Amnon Shashua, CEO, Mobileye]

This is not high definition because of the lack of lidar collection, but Shashua claims it has other advantages over HD. “There’s also a lot of information about how humans drive that you don’t find in high-definition maps.”
The swarm data collected shows how road users interact with the infrastructure – where they position themselves at junctions, for example. Grabbing 95,000,000km of data daily is useful not solely for informing Level 3 and 4 autonomy, but also for Mobileye’s Level 2+/Level 3 SuperVision technology and, from 2025, the company’s hands-off, eyes-off Chauffeur technology.
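The arithmetic behind that 10KB-per-kilometer figure only works if vehicles upload processed landmarks rather than pixels. The sketch below makes the idea concrete with an invented record layout; it is not Mobileye’s actual REM encoding, just an illustration of how compact per-kilometer lane observations can be.

import json, zlib

# One detected lane-boundary point every 10m, stored as small integer
# deltas in centimetres: processed landmarks, not raw sensor frames
observations = [{"type": "lane_edge", "dx": 1000, "dy": 2 + (i % 5)}
                for i in range(100)]  # roughly 1km of driving

payload = zlib.compress(json.dumps(observations, separators=(",", ":")).encode())
print(f"{len(payload)} bytes for ~1km of lane observations")

# Raw sensor data for the same kilometre would be orders of magnitude
# larger: a single high-resolution camera alone produces gigabytes per
# minute before any encoding.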
[Caption: Built from relatively sparse, anonymized, ultra-lightweight data, Mobileye’s REM HD maps are nevertheless rich in detail]

[Factbox: The auto industry has defined a standard protocol for map data information exchange, called ADAS Interface Specification (ADASIS)]

Revenue share
Such is Mobileye’s lead in this area that some car companies are looking for partners who can offer something similar but on a more collaborative basis, where the revenue is better shared. For example, Volkswagen and BMW have turned from Mobileye to Qualcomm for certain future vehicle platforms.
Currently a leader in providing connectivity and computer power for infotainment systems, Qualcomm is expanding its semi-autonomous capability on its Drive platform, which includes a plan to use sensors from customer cars to map the roads on which they drive.


Moving to scenarios where drivers can take their hands off the wheel and then their eyes off the road requires the car to have knowledge of where it is, in the view of Qualcomm’s head of automotive, Nakul Duggan. “Mapping remains a very important part of the overall perception process,” he says.
However, there are important distinctions. “Dependence on maps varies greatly according to the part of the world you are in, and what kind of operational design domain you are implementing for,” notes Duggan. “Some advanced ADAS and automated driving features are highly mapped, often to the point where they will stop working if there’s no map, while others are able to fall back to some low level of capability without a map.”

[Pull quote: “SOME ADVANCED ADAS AND AUTOMATED DRIVING FEATURES ARE HIGHLY MAPPED, OFTEN TO THE POINT WHERE THEY WILL STOP WORKING IF THERE’S NO MAP” – Nakul Duggan, head of automotive, Qualcomm]
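Duggan’s distinction, between features that switch off without a map and features that degrade, is ultimately a small piece of policy logic. A minimal sketch, with an invented three-level capability ladder:

from enum import Enum

class Capability(Enum):
    HANDS_OFF = "hands-off, eyes-on"
    ASSIST_ONLY = "lane centring + ACC"
    MANUAL = "driver in full control"

def available_capability(map_tile_fresh: bool, sensors_ok: bool) -> Capability:
    """Degrade gracefully instead of 'stopping working' without a map.

    A toy policy: the full feature set only with a fresh HD map tile,
    camera-only assistance as the fallback, and manual driving if
    perception is also degraded.
    """
    if map_tile_fresh and sensors_ok:
        return Capability.HANDS_OFF
    if sensors_ok:
        return Capability.ASSIST_ONLY  # low level of capability without a map
    return Capability.MANUAL

print(available_capability(map_tile_fresh=False, sensors_ok=True))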
Different regulatory structures in the US and China, for example, have persuaded the likes of Tesla and Xpeng to move away from mapping. Tesla, in a recent post on social media site X (formerly Twitter), said, “Tesla FSD [Full Self Driving] doesn’t rely on high-definition maps, which means Autopilot can be enabled at locations the car has never seen before.”
Tesla’s FSD operates similarly to Mobileye’s system in that it guides the car based on data from other users. “This path is determined by what most people would have done in any given scenario, powered by learnings from our global fleet of millions of vehicles,” it said.

Chinese market
Tesla is free to use that system in the US, where legislation is looser. In China, however, strict regulations on mapping encouraged EV startup Xpeng to drop it from its XNGP hands-off, eyes-on system.
The company, which models itself on Tesla in terms of its digital-led approach to new car development, has put semi-autonomous technology at the heart of its customer offer for its saloons and SUVs.
“Development of our XNGP, which does not rely on high-precision maps, is speeding up,” founder and CEO He Xiaopeng said on his company’s earnings call in August.
He listed three key benefits. First, it means cars can go where they like without being restricted to geofenced areas. “As long as your phone can guide you there, our XNGP without map reliance can also guide you there,” he said. “Second, you don’t need to jump through hoops to get policy approval, and third, you don’t have to spend extra bucks to purchase the high-definition maps.”
Meanwhile, Mobileye had to go through a third-party Chinese mapping company – Geely’s ECarX – to build up its REM product in the country, using the same proprietary low-data process.
This obviously gives local players a big advantage. One of those is tech giant Baidu, equivalent to Google in size and clout, which has developed its Apollo mapping service and launched its intelligent MapAuto 6.5 in July this year.
This provides “a full 3D lane-level map” of all the roads in the country, the company says, and offers standard-definition “lane definition” and high-definition versions. It also offers what it describes as a “lightweight HD map” that takes up less data space.

[Caption: MapAuto 6.5 – Baidu Apollo’s next-gen 3D view of its lane-level intelligent HD map]

V2I developments in China
Locating the car in the space around it can also be achieved through communication with roadside furniture, and nowhere is this concept being more actively pursued than in China.
Vehicle sensors suffer blind spots and maps might not be up to date. Roadside sensors help solve those problems. According to a report from the bank UBS, China is planning to install, per kilometer, 50 cameras, 20 lidars, 20 millimeter-wave radars, 10 intelligent roadside units (with mobile connectivity) and four oxygen sensors on roads with selective autonomous driving support, with commercial adoption from around 2026.
China’s rollout has already started. For example, the recently upgraded Shanghai-Hangzhou-Ningbo Expressway includes embedded sensors that connect to a cloud-computing network to monitor vehicles in real time. Sections even come with embedded wireless charging for EVs.
The mission is partly to improve traffic speeds, with the data infrastructure eventually allowing the mass control of driverless cars to speeds of 120km/h, much higher than current averages, by commanding vehicles to brake and accelerate at the same time.

Safety case
While Tesla in the US and Xpeng in China look to operate without HD mapping, the burden of regulation and responsibility on European OEMs has ensured that car makers that have received the go-ahead to offer Level 3+ hands-free driving in certain situations have included HD mapping as part of a broad range of sensors and redundancy.
Although a high-definition mapping requirement is not specifically mentioned in UNECE regulation UN R157, designed for autonomous driving, it’s clear that it offers another safety net. Mercedes has spoken about needing HD maps in its S-class and EQS limos with Drive Pilot Level 3 capability to offer “stable positioning through a representation of the surroundings independent of, for example, shadowing effects or a soiled sensor”. It has also referenced the map’s “inch-by-inch range and its detailed intersection and track [lane] model”. In fact, Mercedes’ head of automated driving, Georges Massing, has previously spoken to AAVI about the need to ensure HD maps are available in the regions it wants to expand into (see Time machine, April 2022, page 22*).
As more systems allow drivers to take their hands off the wheel (if not their eyes off the road), auto makers are looking for HD maps to act as a backup to sensors. For example, Nissan went with Japan’s Dynamic Map Platform (DMP) to bolster the ProPILOT Assist 2.0 feature on its Ariya electric SUV in the US.
The DMP mapping feeds data to a High-Definition Location Module from Mitsubishi Electric, allowing drivers to take their hands off the wheel. DMP says the map-equipped module identifies curves in roads much sooner than camera or radar sensors, enabling the system to anticipate turns and adjust speed.
DMP Co, which describes itself as “the world’s leading automotive high-definition map”, is a key supplier to Japanese manufacturers and grew its US and European coverage with the purchase of Michigan-based HD map company Ushr in 2019.
Ushr was the original supplier to General Motors’ hands-off, eyes-on Super Cruise product, which debuted in the Cadillac CT6 sedan in 2018.
Meanwhile, Mercedes, for its Drive Pilot, and BMW, with its upcoming Level 3 ability on the i7 limo, use HD maps from Here, which is part owned by BMW and Mercedes, as well as Audi and others.
“As far as I know, we’re the largest supplier of HD maps in commercial contracts,” Here’s then-CEO Edzard Overbeek told AAVI during CES 2023 (he later stepped down as CEO in May 2023). The company claims to have the largest lidar-generated HD database in the world.
Like Mobileye, Here takes mapping information from ordinary vehicles and reckons it has 35 million cars feeding it 500,000,000km of sensor data every hour, albeit at standard definition. This, it says, gives it advantages over the opposition, including even Google, which announced its own HD mapping company, called Geo Automotive, earlier this year, while partnering with Volvo on the lidar-equipped EX90 EV to initially feed it with HD data.

[Caption: The Here HD Live Map uses shared vehicle sensor data to ‘self-heal’]

“It’s great that Google now has access to Volvo and I don’t want to downplay that, but it’s a little bit different,” continues Overbeek, who likens Here to “neutral Switzerland” in the mapping world, when compared with the likes of Google, whose broader business model is to harvest data to sell ads. “Companies are starting to realize that when you use Google Maps, then you’re basically selling your data to Google for them to build better advertisements. We don’t do that.”
Predictably, Overbeek believes autonomy is impossible without HD mapping. “Over the past couple of years certain companies have said that sensors in the car are good enough, but I think everybody has stepped away from that thought. The realization now is that probably the most important sensor in the car is the map,” he says.

[Pull quote: “THE REALIZATION NOW IS THAT PROBABLY THE MOST IMPORTANT SENSOR IN THE CAR IS THE MAP” – Edzard Overbeek, CEO, Here]

Countering that idea is Cruise, a company at the forefront of driverless robotaxis, with, until very recently, a commercially operating fleet running in San Francisco, Austin and Phoenix in the US.
The company is trying to bust out of its HD map-constrained geofenced environment. “We’ve been making it less costly, more efficient and faster to map, but we’re also working at it from the other direction,” CEO Kyle Vogt said on the company’s earnings call in September 2023.
“That is building a new technology where AVs no longer rely on these complicated and somewhat expensive maps and instead consume readily available map data and rely more on their sensors.”
Whether or not auto makers can stomach the cost, it’s clear that high-definition maps offer another leg of stability to their self-driving car ambitions. As the industry navigates the regulatory roadmap, mapping adds a layer of competence they can cite to win over skeptics. ‹

*Visit the website to access AAVI’s archive of back issues: www.autonomousvehicleinternational.com

V2X

Social networking

[Caption: AVL is working with a range of partners on how to simulate and test V2X technology]

V2X is often cited as a
cornerstone of autonomous
driving, but with misaligned
standards and delayed
5G rollout, is it also
becoming a bottleneck?
By Alex Grant


From streaming media to delivering new features over the air, connectivity offers a richer customer experience and opportunities for vehicles to collaborate. Pooling and sharing sensor data could vastly expand the situational awareness of connected vehicles, providing advanced warnings for human drivers and enabling further automation. Regulations in North America, China and Europe recognize that vehicle-to-everything (V2X) can improve road safety, but the path ahead isn’t entirely straightforward.
Andrea Leitner, global business segment manager at AVL, believes regulatory influences will be important. Today’s V2X systems are focused on providing information, she says, but data connections are less sensitive to interference from weather than radar or lidar, and the ecosystem could be extended to other modes of transportation such as trains and e-scooters. However, the industry is using two competing standards – wi-fi-based dedicated short-range communications (DSRC or ITS-G5) and cellular (C-V2X). This is causing uncertainty for OEMs and infrastructure suppliers, especially in Europe where both are commonplace.
“A global consensus on one standard would be a major driver for V2X in every car,” Leitner says. “This needs to be addressed on an international level, with consent between legislators and industry. One push might come from the Euro NCAP Vision 2030, which will require V2X communication for the five-star rating that can be easily understood by consumers. Together with established verification and validation procedures, as being suggested by AVL [see Simulating V2X, below], this will certainly support V2X deployment.”

[Pull quote: “ONE PUSH MIGHT COME FROM THE EURO NCAP VISION 2030, WHICH WILL REQUIRE V2X COMMUNICATION FOR THE FIVE-STAR RATING THAT CAN BE EASILY UNDERSTOOD BY CONSUMERS” – Andrea Leitner, global business segment manager, AVL]

[Caption: Above – VW’s Local Hazard Warning system received an ‘Advanced’ award from Euro NCAP in 2020. Below – Soft target testing of V2X-assisted ADAS, as part of the SECUR project]

Simulating V2X
The sharpening gaze of regulators and NCAPs means auto makers are increasingly looking for support to validate the performance of V2X systems, while requiring new equipment and robust test processes.
Andrea Leitner of AVL says the company has developed evaluation procedures that can verify vehicle behavior during model-in-the-loop and hardware-in-the-loop testing. Full-scale, end-to-end testing is supported by the AVL DrivingCube, which immerses the vehicle in a virtual environment including simulated/injected sensor data and V2X messages. The testing not only considers protocols but also how modems respond to the surrounding environment.
“V2X systems use GHz bands, which can be impacted by the geometric scenario causing reflections and distortions – for example, buildings and street signs – leading to rapidly changing channel conditions while driving. It is important to verify performance and fault handling under such real-life conditions,” she explains.
“Together with research partners, AVL has developed realistic channel models, capturing such dynamics in real time, using geometry-based stochastic channel models and artificial neural networks. This allows us to replicate realistic communication scenarios, like dense traffic in narrow inner-city crossings, and test system reliability in depth.”
AB Dynamics recently supported the three-year SECUR project, led by UTAC, assessing V2X-assisted ADAS and creating proposals for Euro NCAP test protocols. Customers are increasingly demanding that soft targets for ADAS testing not only ‘look’ like a vehicle to sensors, but also have the same digital signature.
“There is a standard message that every car sends out – the Co-operative Awareness Message (CAM) – and other messages that warn of specific events, like end of queue or other hazards such as collisions,” explains the company’s Andrew Pick. “We need to be able to trigger those messages on demand, as part of our test scenario, and that’s something we’ve been able to do from our platform and incorporated into our software.”
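Triggering such messages from a test script can be mocked up in a few lines. In the ETSI ITS stack the periodic message is the CAM, while event warnings are carried by DENMs, both ASN.1-encoded and signed; the sketch below substitutes a bare JSON-over-UDP stand-in purely to show the on-demand trigger flow.

import json, socket, time

def send_event(event_type: str, lat: float, lon: float,
               sock: socket.socket, addr=("127.0.0.1", 37008)):
    """Broadcast a simplified, JSON-encoded event warning on demand.

    Real ITS messages are ASN.1 UPER-encoded and cryptographically
    signed; this stand-in only mirrors the triggering logic of a
    test-automation script.
    """
    msg = {"msg": "EVENT", "type": event_type,
           "lat": lat, "lon": lon, "ts": time.time()}
    sock.sendto(json.dumps(msg).encode(), addr)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Trigger an 'end of queue' warning at a scripted point in the scenario
send_event("end_of_queue", 51.5072, -0.1276, sock)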


Yunex Traffic, formerly part of Siemens, has worked on intelligent transportation systems (ITS) for over a decade, proving use cases for assisted highway merging and sharing map and traffic signal data at a small scale. Jack Durdle, one of the company’s product lifecycle managers (detection and connected mobility), notes that both common standards have benefits. DSRC is a bespoke automotive standard, offering low latency but with high infrastructure costs, whereas C-V2X is easier to scale where coverage and bandwidth allow, but better suited to less time-sensitive data.
“Recently there has been wider spread acceptance across the industry for a hybrid approach to V2X deployments, where safety applications are covered by dedicated ITS-G5 RSUs (roadside units) and the larger-scale coverage of non-safety-critical applications is handled by C-V2X. However, this will often differ from country to country or even on a regional basis,” he says.
“Unfortunately, I do not think there is a clear-cut or correct answer in terms of a ‘winner’. That can make it quite challenging in terms of interoperability. We have designed our latest RSU with this challenge in mind, where it has the capability to support both communication methods. This challenge will certainly be exacerbated as more automotives become equipped with competing V2X technologies.”
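The hybrid deployment Durdle describes implies a bearer-selection rule in the roadside or onboard unit. A deliberately simplified sketch, with invented thresholds:

def choose_bearer(message_class: str, latency_budget_ms: float) -> str:
    """Pick a radio bearer under the hybrid deployment model described above.

    Simplified policy: safety-critical, latency-sensitive traffic goes to
    the ITS-G5 roadside unit; everything else rides the cellular (C-V2X)
    path. The thresholds are illustrative, not from any standard.
    """
    if message_class == "safety" or latency_budget_ms < 100:
        return "ITS-G5 RSU (direct, low latency)"
    return "C-V2X via network (wide coverage)"

print(choose_bearer("safety", 50))          # ITS-G5 RSU
print(choose_bearer("traffic_info", 5000))  # C-V2X via network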
Cellular convergence
Consensus is growing in some markets. In 2020, the United States Federal Communications Commission (FCC) announced that it was looking to phase out DSRC to repurpose the upper 30MHz of the 5.9GHz band for C-V2X, and several manufacturers have already been granted waivers to deploy solutions on the road.
Dr Georg Schmitt, BMW Group’s project leader for vehicle connectivity and strategy, says cellular is the focus for the company’s future products as it integrates with other network-based services. BMW’s first 5G-equipped vehicle, the iX electric SUV, launched in 2021 and the manufacturer sees C-V2X enabling all road users to communicate without connecting to the mobile network. 5G also provides the network slicing and resulting quality of service required for automated features, Schmitt says.
“Only the cellular ecosystem offers support for short range, via PC5 interface, and long range via so-called Uu interface. C-V2X is offering PC5 and Uu-based applications in a unique manner. Only via 5G-V2X that is supplementing LTE-V2X can use cases that link to and enhance ADAS and AD functionality be realized. These use cases will also consider VRUs and two-wheelers that can’t be realized with wi-fi-based communication technologies,” he comments.
“The main challenges are the chipset availability for 5G-V2X chips, and the cross-industry standardization of these use cases to allow full interoperability and guarantee functional safety. These topics are in the current focus of 5GAA and BMW.”

[Pull quote: “THE MAIN CHALLENGES ARE THE CHIPSET AVAILABILITY FOR 5G-V2X CHIPS, AND [GREATER] CROSS-INDUSTRY STANDARDIZATION” – Georg Schmitt, project leader, BMW Group]

[Caption: BMW is testing V2X technology to export power back to the grid during peak demand periods]

Anritsu’s chief technology officer, Jonathan Borrill, points out that although regions are converging (though not exclusively) on C-V2X, capacity and quality of service can still be problematic in crowded areas, such as in a queue of traffic, which affects delivery of important messages. The latest 3GPP mobile broadband protocols – Release 16 and 17 – are working to address those shortfalls by enhancing sidelinks (direct ‘PC5’ connections between vehicles, bypassing the network) and how devices establish trust with edge computing nodes.
“Automotive use cases are very much looking to exploit 5G standalone networks [which are built for 5G] that can offer network slicing and other quality of service mechanisms, and also use edge computing for local compute functions that deliver low latency,” he explains.
“So far, the deployment of 5G standalone has been slower than many people initially forecasted. The first focus of deployments seems to have been to improve network capacity and data rates for smartphones, where these automotive-related features are less important. As the operators move more toward industry verticals, then we can expect more deployment of 5G standalone that will enable these significant improvements for automotive use cases.”

[Caption: Anritsu sees C-V2X as key to the advent of autonomous driving]

[Caption: Rohde & Schwarz is testing various V2X technologies]

That rollout has its own challenges, adds Holger Rosier, technology manager, automotive at Rohde & Schwarz. The company is providing equipment for testing V2X conformance and compliance, and 5G offers more reliable data transmission for automated functions. Some countries require mobile network operators (MNOs) to meet deployment targets along transportation corridors when auctioning frequency, but that work isn’t complete yet.
Rosier explains, “Providing ubiquitous 5G connectivity with higher performance even to consumers in rural environments is, business-wise, a challenging task. A feature that demands a new 6G technology is the combination of data communication and sensing. Both applications will benefit from frequency ranges newly assigned to mobile communications and sensing [as a] purpose. Data transmission will be improved from integrated sensing applications, while radar operation may benefit from interference coordination.”
AB Dynamics is also meeting growing demand for soft targets that can be used to test V2X. Andrew Pick, director of track test systems, says customers typically favor cellular solutions as they offer ad-hoc local and long-range communications, but compatibility is an issue even within the same technology type and groups of manufacturers. Automotive applications could learn from other sectors, he adds.
“V2X is at its most powerful when every vehicle on the road has it. One of my interests is sailing, and on boats you’ve got the automatic identification system (AIS) [which means] you can track one another and see if you’re on a collision course or not. They’ve got it sussed but it’s quite a long way from that with road vehicles. There’s actually an analogy there where it’s all compatible and standardized.”

Protocol potential
Qualcomm’s global V2X ecosystem lead, Jim Misener, stresses that there are ways to cut that complexity. Although DSRC and C-V2X are not interoperable, the protocols can be almost identical, which means that only the radio access technology differs. However, regardless of advances in protocol, the industry is competing for scarce resources.

[Pull quote: “ONE OF THE BEST USE CASES FOR V2X FOR AVs IS SENSOR SHARING” – Jim Misener, V2X ecosystem lead, Qualcomm]

“One of the best use cases for V2X for AVs is sensor sharing – not just car sensor sharing but, for example, infrastructure sensors providing that information for a car. If you start to sensor share, even with really clever protocols you need more and more spectrum with more and more objects being shared – even abstractions of the objects,” Misener explains.
“More than moving to 5G for V2X, you need spectrum. If you have spectrum and the OEMs want, you can advance the radio access technology generations, but you don’t have to. The protocols do exist, but in North America and China we don’t have much spectrum. In Europe we do, so there’s a discussion happening.”
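Misener’s spectrum concern is easy to sanity-check with rough numbers. Every figure in the sketch below is an assumption chosen for illustration, not a measured or standardized value, but it shows how quickly shared-object traffic eats into a single channel:

# Back-of-envelope channel load for sensor sharing; all values assumed
objects_tracked   = 60    # objects seen by one intersection's sensors
bytes_per_object  = 40    # compact abstraction: position, velocity, class
updates_per_sec   = 10
senders           = 8     # vehicles plus roadside units sharing at once

bits_per_sec = objects_tracked * bytes_per_object * 8 * updates_per_sec * senders
channel_mbps = 6          # rough order of one 10MHz ITS channel
print(f"offered load: {bits_per_sec/1e6:.2f} Mbit/s "
      f"({bits_per_sec/1e6/channel_mbps:.0%} of a {channel_mbps}Mbit/s channel)")

Double the senders or the objects tracked and a quarter of the channel becomes half, which is exactly why more shared objects mean more spectrum.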
Futureproofing
The growth of C-V2X presents a new challenge for the cellular ecosystem, supporting vehicles that have much longer lifespans than smartphone handsets. Holger Rosier of Rohde & Schwarz believes this will result in multiple technology generations operating simultaneously to ensure systems remain operable.
“Staying within the same technology generation is usually not a big deal; releases are backward compatible. Going to a new technology such as 4G to 5G and, in the future, 6G, requires additional solutions to remain compatible,” he says. “Having [multiple] technology generations integrated into vehicles is a straightforward approach, also used in smartphone devices. Nevertheless, a migration path needs to be in place.”
Qualcomm’s Jim Misener foresees similar solutions, adding that these are already available if manufacturers want to include them. Most manufacturers are defaulting to 5G as the latest technology to ensure longevity, he says, but they are also encountering potential obsolescence as MNOs decommission 2G and 3G networks to make spectrum available for newer technologies.
“The case study is eCall in Europe – how do you move on to more advanced technologies, because the eCall mandate is 3G? There’s a lot of fretting, hand wringing, hair pulling on how one transitions. One simply has to have a plan as to how you refarm the spectrum. It is an issue, but these refarming advances are happening now,” he explains.
Suman Sehra of Harman is similarly confident that 5G will provide long-term operability for C-V2X systems. “I am seeing 5G go well past a decade and a half, a decade or two,” he says. “The usable life [of the vehicle] should be covered within at least two generations of network transmission. Our product, technology and [chipset] vendor choices ensure that backward compatibility.”
The inconsistent pace of global 5G deployment, as a fundamentally different technology from 4G, is also an influencing factor. Sehra, global vice president of product and innovation, connected vehicle and infrastructure at Harman, claims that most customers are pursuing 5G but the rollout is happening in “pockets”, which makes V2X-assisted autonomy challenging.
“We are hearing from early-stage experiments with MNOs to test the quality of service from a network perspective, and I think it will be done in phases. You may not be able to accomplish everything on day one, because of the quality of service. But situational awareness and advanced warning systems with 5G today, and the compute infrastructure improving over time, can perhaps result in a more reliable sensor than what is currently available,” he comments.
“Whether that is acted upon is going to be based on the OEM choice, based on the risk of the situation that is being handled. Any actuation will likely see the first light of day in a little bit more of a controlled environment – a port or big university campus or dedicated lanes for platooning trucks. That may come prior to some of the consumer auto.” ‹



EVOLVAD

[Caption: evolvAD builds on Nissan’s ServCity project, which notched up 3,000 autonomous test miles on complex urban roads in London]

BOLD intentions
A UK project seeks to boost autonomous driving on rural and urban residential roads, partly by training AVs to act more assertively
By Anthony James

Nissan is playing a major role in a government-funded project to bolster the UK’s burgeoning autonomous driving sector. Launched at the end of September, evolvAD will be key to pushing AD technologies at the OEM as part of its long-term vision, Nissan Ambition 2030. The project will test how well fully electric Nissan Leafs equipped with cutting-edge AD technology deal with tricky residential areas and rural roads. It will also examine the role of V2I technologies in supporting the deployment of autonomous vehicles.
“In our previous HumanDrive and ServCity research projects, our AD team and partners have tackled highways and complex city environments,” explains Robert Bateman, evolvAD project manager and manager of the research and advanced engineering team at Nissan Technical Centre Europe (NTCE).
“Nissan will now enhance its technology further by testing and trialling it in other driving environments, specifically urban residential and complex rural roads. The project will also explore what transport opportunities autonomous mobility can provide to A roads and minor roads that are mostly found within rural and intercity communities.”


[Caption: Test vehicles are fitted with a wide range of sensors]

These types of driving environments present their own unique set of challenges for AD technology. For example, drivers in residential areas often face narrow roads, single lanes with parked vehicles on either side and slow driving speeds. Rural roads can include similar conditions but with higher driving speeds, winding profiles, blind corners, blind gradients and few to no road markings.
Delivered by a consortium of five industry partners including Nissan as technical lead, evolvAD is jointly funded by government and the consortium partners, with some of the money coming from the government’s £100m (US$124m) Intelligent Mobility Fund, administered by the Centre for Connected and Autonomous Vehicles (CCAV) and delivered by the UK’s innovation agency, Innovate UK.
The research project will run for 21 months, coming to an end in March 2025, and will see six members of the NTCE team working with around 20 experts from Connected Places Catapult (CPC), Humanising Autonomy, SBD Automotive and TRL.
“TRL, which manages the Smart Mobility Living Lab (SMLL), is looking at the development of infrastructure,” notes Bateman. “It’s also looking at supply chain readiness regarding the test specifications and design specifications that UK suppliers will need to deliver an autonomous vehicle to an OEM. CPC is working on a UK-developed, high-definition map, and Humanising Autonomy is focused on trying to better predict what pedestrians are going to do.”
The latter’s website says the company’s “ethical computer-vision software analyzes videos to quickly classify, interpret and predict human behavior so that it can better inform automated decision-making engines”. Bateman says improving how an AV can predict the often-surprising behavior of humans forms a key part of the project.
“When we speak to our Japanese colleagues, they often remark how, until they came to London, they didn’t realize pedestrians won’t always wait for the green man before crossing,” he explains. “Better prediction algorithms can help the car to understand where and when a pedestrian is going to cross. It’s also about ensuring that when you are in one of our vehicles, it provides a comfortable ride rather than slamming on the brakes every time it thinks a pedestrian might cross the road. We’re trying to make our AD as human-like as possible in how it responds.”
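A graded intent score is one way to avoid the brake-slamming behavior Bateman describes. The heuristic below is a toy, bearing no relation to Humanising Autonomy’s learned video models, but it illustrates scoring a pedestrian’s likelihood of crossing instead of treating kerbside presence as binary.

def crossing_intent(dist_to_kerb_m: float, speed_toward_road_mps: float,
                    facing_road: bool) -> float:
    """Score 0..1 for 'about to cross', from simple kinematic cues.

    A toy heuristic only; production systems use learned video models.
    The point is grading intent rather than reacting to every
    pedestrian who happens to be near the kerb.
    """
    score = 0.0
    if facing_road:
        score += 0.3
    if speed_toward_road_mps > 0.5:
        score += 0.4
    if dist_to_kerb_m < 1.0:
        score += 0.3
    return min(score, 1.0)

# Standing at the kerb facing the road, but not moving: a moderate score,
# so the planner can ease off rather than slam on the brakes
print(crossing_intent(dist_to_kerb_m=0.5, speed_toward_road_mps=0.0,
                      facing_road=True))   # 0.6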
Project partners
Five companies make up the evolvAD project:
Nissan – lead partner, leading the development of the connected and autonomous vehicles (CAV) that will be trialled during the project
Connected Places Catapult – applying advanced machine learning techniques to generate high-definition maps from aerial imagery
Humanising Autonomy – UK supplier with advanced vulnerable road user (pedestrians, cyclists and motorcyclists) perception and behavior estimation capability
SBD Automotive – onboard cybersecurity and advanced safety case
TRL – developing vehicle system validation processes utilizing infrastructure on the Smart Mobility Living Lab (SMLL) testbed

Test locations
Urban residential road testing will be done in partnership with TRL, which will use SMLL’s real-world testbed, spread across London roads in Greenwich and the Queen Elizabeth Olympic Park. The testbed features 24km of instrumented public urban roads, both single- and multi-lane, including traffic circles. All testing will take place during daylight hours, mainly between 9:00am and 3:00pm: “This is to avoid the school run – but there is an actual school on one of the main roads we are using for the test, so we will evaluate at the school pick up, as well,” says Bateman.
There are also speed bumps, traffic signals, one-way systems, pedestrian crossings, bridges, underpasses and overpasses. SMLL’s extensive test zone has been chosen to enable the project’s partners to rigorously test and observe the performance of their technologies and vehicles from every angle, as they take on the everyday challenges of driving down a busy urban road.


“Take speed bumps,” continues Bateman. “There are lots of different types, including those that place three ridges in a line across the width of the road – but which should you go over? Do you go over the middle or both?” Furthermore, “Very few human drivers actually drive fully around a mini roundabout [traffic circle], with some going straight over.”
Electric scooters will also be in the mix: “In the last three or four years we’ve had e-scooters appear alongside cyclists.”
“We’ll be testing even when it is raining or when there is light fog,” notes Nirav Shah, an NTCE research engineer working on the project, when asked for further examples of how evolvAD will differ from HumanDrive and ServCity. “We will also be testing on roads where there is no division between the oncoming vehicle or the test vehicle’s direction of travel, whereas in previous projects there has been a central reservation between lanes.”

Moving picture
The project will also explore and trial vehicle-to-infrastructure (V2I) technology to improve situation awareness, path planning and overall performance. TRL will connect test vehicles to infrastructure and send new sources of data to the vehicle to improve its situational awareness.
“There are 270 cameras in total across the SMLL testbed,” explains Shah. “We will use the data from these cameras to better understand whether objects are moving or stationary.”
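Classifying a roadside-camera track as moving or stationary can be as simple as thresholding displacement over a short window. The sketch below is illustrative only; the window and threshold are invented rather than taken from the SMLL configuration.

import math

def is_moving(track, window=5, threshold_m=0.5):
    """Classify a roadside-camera track as moving or stationary.

    `track` is a list of (x, y) positions in metres, one per frame; if the
    object drifted more than `threshold_m` over the last `window` frames,
    it is treated as moving.
    """
    recent = track[-window:]
    dx = recent[-1][0] - recent[0][0]
    dy = recent[-1][1] - recent[0][1]
    return math.hypot(dx, dy) > threshold_m

parked      = [(12.0, 3.1), (12.0, 3.1), (12.0, 3.2), (12.0, 3.1), (12.0, 3.1)]
pulling_out = [(12.0, 3.1), (12.2, 3.3), (12.5, 3.6), (12.9, 4.0), (13.4, 4.5)]
print(is_moving(parked))       # False
print(is_moving(pulling_out))  # True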
Burrage Road, a busy, single-lane route at the heart of the testbed – with lots of parked cars down either side – is of particular interest. “The information from the roadside cameras will help the test subject to understand if it needs to move toward the middle of the road to avoid a vehicle that has begun to pull out, for example,” says Shah. “It will need to move further out to go around, but in so doing it will map potentially with an oncoming vehicle – how will the stack deal with that, in comparison to driving on a dual carriageway?”
Bateman shares his colleague’s enthusiasm for the challenge ahead. “As part of our 2030 vision, we want to make this technology available to everybody,” he says. “To do that, it’s got to work in rural areas and busy residential roads, whether we’re using it for delivering goods or to help people visit friends or family – you’ve got to be able to get down those streets.”
However, he says previous research has revealed that this is far from an easy task for a computer: “Currently, if a parked car started to pull out, the autonomous vehicle would give way unless there wasn’t another car coming toward it for quite some distance. AVs will need to be more assertive, going forward. In the previous ServCity trial, the car just wasn’t ready for it.”
For rural environments, where vehicle speeds mean the stakes are even higher, testing will initially be conducted inside proving grounds within the UK, namely UTAC Millbrook and Mira. This testing will include the development and validation of enhanced autonomous vehicle motion control in high dynamic use cases, and will provide lots of useful data to inform further simulation modeling.
Use cases such as blind corners, road gradient ascents/descents and low-quality road lane lines will be used for optimum vehicle trajectory, speed and motion planning at heading speeds up to 96km/h with vehicle acceleration limits increased to 0.5g. Testing will only move to public roads after an intense period of simulation to fine-tune performance. “Rural testing has begun at UTAC Proving Ground in Millbrook utilising its outer handling and alpine routes,” notes Shah.

Safe operation
Three cars will be used for testing, with some additional cars set up for data collection. Two of the test cars will be used for urban testing while the third will cover rural trials.
The vehicles are equipped with cameras, lidar, GPS and radar, and the computers required to process the incoming information and steer the vehicle. Localization and path planning are based on several sensor inputs, as well as a digital map stored internally in the vehicle, so that the car is not dependent on any single sensor for its safe operation.
During the trials, the automated vehicles will be occupied by a trained test driver and operator responsible for overseeing safe vehicle operation. All test cars will drive within the regular speed limits of the various roads that they encounter.
“Before we even put the software into the vehicle, we do simulations using data previously collected from the testbed area and some of the area beyond, so that we get an idea of any unique features that will be required to then update the software,” explains Shah.

[Caption: ServCity, which ended February 2023, saw 500,000 lines of code developed]


[Caption: All evolvAD test vehicles will feature a safety driver on board]

“We then do more simulation and test the software for any unexpected behavior, and then we take it onto a public road with a safety driver, where we run the software offline to compare it with what the safety driver is doing,” he continues.
All of evolvAD’s safety drivers are fully qualified and have been trained to react in the event of a system failure. “If the control system of the vehicle fails, then the safety driver follows the training they have undergone previously at the proving ground, which includes acceleration or steering override,” says Shah. “There are a lot of safety checks, protocols and procedures undertaken before the vehicle can begin testing on the open road.”
The team are reluctant to reveal any safety driver intervention data at this stage: “Everything depends on what testing we are doing and the maturity of that software and hardware at the time,” explains Bateman. “However, the plan is that when we get to the end of the project there will be no intervention from the safety driver.”

[Pull quote: “THERE ARE A LOT OF SAFETY CHECKS, PROTOCOLS AND PROCEDURES BEFORE THE VEHICLE CAN BEGIN TESTING” – Nirav Shah, research engineer, Nissan Technical Centre Europe]

Expert input
Shah notes that the safety drivers are integral to the project, providing valuable input that shapes the AD software. “All the safety drivers are very well trained not just in test driving but in every aspect of driving on a public road,” he says. “They all drive like an expert chauffeur and they help us understand how the vehicle should behave from their perspective. There’s a lot of discussion and collaboration with our safety drivers as this ensures the vehicle doesn’t behave like someone driving on the road for the first time.”
As to how best to solve the conundrum presented by a parked car wanting to pull out, Bateman says it’s all a matter of tuning: “If there is enough space and we want to be assertive, then potentially we could go more toward the middle of the lane, keeping enough space on the right-hand side for the oncoming vehicle to pass. In software terms, it’s not only longitudinal movement but also lateral movement, and deciding when to move laterally is very interesting in those scenarios. A lot of it is down to fine-tuning, which takes up a lot of the software engineers’ time.”
Bateman notes that Nissan learned a similar lesson from the previous ServCity project: “When we first started, as the car approached a roundabout, it would slow right down to give way, even when there was no oncoming traffic, but that isn’t how humans drive – we tend to maintain the same speed if the path is clear. However, the software that first came across from Japan slowed the car down to give way before then pulling off. If you were the car behind, and you weren’t paying attention, you’d end up going straight into the back of it.”
He continues, “Nirav and his team immediately began tuning the software to make it more assertive, so now it says, ‘I’m at the roundabout. There isn’t a car coming. I don’t need to slow down, I can go’. We then began to tune it to take the same ‘lane’ around the roundabout as a human would take. We used to joke that one of the roundabouts was more of a ‘square-about’ – you had to straighten up a bit, then go around a little bit and then straighten up, etc. In the end, we tuned it so rather than the steering wheel being jerky, it moved far more naturally.”

Future outlook
While all evolvAD vehicles will be equipped with 100% autonomous drive capability, Nissan is keen to stress that the project does not signal any intention to launch a fully autonomous vehicle in the UK and Europe in the near future.
Instead, evolvAD fits into a wider autonomous drive research and development program that is taking place across Nissan’s R&D facilities worldwide. As such, the project’s findings will help inform future Nissan AD systems for passenger vehicles, with a focus on how the OEM can ensure its systems integrate into urban environments.
The company already offers its hands-off ProPILOT 2.0 self-driving system for use on highways under approved conditions, in certain countries. Research projects such as evolvAD will be vital in taking Nissan’s future technology to the next level. ‹

[Caption: ServCity gathered 5.45m gigabytes of autonomous driving test data]

ARTIFICIAL INTELLIGENCE

Large language models are coming for self-driving cars
By Ben Dickson

Speech therapy


For decades, science fiction has tantalized us with the concept of vehicles that can engage in conversation and respond to natural language commands. The advent of in-car smart assistants brought us a step closer to this vision, allowing us to use everyday language for tasks such as finding directions, locating parking spaces, playing music or catching up on the news.
Now, emerging research is hinting at an even more exciting future. Large language models (LLMs), the artificial intelligence systems behind products such as ChatGPT and Bard, could soon revolutionize our interactions with autonomous vehicles. These advances suggest a future where we converse with self-driving cars in ways previously confined to the realm of imagination. And with the rapid pace of advances in artificial intelligence, we are only at the beginning of discovering the possibilities of LLMs in AVs.

[Callout: WAYVE’S LINGO-1 VISION LANGUAGE ACTION MODEL HAS A 60% ACCURACY RATE]

At the core of LLMs lies the transformer – a deep learning architecture introduced in 2017 by researchers at Google. Renowned for its ability to process vast amounts of sequential data, the transformer’s primary function is ‘next token prediction’. In simple terms, it takes a sequence of words and predicts the subsequent word or phrase.
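Next-token prediction can be demonstrated without a neural network at all. The sketch below swaps the transformer for a hand-written bigram table, but the generation loop, sampling the next token from a probability distribution conditioned on what came before, is the same idea in miniature.

import random

# A toy 'language model': bigram probabilities standing in for the logits
# a transformer would compute over a vocabulary of thousands of tokens
bigrams = {
    "the": {"car": 0.6, "road": 0.4},
    "car": {"ahead": 0.7, "stopped": 0.3},
    "ahead": {"is": 1.0},
    "is": {"braking": 1.0},
}

def generate(token, steps=4, seed=1):
    rng = random.Random(seed)
    out = [token]
    for _ in range(steps):
        dist = bigrams.get(out[-1])
        if not dist:
            break
        tokens, probs = zip(*dist.items())
        out.append(rng.choices(tokens, probs)[0])  # sample the next token
    return " ".join(out)

print(generate("the"))  # e.g. 'the car ahead is braking'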
The initial success of transformers was largely within the realm of natural language processing. This was partly due to the wealth of text data available from online sources such as Wikipedia and news websites. Such data formed the foundation for LLMs such as ChatGPT, which became the fastest-growing application in history.
However, the potential of LLMs extends far beyond text. Danny Shapiro, VP of automotive at Nvidia, says, “If you’re curating the data and training these things on specific data sets, you can get something that will have very good results. The important thing is it doesn’t have to just be language, characters, words.”
Since their inception, LLMs have been harnessed for a diverse range of applications, such as writing software code, predicting protein structures and synthesizing voice. They have also become an important part of models such as Stable Diffusion and DALL-E – AI systems that take text descriptions and generate corresponding images.




Talk of the town
London-based Wayve launched Lingo-1, its vision-language-action model (VLAM), earlier this year. Trained using real-world data from Wayve's drivers commentating as they drive, Lingo-1 can explain the reasoning behind driving actions. Previously, end-to-end AI neural nets have been criticized as 'black boxes', providing limited insight into why and how they make decisions.
Wayve believes incorporating the use of language to explain a vehicle's actions will provide a new type of data for interpreting, explaining and training AI models.
In addition to commentary, Lingo-1 can respond to questions about a diverse range of driving scenes. This allows Wayve to make improvements to the model through feedback. The answers allow engineers to evaluate the model's scene comprehension and reasoning, which can enable Wayve to more efficiently pinpoint improvements and build confidence in the system. Natural language sources such as national road user regulations will further simplify and improve the training of AI models.

Above: Wayve's Lingo-1 helps to explain driving decisions
Right: The transformer model introduced by Google

Now, the research community is investigating how LLMs can be applied in novel areas such as robotics and autonomous vehicles. Central to these explorations are vision language models (VLMs). These enable LLMs to merge text with visual data and perform tasks like image classification, text-to-image retrieval and visual question answering.
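As an illustration of the kind of task a VLM performs, the sketch below uses the openly available CLIP model to score how well candidate text descriptions match an image, a form of zero-shot image classification. The file name is hypothetical, and production AV stacks would use far more specialized models:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("dashcam_frame.jpg")  # hypothetical camera frame
captions = [
    "a pedestrian crossing the road",
    "an empty residential street",
    "a cyclist ahead of the vehicle",
]

# Encode image and all captions together, then compare their embeddings
inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)[0]

for caption, p in zip(captions, probs):
    print(f"{p:.2f}  {caption}")
```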
“Image pixels ultimately can be used to generate the
characters, words and sentences that can be used to be
part of a new user interface inside of a car,” Shapiro says.
“For example, the driver might be looking to the left to
see if it’s clear and the front-facing camera could see a
child walking across the street on the right and describe
that to the driver so they know not to blindly just hit the
accelerator and create an incident.”
LLMs for self-driving cars
Self-driving car startup Wayve has been actively experimenting with the integration of LLMs into its AV systems. It recently unveiled Lingo-1 (see Talk of the town, above), a groundbreaking model that facilitates conversation with the AI systems steering autonomous vehicles.
Jamie Shotton, chief scientist at Wayve, is optimistic about the potential of this technology. "The use of large language models will be revolutionary for autonomous driving technology," he says. "Now, with Lingo-1, we are starting to see large language models enhance applications beyond the AI software and into the space of embodied AI such as our AI models for self-driving vehicles."

"THE USE OF LARGE LANGUAGE MODELS WILL BE REVOLUTIONARY FOR AUTONOMOUS DRIVING TECHNOLOGY"
Jamie Shotton, chief scientist, Wayve

Lingo-1 is designed to perform a wide array of tasks. It can answer questions about scene understanding, and reason about the primary causal factors in a scene that influence driving decisions. Essentially, it aims to provide a detailed narrative of the driving actions and the reasoning behind them.
"Currently Lingo-1 acts as an open-loop driving commentator, which offers driving commentary that explains the reasoning behind driving actions and allows us to ask questions about various driving scenes," Shotton says. "These features allow us to understand in natural language what the model is paying attention to, and the 'why' behind the course of action it takes."
In practical terms, Lingo-1 can explain the car's decisions as it navigates through traffic. It can articulate why it's maintaining or changing its speed, braking or overtaking another vehicle. Furthermore, it can respond to user queries about why it performed a certain action, the road conditions and visibility, or specific hazards in a given road situation.
Shotton believes that the integration of language into autonomous vehicle systems can lead to more efficient training and a deeper understanding of end-to-end AI driving systems. "Leveraging language through AI tools like Lingo-1 can increase the safety of these systems by aligning driving models to natural language sources of safety-relevant knowledge bases, such as the UK's Highway Code, and updating alongside them," he says. "In turn, language-based interfaces will help build confidence in ADAS and AV technology and the safer, smarter and more sustainable future of transportation they promise."

Training LLMs for autonomous driving
LLMs require vast amounts of high-quality training data, which poses a challenge in applications where data is not readily available. A common solution to this issue is to 'fine-tune' existing models. In this approach, a 'foundation model' that has been pre-trained on a comprehensive corpus of online data is further trained on a smaller data set of examples curated for a specific application, known as the 'downstream task'.
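The fine-tuning idea can be illustrated in its simplest form: further training GPT-2 on a small text data set with the Hugging Face Trainer API. The commentary.jsonl file is hypothetical, and this is a generic recipe rather than how Wayve or Nvidia actually train their models:

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Hypothetical downstream data: one {"text": "..."} JSON object per line
data = load_dataset("json", data_files="commentary.jsonl")["train"]

def tokenize(batch):
    enc = tokenizer(batch["text"], truncation=True,
                    padding="max_length", max_length=128)
    enc["labels"] = enc["input_ids"].copy()  # causal LM predicts its own input
    return enc

data = data.map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-commentary",
                           num_train_epochs=1,
                           per_device_train_batch_size=8),
    train_dataset=data,
)
trainer.train()
```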
Collecting data for these downstream tasks is still challenging. For instance, models like ChatGPT were trained on tens of thousands of instruction-following examples manually written by human experts, a costly and complex effort. When it comes to autonomous vehicles, the task becomes even more complex because it requires coordinating natural language instructions, visual data and actions.
To train Lingo-1, the Wayve team devised a unique approach. They created a data set that incorporates image, language and action data collected with the help of expert drivers who commented on their actions and the environment as they drove on UK roads.
The team adopted the commentary technique used by professional driving instructors during their lessons. These instructors help their students learn by example, using short phrases to explain interesting aspects of the scene and their driving actions. Examples include 'slowing down for a lead vehicle', 'slowing down for a change in traffic lights', 'changing lanes to follow a route' and 'accelerating to the speed limit'.
The expert drivers were trained to follow a commentary protocol, focusing on the relevance and density of words and ensuring their commentary matched their driving actions. These phrases were then synchronized with sensory images recorded by cameras and low-level driving actions captured by car sensors. The result was a rich vision-language-action data set that was used to train the Lingo-1 model for a variety of tasks.
The Wayve researchers highlighted the efficiency of this approach in a September 2023 paper, stating, "Our driving commentary data enhances our standard expert driving data set collection without compromising the rate at which we collect expert driving data – enabling a cost-effective approach to gather another layer of supervision through natural language."

Wayve is currently testing a fleet of self-driving Jaguar iPace EVs in London

Dealing with hallucinations
A known issue with LLMs is 'hallucinations', a phenomenon that happens when the model generates text that appears plausible but is factually false. For instance, a model might make incorrect statements about historical or scientific facts that could mislead someone lacking in-depth knowledge in that specific field.
While hallucinations may not pose significant problems in non-sensitive applications where users have ample time and resources to verify the information, they can be potentially fatal in time- and safety-critical applications such as driving and healthcare.
Researchers are striving to solve this problem. Part of the solution lies in setting boundaries on what the model can generate. Nvidia's Shapiro says, "The models used inside vehicles will be fine-tuned on a curated version of the broad information used in general LLMs such as ChatGPT."

"HOW DO YOU LEVERAGE THE CORE TECHNOLOGY BUT DO IT IN A WAY THAT PREVENTS HALLUCINATIONS?"
Danny Shapiro, VP of automotive, Nvidia

To help tackle challenges such as hallucination, Nvidia has developed NeMo, a framework for customizing LLMs. With NeMo, developers can fine-tune language models with their own data and set guardrails to prevent them from straying into areas where they are not safe or reliable. NeMo is particularly efficient at working with pre-trained models.
"What we're focused on is how do you leverage the core technology but do it in a way that prevents hallucinations," Shapiro says. "The results are only as good as the data that goes in. So you need to curate the data that goes in and put guardrails on the topics that you want the system to be able to cover."
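The guardrail concept can be sketched with a deliberately simple wrapper. This is plain illustrative Python, not Nvidia's NeMo API; classify_topic and generate are hypothetical stand-ins for real model calls:

```python
ALLOWED_TOPICS = {"navigation", "vehicle status", "driving commentary"}
BLOCKED_PHRASES = ("medical advice", "legal advice")

def guarded_reply(user_msg: str, classify_topic, generate) -> str:
    """classify_topic and generate stand in for a topic classifier
    and a fine-tuned in-car language model, respectively."""
    if classify_topic(user_msg) not in ALLOWED_TOPICS:
        # Input rail: refuse anything outside the curated domain
        return "I can only help with driving-related questions."
    reply = generate(user_msg)
    if any(phrase in reply.lower() for phrase in BLOCKED_PHRASES):
        # Output rail: suppress answers the system cannot give reliably
        return "I'm not able to answer that reliably."
    return reply

# Toy usage with hard-coded stand-ins
print(guarded_reply("Why did we slow down?",
                    classify_topic=lambda m: "driving commentary",
                    generate=lambda m: "Slowing down for a lead vehicle."))
```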




"IN REALITY, LLMs IN AVs ARE GOING TO START OFF SMALL AND CONTINUE TO GROW"
Danny Shapiro, VP of automotive, Nvidia

Lingo-1 can answer questions about a diverse range of driving scenes

Wayve is addressing the issue of hallucinations through a technique known as reinforcement learning from human feedback (RLHF). This method ensures that LLMs align with the goals and intentions of users. The researchers also believe that the multimodal nature of Lingo-1 will make it more robust against hallucinations than text-only LLMs.
"Since Lingo-1 is grounded in vision, language and action, there are more sources of supervision that allow it to understand the world better," the researchers wrote. "It can learn to reason and align its understanding between text descriptions, what it sees in the video and how it interacts with the world, increasing the sources of information that can allow it to understand causal relationships."

The computation bottleneck
Below: Diagram showing the basic Wayve Lingo-1 structure

Large language models require substantial computational power for training and operation. The most advanced models, such as GPT-4, can only function on cloud servers. For time-critical applications like autonomous vehicles, these models must be adapted to run on the onboard computers. Achieving this requires solutions that can shrink the models and make them faster without sacrificing their performance.
There's already a wealth of research aimed at making these models more efficient by modifying their architecture or training data. For instance, some studies have shown that smaller LLMs continue to improve as their training data set expands. Other research employs quantization, a technique that reduces the size of individual parameters in an LLM while minimizing the impact on its performance. These techniques will be instrumental in harnessing the power of LLMs for self-driving cars.
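As a rough illustration of quantization, the sketch below applies PyTorch's dynamic quantization to a small model, storing linear-layer weights as int8. This is one simple variant of the techniques mentioned above, not a production AV deployment flow:

```python
import os
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")

# Dynamic quantization: linear-layer weights are stored as int8,
# and activations are quantized on the fly at inference time
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def size_mb(m: torch.nn.Module) -> float:
    torch.save(m.state_dict(), "_tmp.pt")
    size = os.path.getsize("_tmp.pt") / 1e6
    os.remove("_tmp.pt")
    return size

print(f"fp32 model: {size_mb(model):.0f} MB")
print(f"int8 model: {size_mb(quantized):.0f} MB")
```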
In addition to these efforts, there are hybrid solutions that can enable autonomous vehicles to use LLMs both on board and in the cloud. "We try to do as much as we can in the car, especially for tasks that we have immediate data for on board. It will be much faster and have very low latency when we don't have to go to the cloud," Shapiro says. "But there will be cases where we would go to the cloud for additional information or insight or content that isn't available on the vehicle."

Solving the problems of the future
Breaking down the language barrier between humans and
computers opens the door to myriad new applications.
We’re already witnessing how LLMs such as ChatGPT
are transforming tasks that traditionally require rigid
user interfaces and instructions.
Yet we’re only beginning to discover the full potential
of LLMs. As newer models and technologies emerge,
engineers will undoubtedly continue to explore ways
to harness their power in various fields, including
autonomous vehicles.
“By adding natural language to our AI’s skill set we
are accelerating the development of this technology
while building trust in AI decision making, and this is
vital for widespread adoption,” Wayve’s Shotton says.
Currently, models such as Lingo-1 are demonstrating
the promise of enhanced explainability and communication.
As researchers refine the process of curating the right
training data and setting the appropriate guardrails, these
models may evolve and start tackling more complex tasks.
“In reality, LLMs in AVs are going to start off small
and continue to grow,” Nvidia’s Shapiro says. “That’s the
promised beauty of a software-defined car. Like your
phone, it gets software updates; your car will get smarter
and smarter over time. It mimics us as humans: when
you’re a kid, you have limited knowledge and every day
you learn something new. As you grow, your vocabulary
and your understanding increase. And that’s really what
we’re doing with these vehicles.” ‹



JUNE 4, 5 & 6, 2024
MESSE STUTTGART, GERMANY
120+ EXPERT SPEAKERS

CALL FOR PAPERS
Join an expert speaker line-up discussing the challenges and innovations behind the deployment of ADAS and Autonomous Driving technologies.
Topics: testing, sensing and AI, simulation, software

Interested in speaking at the Conference? Deadline to submit your proposal: January 26, 2024.
If you wish to discuss your proposal please contact Tim Sandford, conference director: Tel: +44 1306 743744, Email: tim.sandford@ukimediaevents.com
www.adas-avtexpo.com/stuttgart
#avtexpostuttgart
Get involved online!
SUPPLIER INTERVIEW: DEWESOFT

Rock solid
AAVI catches up with Bojan Contala, business development manager at Dewesoft, to find out more about the company's recently released Obsidian autonomous data recording product line
By Anthony James

"WE ARE HERE TO CHALLENGE AND CHANGE THE WORLD OF THE MEASUREMENT INDUSTRY"

Please describe your company.
Dewesoft is developing and producing test equipment that simplifies the advancement of humanity. We are here to challenge and change the world of the measurement industry. Being customer focused and developing test and measurement solutions hand in hand with the end user, combined with innovative thinking, is in the company's very roots. It all started with a simple idea that has now grown into global success, serving measurement solutions to the world's leading brands through the worldwide sales, support and service network that covers more than 60 countries around the globe.

You have just released the new Obsidian product line. What's so special about it?
With the latest release of the Obsidian product line, we combined 30 years of development experience and tons of end user feedback to design and develop a product that can be operated easily; is equipped with the latest digital communication protocols to allow wide connectivity and a range of applications; supports the majority of analog sensors on the market; is robust and able to withstand extreme environments; has an embedded, reliable real-time operating system; provides easy remote connection; has a powerful mobile app for simple in-field operation; and includes the award-winning DewesoftX software package with no hidden costs. These are some of the crucial innovations that are included in the latest Obsidian DAQ systems.

Can you elaborate on real-time processing?
The Obsidian platform is the first product line from Dewesoft that has an embedded Linux-based real-time processing core capable of autonomous recording and real-time processing. Therefore, besides simply storing analog, digital, vehicle buses, XCP/CCP, positional information and data from other sources, it can process them and is able to perform certain operations, tasks or actions based on a predefined configuration. Furthermore, combining this with real-time communication through interfaces such as EtherCAT, ethernet, OPC-UA and CAN(FD), and supporting millisecond latency on analog and digital outputs, it is suitable for mission-critical applications and opens a whole new world of testing solutions.

Dewesoft is also releasing a mobile application called DewesoftM. Is there any link between the Obsidian and the app?
The main goal of the DewesoftM mobile app is to enhance user experience and provide an easy-to-use software package that can be used and operated by anyone using their Android or iOS devices. Combining this with the new real-time core that is based on open architecture and integration of a wi-fi interface on Obsidian, it is possible to communicate with the device directly through the mobile app. In addition, the app enables visualization and monitoring of live data with various widgets and simple triggering and tagging of the data.

There are many other data acquisition companies out there. What makes Dewesoft different?
Besides being very innovative and customer centric, there are, in my opinion, three areas where Dewesoft stands out from the crowd.
First, cost of ownership or, even more desirable these days, no hidden costs. Having no annual fees or upgrade fees for software is something unique on the market. Second, the company offers award-winning, state-of-the-art, easy-to-use but powerful software that is continuously developed and upgraded based on the feedback from our end users and technological innovations. Finally, customers can call on outstanding worldwide support and service centers that are there to serve and fulfill their every need. ‹

The Obsidian platform features Linux-based real-time processing
Users can communicate directly with the DAQ via an app



TESTING

AV-in-the-Loop
can test a range of
automated driving
functions

Family affair
Simulation software and real-time hardware from IPG Automotive offer support for all stages of testing
By Carmen Nussbächer, communications manager, IPG Automotive

The automotive industry is currently facing particular challenges. In the past, vehicle development focused on mechanical and automotive engineering. Today, the software in use and the associated control units are the center of attention. Especially with automated and autonomous driving functions, electronic systems and software play an increasingly important role in vehicles.
Assist systems such as highway pilots or valet parking provide the driver with additional comfort on board and have a positive impact on the driving experience. To continuously ensure driving safety, the applied systems have to work correctly at all times. With the growing number of software systems and aligned electronics, the overall system complexity in vehicles is constantly increasing. The more complex the systems, the more effort is required to validate them. Therefore, it is necessary to perform an almost infinite number of test cases.
The evaluation of automated driving functions depends on various components. Among other things, different traffic scenarios need to be considered in the operational design domain (ODD) of the installed systems. A set of scenarios is defined in which the driving function needs to operate safely. In these scenarios as well as during the entire system lifetime, systems are tested step by step to eliminate hazards for the driver, passengers and road users.
The verification of safety-critical situations is a great challenge as it requires a huge number of tests. Reproducing a very large volume of scenarios in the real world, on roads or proving grounds, is generally difficult.
In addition to the growing system complexity, the acceleration of the development process due to ever-shorter development cycles increases the risk of errors. As the development effort as a whole explodes, virtual test driving is regarded as the indispensable foundation of vehicle development.
To avoid complications in the test process, it is sensible to use a holistic test and simulation platform that covers all test methods. For example, failing to discover before the real test drive that the steering system does not match other components results in a prolonged development process and high costs.
On the one hand, the use of simulation solutions reduces the highly cost-intensive and time-consuming physical test effort. On the other, real-time-capable vehicle models with all individual components enable the setup of virtual prototypes early in the development process.

AV-in-the-Loop
With AV-in-the-Loop, IPG Automotive offers a test method that fits all SAE levels. Depending on the use case, it consists of finely tuned software and hardware components and encompasses closed- and open-loop test procedures. The components are applied to the holistic development and validation of driving functions in the fields of MIL, SIL, HIL and VIL. Users can thus test and validate automated and autonomous driving functions according to their specific requirements.
The testing approach depends on the development stage and the specific test priorities. From the perspective of test coverage, software-in-the-loop simulation is the preferred option.




Software development methods such as continuous integration, testing and deployment (CI/CT/CD) help to master challenges during the entire product lifecycle. The continuous generation, testing and improvement of software components enables the automation of single process steps to a large extent.
Idealized perception models allow tests to be scaled cost-efficiently either locally or in the cloud. The highly parallel execution of tests in the cloud enables running the required enormous number of test kilometers virtually in a short time, which increases test range and depth.
time, which increases test range and depth. virtual and real control units into test processes.
family can be enhanced with customer-specific
When testing perception, open-loop tests are Seamless testing from MIL to VIL is simplified add-ons and interfaces.
the most common method. The benefit here is by considering the ODD. Performing failsafe tests In the field of HIL, Xpack4 is used as a
that a detailed setup of virtual environments is also possible to ensure the complete validation
real-time hardware solution. The modular layout
with highly complex sensor models is no longer of automated and autonomous driving functions. enables stationary and mobile use. Different
needed when replaying recorded data from the AV-in-the-Loop is based on a variety of single-board computers, modules, carrier
real test in open loop. products from IPG Automotive. The boards, housings and equipment allow
However, the complete system including CarMaker product family with the configuration of a personalized
perception and hardware is mostly tested in a CarMaker, TruckMaker and hardware platform.
hardware-in-the-loop environment. Powerful MotorcycleMaker is used SensCompute enables
and real-time-capable computing systems ensure as simulation software. fast calculation of multiple
CARMAKER 12, sensor types in simulation.
PACKED WITH Clustering various sensors
accelerates the execution of
NEW FEATURES, computationally expensive
WAS LAUNCHED scenarios. Their optimal
IN MARCH 2023 distribution on multiple
GPUs enables parallel
calculation of camera, radar,
lidar and ultrasonic sensors.
With SensInject, raw sensor data
can be fed directly into control units.
Testing and validation of ADAS and AD
functions including perception and localization
therefore becomes more efficient.

Reliable partner
Around the globe, automated and autonomous
driving functions vary depending on the
manufacturer. No matter whether there are
fundamental or detailed differences, the driving
functions are subject to legal specifications and
also depend on the sensors used. They must
therefore be developed to fit the individual case.
As a reliable and trusted partner, IPG
Automotive supports users to develop and test
automated and autonomous driving functions.
Decades of experience in engineering projects
paired with individually configurable hardware
systems and a seamless simulation platform
enable the setup of toolchains that are tailored
to specific use cases. IPG paves the way for the
future of mobility. ‹

Test setups can be tailored to meet individual customer requirements

CONTACT
IPG Automotive | inquiry no. 101
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi



SIMULATION

Team effort
How the Sim4CAMSens project aims to forge a robust perception sensor ecosystem
By Mike Dempsey, managing director, Claytex

Sim4CAMSens will rigorously test sensor performance

In a rapidly evolving automotive landscape, the promise of autonomous vehicles heralds a new era of mobility characterized by efficiency, lowered operational costs and enhanced safety. Significant investments are being funneled into the development and deployment of AVs, with a critical emphasis on ensuring their safety to satisfy regulators. At the heart of this venture lies the indispensable role of simulation in the development and safety assurance of AVs, especially given the vast array of sensor types and the numerous factors affecting sensor performance. The challenges extend to the massive and diverse spectrum of training data required, and the vital need to establish the credibility of simulations. The Sim4CAMSens project seeks to address these challenges by developing and maturing a modeling and simulation supply chain specifically for perception sensor development and testing.
The Sim4CAMSens project is a collaborative effort of innovation aimed at nurturing the simulation, modeling and physical testing ecosystem for the developers of connected and automated mobility (CAM) perception sensors and systems. The project endeavors to construct a robust supply chain to elevate the quality of modeling, simulation, test and characterization capability. This initiative hopes to accelerate and de-risk the design, development, validation and utilization of perception sensors and the algorithms crucial for automated driving functions. By forging clear links between tools, methodologies, standards and safety cases, Sim4CAMSens is setting a course toward state-of-the-art modeling and simulation environments. These environments are anticipated to generate synthetic training data of requisite quality for training the AI systems employed in AVs.
The collaborative spirit of Sim4CAMSens manifests in its assembly of a world-class consortium of expert partners dedicated to fostering an emerging perception sensors and systems industry. The project is led by Claytex and includes rFpro, Oxford RF, Syselek, NPL, WMG, Compound Semiconductor Applications Catapult and AESIN. It is supported by funding from the UK's Centre for Connected and Autonomous Vehicles as part of its Commercialising CAM program.
The project has three key goals. The first is to quantify and simulate the perception sensors under all conditions to allow sensor suppliers to demonstrate the capabilities of their devices and enable ADS developers to establish a robust process to compare competing devices. The sensor evaluation framework will also support the development and validation of sensor models.
The second key goal is to enhance synthetic training data by improving perception sensor models. Given the challenges that are associated with collecting enough real-world training data, the project will develop high-fidelity sensor models that include the same noise factors as the real devices.
Finally, the project aims to provide regulators with a framework for simulation credibility and AV safety. This is crucial to unlock the path to type approval and enable AVs to be deployed safely on public roads.

The sensor evaluation framework will include synthetic training data

Test, test, test
At the core of the Sim4CAMSens project is a rigorous approach to the testing of perception sensors to be able to measure and quantify their performance under a wide range of test conditions. This involves lab- and field-based test work to identify and quantify the noise factors that affect sensor performance. Throughout the project, different test campaigns will investigate various factors that affect perception sensor performance, covering weather (particularly snow), material properties and other environmental factors. For example, one of the first test activities, which has already started, is to investigate the effect of clothing on the detection of VRUs (vulnerable road users). It has already been observed that lidar sensors can fail to detect pedestrians if they are wearing non-reflective or dark clothing, and the project will also investigate the effect on radar.
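A noise factor such as clothing reflectivity can be captured in even a very crude sensor model. The sketch below is an invented, illustrative detection-probability model, far simpler than the high-fidelity models Sim4CAMSens is developing, but it shows how reflectivity can be treated as a first-class input in simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_probability(range_m, reflectivity, max_range_m=120.0):
    """Toy lidar model: detection probability falls with range and with
    target reflectivity (dark clothing ~0.05, hi-vis vest ~0.8)."""
    base = np.clip(1.0 - range_m / max_range_m, 0.0, 1.0)
    return base * np.sqrt(reflectivity)

for clothing, rho in [("dark jacket", 0.05), ("gray coat", 0.3),
                      ("hi-vis vest", 0.8)]:
    p = detection_probability(40.0, rho)
    hits = rng.random(1000) < p  # Monte Carlo over 1,000 simulated frames
    print(f"{clothing:12s} p={p:.2f}, detected in {hits.mean():.0%} of frames")
```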
The journey of Sim4CAMSens reflects a concerted effort to navigate the complexities and potential of the autonomous landscape, fostering a cooperative relationship between simulation developers and sensor developers and formalizing the test methodologies and frameworks required. This venture embodies the essence of collaborative innovation, driving the industry closer to realizing the promise of an autonomous future. We see this as the start of a journey to grow a relevant and competitive modeling and simulation community and supply chain that will work together in the years to come. ‹

CONTACT
Claytex | inquiry no. 102
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi



SIMULATION

Park better
Advanced automated parking systems can save time, effort and lives
By Michael Hazard, product manager, Applied Intuition

Left: Synthetic fish-eye imagery captured in an underground parking lot
Below: Procedurally generated surface parking lot for use in simulation. Variations enable testing across the parking ODD

In the rapidly evolving landscape of advanced driver assistance systems, Level 3 automated parking systems (APS) have emerged as a major differentiator for auto makers. L3 APS eliminate the need for the driver to control or supervise during parking, enabling a whole new experience of stress-free parking. These systems are designed to automatically find parking spots in congested lots and ensure safe parking in tight spaces, saving time and reducing the risk of accidents.
While L3 APS use general ADAS sensing and perception, automotive engineering teams must overcome unique parking-related challenges when bringing such systems to production.
The APS must function well in diverse, unstructured operational design domains (ODDs), including surface lots, street parking, residential parking, multilevel, underground and mechanical parking structures.
Each ODD includes unique visual elements such as varied line types, surface materials, wheel stops and curbs. Additionally, APS must follow specific traffic controls, such as spaces reserved for people with disabilities and arrows indicating traffic flow.
As with any ADAS, sensors are the backbone of an APS. While L2 highway ADAS lean heavily on forward-facing sensors, L3 parking ADAS mandate comprehensive 360° sensor coverage, ensuring the vehicle is constantly aware of its immediate surroundings. Sensor coverage needs to account for small nearby objects, as a parking system operating shortly after starting up and at shorter distances doesn't have the same opportunity that a highway ADAS has to see objects as they approach.
APS must accurately and efficiently fuse data from more than 10 sensors. Localization algorithms that don't rely on high-definition maps are essential as such maps limit the ability to scale globally. An APS's localization, depth-estimation and free-space estimation algorithms must be sufficiently precise to enable confident driving within centimeters of other vehicles. Novel algorithms are also required to identify parking slots and interpret traffic controls, even without a priori maps.
Teams might also face challenges with planning, prediction and controls. The unstructured topography of parking lots, where most surfaces are driveable and without lanes, makes it challenging to plan paths or predict other vehicles' behavior. Vehicle dynamics often become non-linear at lower speeds, making it difficult to predict how the vehicle will react to a given set of controls.
Addressing these challenges requires a deliberate approach tailored to parking, from design and development to testing and validation. Applied Intuition has a leading parking development solution that facilitates the development and validation of L3 automated parking systems. The company's automated parking solution spans diverse parking domains and regions and provides preconstructed ODD taxonomies, test suites and maps that work from day one and help make development up to 13 times faster.
It supports high-performance multisensor software- and hardware-in-the-loop simulation, catering to the various sensor sets needed for parking. Fish-eye cameras and ultrasonic sensor simulation complement rectilinear camera, radar and lidar simulation to ensure accurate sensor models.
Simulations programmatically generate ground truth annotations for parking lot elements, ranging from parking spaces and traffic controls to depth, disparity and optical flow. Object-level sensors make it possible to mock out perception so that teams can test planning, prediction and controls efficiently in isolation.
The solution includes high-fidelity CarSim vehicle dynamics that accurately models low-speed vehicle behavior. It also incorporates parking behaviors for vulnerable road users and vehicles. This enables automotive OEMs to measure performance with regard to parking quality, adherence to traffic controls and more.
As the automotive industry shifts its focus to L3 APS, auto makers and suppliers need a solution that addresses the complexities associated with APS development. Applied Intuition's solution not only accelerates the development process but also offers customization options as programs mature. In the world of automated parking systems, the solution is the key to navigating the challenges and ensuring that the future of parking is safe, efficient and stress-free. Learn more at https://applied.co/use-cases/automated-parking. ‹

Synthetic camera images captured in a surface parking lot, supporting editing of parking slot lines, surface materials and road markings

CONTACT
Applied Intuition | inquiry no. 103
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi

Visualization of sensor coverage with ultrasonic, radar and lidar sensors
DRIVER MONITORING SYSTEMS

Gentex's DMS precisely monitors driver head pose, eye gaze and other vision-based metrics from the unobstructed, cross-car line view of the full display mirror

Integrating the DMS into the Gentex Full Display Mirror (FDM) enables auto makers to meet impending regulations in the most cost-effective and visually appealing way possible

On reflection
Mirror-integrated driver monitoring is a quick, cost-effective way to meet impending regulatory requirements
By Brian Brackenbury, product line director, Gentex Corporation

For nearly 50 years, Gentex has helped advance automotive rear vision. Today, the company's Full Display Mirror (FDM) – at its core, a digital rearview mirror – helps auto makers introduce revenue-producing electronic features to a vehicle quickly and affordably in a high-performance, cross-platform location. Integrated features can include glare elimination, digital video recorders, trailer cams, car-to-home automation, ADAS alerts and notifications, and now, driver monitoring systems (DMS).
Gentex DMS uses a mirror-borne camera and emitters to biometrically authenticate the driver and track head pose, eye gaze and other vision-based metrics to determine driver distraction, drowsiness, sudden sickness and readiness for the return of manual control. Thanks to the mirror's unique position in the vehicle, the system can be easily expanded to provide 2D and 3D cabin monitoring for detecting passengers, behavior, objects and even presence of life. Gentex's structured-light-based depth mapping with microvibration detection can sense a child's respiration, even if they are seated in a rearward-facing car seat.
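A simplified illustration of how gaze metrics of this kind can be turned into a distraction warning is sketched below. The thresholds and the 'on-road' cone are invented for illustration and do not represent Gentex's algorithms:

```python
GAZE_OFF_ROAD_LIMIT_S = 2.0  # illustrative threshold, not a Gentex figure

class DistractionMonitor:
    """Flags the driver as distracted when gaze stays off-road too long."""

    def __init__(self):
        self.off_road_since = None

    def update(self, yaw_deg: float, pitch_deg: float, t_s: float) -> bool:
        # Toy 'on-road' cone; a real DMS fuses head pose, eye gaze and more
        on_road = abs(yaw_deg) < 20.0 and -10.0 < pitch_deg < 15.0
        if on_road:
            self.off_road_since = None
            return False
        if self.off_road_since is None:
            self.off_road_since = t_s
        return (t_s - self.off_road_since) >= GAZE_OFF_ROAD_LIMIT_S

monitor = DistractionMonitor()
for t, yaw in enumerate([0.0, 35.0, 38.0, 40.0, 42.0]):  # gaze drifts aside
    if monitor.update(yaw, 0.0, float(t)):
        print(f"distraction alert at t={t}s")
```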
Gentex's mirror-integrated DMS will help auto makers meet impending system regulations to reduce traffic fatalities and injuries while staying consistent with hands-free driving legislation gaining popularity across the USA and the rest of the world.

Key benefits
Embedding DMS technology in the mirror is an incredibly practical, safe and cost-effective option for vehicle integration. The DMS camera can be discreetly positioned to make it more aesthetically pleasing to the consumer. Furthermore, the mirror is mounted high up on the windshield and is generally driver-adjusted. This allows the DMS to optically align to the driver's face while maintaining a superior view of the entire vehicle cabin. This high-up placement is also ideal to host future technological innovation as auto makers look to integrate additional features and functions.
Embedding DMS into the mirror is a cross-platform solution that circumvents the need to engineer or tool a system upgrade for each vehicle. It also enables OEMs to introduce DMS in a quick-to-market solution while maintaining scalability for future system enhancements.

A digital sense of smell
Finally, Gentex continues to advance machine olfaction, a digital sense of smell, as the automotive industry progresses toward rideshare and autonomous vehicles. The ongoing maintenance of autonomous fleets and passenger safety depends on continuous and consistent sensing technology.
Gentex's developments in nanofiber sensing technology stem from years of experience in the fire protection industry. Placed in the vehicle's ductwork, the company's emerging particulate and chemical sensing system can consistently monitor the air quality of the vehicle cabin to detect a wide range of contaminants. The system could eventually help identify explosives, biohazards, illicit drugs such as marijuana or fentanyl, pollutants such as ammonia, and many other volatile organic compounds (VOCs).
Gentex's holistic yet scalable driver and in-cabin monitoring technology provides regulatory compliance today while driving us closer to an autonomous future. Mirror-integrated DMS enables auto makers to proactively prepare for inevitable regulation, capitalize on innovation and uphold the bottom line, all while creating a safer driving environment for drivers and passengers alike. ‹

Gentex uses 2D and 3D in-cabin monitoring techniques to monitor the driver, vehicle occupants, body pose and objects left behind

CONTACT
Gentex | inquiry no. 104
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi



TESTING

VBox audio and visual sensors fulfill UNECE and Euro NCAP requirements

Sound and vision
A new test tool enables engineers to check the safety of L3+ driving controls and features
By Jake Durbin, applications engineering manager, Racelogic

As modern vehicles progress through the levels of automation there has been an increase in the need for communication between the car and the driver. This communication is designed to make it clear who is responsible for certain elements of the driving task, warning when action is required by the driver or informing the driver when the vehicle intervenes.
This human-machine interface is delivered via a selection of audio and visual cues that have become an essential safety feature of any vehicle cabin, and as with all safety features, it requires the ability to accurately benchmark, develop and test technologies. The new VBox ADAS Sensor Pack, comprising an audio and visual sensor, has been specifically designed for this field of testing and it fulfills all relevant requirements of UNECE and Euro NCAP.
VBox Automotive has always believed in developing solutions that make life easier for test engineers. A big part of this is providing all the equipment needed to conduct a test and not just the core datalogger technology. This ranges from whole system testing such as pass-by noise to tailored accessories such as the pedal force sensor for brake testing.

Sensor pack
The VBox ADAS Sensor Pack is another great example of a cost-effective piece of equipment that provides test engineers with a simple yet clever solution. Admittedly, this is not a completely original concept; there are other sensors available that are perfectly capable of recording the bings and bongs produced by modern cars. It is their simplicity and robustness that set the VBox sensors apart. They have been purposely designed to be accurate, small, easy to install, quick to move between vehicles, and painless to integrate with other equipment. When a light goes on or a sound is made, the user will know about it.

Below: Easy to install, sensors can be mounted directly on the wing mirror or dashboard, and are easy to integrate with other test equipment

Visual alerts and information related to systems including forward collision warning, blind spot monitoring and safe exit assist/dooring are recorded by the VBox visual sensor. The sensor can be mounted directly on the wing mirror or dashboard to capture the warning light activation. These visual warnings can be either sent to a datalogger for an instant pass/fail assessment based on the specific scenario, or directly transmitted to a driving robot for real-time response to the warning.
The VBox audio sensor captures audible warnings such as those given during forward collision warning (FCW), driver status monitoring (DSM) and safe exit assist/dooring scenarios. It can also be used to detect information functions given as part of occupant status monitoring (OSM), seatbelt reminder (SBR) and intelligent speed assist (ISA).
Both sensors have a digital output with an additional CAN output for the audio sensor. This enables the synchronization of audio and visual warnings with any other parameters being recorded by the vehicle under test.
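To show how such a CAN output might be consumed downstream, the sketch below listens for a warning frame using the open-source python-can library. The CAN ID and channel name are hypothetical, not Racelogic-specified values:

```python
import can  # open-source python-can library

WARNING_FRAME_ID = 0x321  # hypothetical ID for the audio sensor's output

bus = can.interface.Bus(channel="can0", interface="socketcan")
for msg in bus:  # a Bus is iterable and yields received frames
    if msg.arbitration_id == WARNING_FRAME_ID:
        # The frame timestamp lets the warning be aligned with the other
        # parameters being logged from the vehicle under test
        print(f"warning at t={msg.timestamp:.3f}s, payload={msg.data.hex()}")
        break
bus.shutdown()
```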
For now, lights and sounds are the main
method of communication between car and
driver. While it might be a primitive language,
it is undeniably effective and universally
understood. It is also a language in which the
VBox ADAS sensors are fluent. ‹

CONTACT
Racelogic | inquiry no. 105
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi



STANDARDS

Enabling consistent use and interpretation of ASAM standards
Source: ASAM eV

Mind the gap
ASAM is working hard to close the gap between standard specifications and their implementation, providing expert advice and a range of tools to ensure conformity
By Dorothée Bassermann, marketing manager, ASAM

'Pedestrian' or 'vehicle'?

Standards play a crucial role in promoting efficiency, interoperability and consistency across various industries and domains. They simplify data exchange, reduce development efforts and contribute to cost savings while ensuring that data remains accessible, compatible and secure. As adoption of standards continues to grow in the industry, there is an ever-increasing need to ensure consistent understanding and use.
In his recent talk at the ADAS & Autonomous Vehicle Technology Conference in Santa Clara, California, Ben Engel, CTO at ASAM, gave some insight into the topic of interoperability and harmonization of standards. He explained that although standards are designed to enable interoperability of tools, increase efficiency and facilitate collaboration, there remain challenges in their use, including ambiguities within and between standards, different interpretations of standards and differing levels of implementation across tools. All this can lead to a significant investment of resources for manual validation, troubleshooting or extensive discussions.
ASAM, the organization behind a wide range of global standards in automotive, such as ASAM OpenDRIVE and ASAM OpenSCENARIO, is committed to improving the interoperability of its standards and making them even more accessible.
ASAM is continually growing its toolchains and processes to ensure clear, non-ambiguous specifications. Documentation rules, the inclusion of examples and application guidelines, as well as the definition of comprehensive rule sets already minimize the room for interpretation in standards. Additionally, the organization is initiating special alignment projects to pursue harmonization across a multitude of its standards. One of the most recent activities was the alignment of the semantics of dynamic traffic signals in the on-road driving domain, which affects ASAM OpenDRIVE, ASAM OpenSCENARIO and ASAM OSI.
As a new initiative, ASAM is also now investing in the creation of a configuration and reporting framework that will allow users to check files and implementations against these rule sets to ensure conformity with the standards. This framework will be 'standard agnostic' and will be developed by a dedicated ASAM project group. It will enable the execution of a wide variety of ASAM- and user-defined checks for different standards.
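A user-defined check of the kind such a framework could run might look like the toy example below, which scans an ASAM OpenDRIVE file for missing or duplicate road IDs. It is illustrative only and unrelated to ASAM's actual framework implementation:

```python
import xml.etree.ElementTree as ET

def check_unique_road_ids(xodr_path: str) -> list:
    """Toy user-defined check: every <road> element in an ASAM OpenDRIVE
    file must carry a unique id attribute."""
    findings = []
    seen = set()
    for road in ET.parse(xodr_path).getroot().iter("road"):
        rid = road.get("id")
        if rid is None:
            findings.append("road element without an id attribute")
        elif rid in seen:
            findings.append(f"duplicate road id: {rid}")
        seen.add(rid)
    return findings

print(check_unique_road_ids("network.xodr"))  # hypothetical file
```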
ASAM is currently setting up a project to define checker rules and a base suite of checks for a first set of standards: ASAM OpenDRIVE, ASAM OpenSCENARIO and ASAM OTX Extensions. Framework, implementations, checker rules and base suite will be open-source published and continually updated and extended with every new project. (You can keep track of the latest developments via www.asam.net/newsletter.)
Beyond these activities, ASAM is supporting other organizations to develop a quality assurance and certification process to increase interoperability.
All these efforts will stand as a robust solution to the current challenges, paving the way for a more standardized, efficient and interoperable automotive sector. They will foster greater adoption and understanding of the standards and significantly improve interoperability. ‹

CONTACT
ASAM | inquiry no. 106
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi



DATA STORAGE

Ready for a drive?
All-in-one logging and simulation device that supports all stages of development and validation of in-cabin monitoring systems
By Gordan Galic, technical marketing director, Xylon

Today's in-cabin automotive systems
count passengers, guard babies seated
in rear seats, check drivers for signs of
drowsiness and ensure that they are
alert and ready to take over vehicle
controls. Mostly AI based, they share many
common elements with outside-looking,
driving-oriented autonomous systems and ADAS,
with one exception – humans and their behavior.
The definition of a drowsy or medically
impaired driver is outside of automotive
developers’ expertise, which is where experts
in other fields, such as medical and cognitive
science, must step in. They need to establish
biomedically proven levels of drowsiness or
distraction, usually by using medical devices
such as an EEG, EKG or other biosensor.
Automotive engineers then need to respect the
established thresholds and replicate
measurement results using built-in in-cabin
equipment like video cameras and radars.
Xylon's logiRECORDER Automotive HIL Video Logger has been recognized and used by several Tier 1s and OEMs as a flexible tool capable of meeting all of the often conflicting requirements from experts of various profiles involved in in-cabin systems development and testing processes. Experience gained through that cooperation has been expanded thanks to internal development of Xylon's ARTIEYE driver monitoring technology suite (www.artieye.ai). Based on experience, Xylon was able to recognize the main challenges that R&D, AI and validation teams must overcome.
For example, test fleets for logging data sets relevant to driver drowsiness must be equipped with loggers that do not need attention from the vehicle's crew. Drivers are carefully selected based on their height, gender, race, etc, and not on their technical abilities to operate test equipment. Xylon's loggers are self-monitored – the only thing they require is a full storage media exchange.

Remote monitoring
To ensure data collection coherence, the complete test fleet can be remotely monitored by skilled engineers via a 4G mobile network. Remote monitoring also enables automatic tracking of the number of driving hours, recognition of the most interesting test roads and other valuable inputs into machine learning (ML). Xylon's sophisticated triggering mechanisms narrow down the recordings to the interesting data only, improve the quality of the data sets, save time and cut the cost of further on-premises data distilling.
Test vehicles are equipped with multiple 2D NIR video cameras and radars – the likely sensor choices for the final implementation – placed in multiple locations for the purpose of simultaneous build-up of data sets for various sensor setups. Good examples of positions for driver monitoring cameras are behind the wheel and in the rearview mirror. Additionally, in-cabin systems often need information from a vehicle's busses, such as the GPS position or steering wheel angle, from sensors that monitor the vehicle's exteriors, and from biomedical sensors. To make the process less error prone, reference video cameras, 3D cameras and other sensors monitor the data collection process.
The number and variety of sensors make the cumulative logging challenging, yet all possible sensory channels can be precisely timestamped and logged by a single logiRECORDER or Xylon QUATTRO datalogger.
that cooperation has been expanded thanks to QUATTRO datalogger.
driver used in ML. On the bottom left and
internal development of Xylon’s ARTIEYE driver right are screenshots from Xylon’s test The usual inputs into the ML process are
monitoring technology suite (www.artieye.ai). vehicles that illustrate how the ARTIEYE road-recorded test data, provided in industry-
Based on experience, Xylon was able to recognize AI system monitors drivers and estimates standard and open file formats, as well as
the main challenges that R&D, AI and validation their behavior and driving status synthetic data generated in simulation
teams must overcome. environments. ML results are used for
For example, test fleets for logging data sets mechanisms narrow down the recordings to the implementation of AI-based embedded in-cabin
relevant to driver drowsiness must be equipped interesting data only, improve the quality of the driver and passenger monitoring systems. To test
with loggers that do not need attention from the data sets, save time and cut the cost of further and verify those AI electronic systems, Xylon has
vehicle’s crew. Drivers are carefully selected based on-premises data distilling. developed various automated test benches that
on their height, gender, race, etc, and not on their Test vehicles are equipped with multiple 2D use the logiRECORDER to inject reference data
technical abilities to operate test equipment. NIR video cameras and radars – the likely sensor sets into AI hardware and cross-compare the
Xylon’s loggers are self-monitored – the only thing choices for the final implementation – placed in system’s outputs with the established ground
they require is a full storage media exchange. multiple locations for the purpose of simultaneous truth values. No matter the type of golden data
build-up of data sets for various sensor setups. set, whether it is hand-annotated raw data from
Remote monitoring Good examples of positions for driver monitoring the test vehicle or a fully controlled synthetic
To ensure data collection coherence, the cameras are behind the wheel and in the rearview driver’s face from the simulator, Xylon’s loggers
complete test fleet can be remotely monitored mirror. Additionally, in-cabin systems often need supply it in HIL simulations that are electrically
by skilled engineers via a 4G mobile network. information from a vehicle’s busses, such as the and logically identical to data formats expected
Remote monitoring also enables automatic GPS position or steering wheel angle, from sensors within the vehicle environment. ‹
tracking of the number of driving hours, that monitor the vehicle’s exteriors, and from
recognition of the most interesting test roads biomedical sensors. To make the process less error CONTACT
Xylon | inquiry no. 107
and other valuable inputs into machine learning prone, reference video cameras, 3D cameras and To learn more about this advertiser, please visit:
(ML). Xylon’s sophisticated triggering other sensors monitor the data collection process. www.ukimediaevents.com/info/avi



PRODUCTS AND SERVICES

Plug-and-play data solution


The road to autonomous driving will be built on
reliable driver assistance systems, which have
developed rapidly in recent years. Developers
must minimize all risks within their control
solutions to ensure consumer acceptance
and successful approval of their systems. This places
high demands on the computer hardware for testing
and validating the functionalities. In addition to a
compact design, the rugged datalogger must have
server performance for AI applications, and impressive
write speed and expanded memory capacity to handle
extensive datalogging.
The InoNet QuickTray-v3 integrated into the
Mayflower-B17-LiQuid is a modular, removable data
storage device in cartridge design with four U.2 NVMe
SSDs in a RAID 0 array. As a plug-and-play solution with
the latest technology, it offers storage capacities of up to
120TB and write rates of up to 26Gb/s in the encrypted
state in the application. Simple plug-and-play handling
enables a fast exchange of collected data between the
vehicle and the evaluation station. A new hot-plug
capability ensures a smooth changeover without having
to interrupt the operation of the datalogger, enabling
continuous recording shifts and simple data exchange. ‹

CONTACT
InoNet | inquiry no. 108
To learn more about this advertiser, please visit:
www.ukimediaevents.com/info/avi




HAVE YOU MET…?

Florens Gressner
CEO and co-founder, neurocat
Florens Gressner explains how neurocat’s data augmentation
can enhance the performance of perception systems and
improve autonomous vehicle safety
By Anthony James

Why did you found neurocat and how has your vision for the company evolved?
neurocat was founded to help ML developers make safer AI solutions. Nowhere was the need for greater safety more evident than in ADAS perception systems, the failure of which remains the main reason for autonomous vehicle disengagements. And every disengagement puts people's safety at risk.
Our company evolved as we sought to identify the ultimate reasons for these perception failures. The answer came down to data gaps during model training. The world is complex and there are too many possible combinations of conditions in any given ODD to collect them all. Once we identified the problem, we worked on the solution: our aidkit software.

How does aidkit fill these data gaps and improve performance and safety?
aidkit works by augmenting existing image data to enhance data sets so they include data with new conditions: rain, fog, sun flares and so on. These are all customizable to match any ODD. The aidkit software can then test the models against the new data and discover – before deployment – where each would fail, saving significant time and money. But testing is only the beginning. The potential of our augmentations extends to training, validation, you name it.
How can you be sure your leaner augmentation and targeted
How does aidkit fill these data augmentations are good performance approaches. ‹
gaps and improve performance substitutes for real data?
and safety? Augmentations are built using real images
aidkit works by augmenting existing as a basis, unlike simulation, resulting in aidkit enables
the safety
image data to enhance data sets so they a comparatively small simulation gap. validation of
include data with new conditions: rain, This fact allows simpler, understandable AI perception
fog, sun flares and so on. These are all statistical analysis to confirm that the functions
customizable to match any ODD. The augmented data is similar enough to
aidkit software can then test the models real data that we can trust the results
against the new data and discover – before obtained therewith.
deployment – where each would fail, Look at augmentations as a building
saving significant time and money. But block approach. You add one thing, such
testing is only the beginning. The potential as rain, then change it a bit, making it
of our augmentations extends to training, heavier, then add another correlated
validation, you name it. element, such as puddles. Once you know
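Gressner's building-block approach can be expressed as a simple test loop. The sketch below is purely illustrative: augment, evaluate and the accuracy threshold are hypothetical stand-ins, not neurocat's aidkit API.

    # Illustrative sketch of the block-wise testing loop described above.
    # `augment` and `evaluate` are hypothetical stand-ins, not the aidkit API.

    BLOCKS = [                        # one augmentation "building block" each
        ("rain",    {"intensity": 0.3}),
        ("rain",    {"intensity": 0.8}),  # same block, made heavier
        ("puddles", {"coverage": 0.2}),   # correlated follow-on element
    ]

    def augment(images, condition, **params):
        """Stand-in: would return `images` re-rendered with `condition`."""
        return [(img, condition, tuple(sorted(params.items()))) for img in images]

    def evaluate(model, images):
        """Stand-in: would run `model` on `images` and return its accuracy."""
        return model(images)

    def test_blocks(model, real_images, threshold=0.90):
        """Add one block at a time; a failure pinpoints the data gap."""
        gaps, data = [], real_images
        for condition, params in BLOCKS:
            data = augment(data, condition, **params)
            if evaluate(model, data) < threshold:
                gaps.append((condition, params))   # retrain on this block
        return gaps

    # Toy run: a 'model' that only struggles once puddles appear
    toy = lambda imgs: 0.85 if any("puddles" in s for s in map(str, imgs)) else 0.95
    print(test_blocks(toy, ["img_001.png", "img_002.png"]))
    # -> [('puddles', {'coverage': 0.2})]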

INDEX TO ADVERTISERS

ADAS & Autonomous Vehicle International app . . . 18
ADAS & Autonomous Vehicle International online reader inquiry service . . . 59, 63
ADAS & Autonomous Vehicle Technology Expo California 2024 . . . Inside back cover
ADAS & Autonomous Vehicle Technology Expo Europe 2024 . . . 11, 12, 13
ASAM eV . . . 21
Claytex . . . 45
DEWESoft, LLC . . . Outside back cover
Gentex Corporation . . . 3
InoNet Computer GmbH . . . 18
IPG Automotive GmbH . . . 33
Mechanical Simulation . . . Inside front cover
neurocat GmbH . . . 7
Racelogic . . . 27
www.autonomousvehicleinternational.com . . . 51
Xylon doo . . . 39



AUGUST 28 & 29, 2024
SAN JOSE | CALIFORNIA
GET YOUR FREE EXHIBITION
ENTRY PASS. SCAN THE QR
CODE TO REGISTER NOW!

JOIN US IN SILICON
VALLEY NEXT AUGUST
TO HELP ENABLE THE
FUTURE OF MOBILITY

TESTING TOOLS | SENSING AND AI | SIMULATION SOFTWARE

#AVTExpoCA www.adas-avtexpo.com/california
Get involved online!
NAVION® i2
STATE OF THE ART INERTIAL NAVIGATION PLATFORM FOR
ADAS AND AUTOMOTIVE VEHICLE TESTING
• < 2 cm RTK position accuracy
• GPS, GLONASS, BeiDou, Galileo
• Driving robot interface
• CAN and Ethernet interface
• 0.08° slip angle accuracy
• Dewesoft-X-PRO software included

NAVION i2 is a robust and rugged navigational platform combining the best of GNSS positional accuracy with a high-grade IMU.
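For context on the quoted slip angle spec: sideslip is the angle between where the vehicle points and where it actually travels. The minimal sketch below shows the textbook computation from body-frame velocities; it is a generic formula, not Dewesoft's implementation.

    import math

    def sideslip_deg(v_forward, v_lateral):
        """Vehicle sideslip angle in degrees from body-frame velocities.

        v_forward: velocity along the vehicle's longitudinal axis (m/s)
        v_lateral: velocity along the lateral axis (m/s)
        Generic textbook formula; an INS fuses GNSS and IMU data to
        estimate these velocities with high accuracy.
        """
        return math.degrees(math.atan2(v_lateral, v_forward))

    # Example: 0.5 m/s of lateral drift at 30 m/s forward is about 0.95°
    # of slip, an order of magnitude above the quoted 0.08° accuracy.
    print(f"{sideslip_deg(30.0, 0.5):.2f}°")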

sales.us@dewesoft.com | dewesoft.com | 10730 Logan Street | Whitehouse, OH 43571 | Phone: 855-339-3669
