To achieve large-scale commercialization of autonomous vehicles, a new generation of high-precision 3D environment sensing solid-state LiDAR technology products will be required to fulfill the industry’s strict requirements.
How to Make Autonomous Driving Safe
Q&A with Dr. Leilei Shinohara, VP of R&D | RoboSense LiDAR and the Future of Self-Driving Cars
Tell us about RoboSense and your role with the company.
RoboSense (Suteng Innovation Technology Co., Ltd.) is the leading provider of smart LiDAR environment perception solutions, incorporating LiDAR sensor hardware, AI algorithms, and IC chipsets, and holds the number-one automotive LiDAR market share in Asia. Our technology turns traditionally high-cost LiDAR systems into low-cost ones while also providing full data analysis and comprehension. Our mission is to serve as the “eyes” of autonomous vehicles and ensure the safety of the automated driving system.
In 2018, RoboSense secured a strategic investment of over US$45 million from Alibaba’s Cainiao Network, SAIC, and BAIC, setting the record for the largest single financing round in China's LiDAR industry. RoboSense LiDAR has been widely applied in autonomous passenger cars, logistics vehicles, and buses by domestic and international autonomous driving technology companies, OEMs, and Tier 1 suppliers. RoboSense has received numerous awards, including the CES 2020 and 2019 Innovation Awards, the 2019 AutoSens Award, and the 2019 Stevie Gold Award.
I joined RoboSense as a co-partner and Vice President, and I direct the automotive product line, focusing mainly on developing an automotive-grade, mass-production LiDAR system.
What are the main differences between RoboSense’s solid-state and mechanical LiDAR product lines?
Mechanical LiDAR uses a motor to spin the entire laser and detector unit to scan the environment. It has been our traditional product line since the company was founded. This type of LiDAR is mainly used in customers’ development projects or as a reference system for other sensors.
RoboSense’s solid-state MEMS LiDAR instead uses a MEMS micro-mirror to steer the laser beam for scanning. The MEMS mirror is called solid-state to distinguish it from a mechanically rotated assembly; it is fabricated with processes similar to IC chip fabrication, which is why MEMS LiDAR is categorized as solid-state LiDAR. The RoboSense MEMS LiDAR M1 is the first automotive-grade LiDAR in our automotive product line. It is focused on active front sensing, and the automotive-grade sensor is designed to meet our customers’ requirements.
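As a rough illustration of what a scanning time-of-flight LiDAR does, the sketch below shows the two textbook relationships involved: a mirror tilted by an angle deflects the reflected beam by twice that angle, and a round-trip echo time converts to range as d = c·t/2. The mirror angles and timing values are made up for illustration only; they are not RoboSense design parameters.

```python
# Illustrative sketch of two textbook LiDAR principles (not RoboSense's design):
#  1) a mirror tilted by theta deflects the reflected beam by 2*theta
#  2) time-of-flight ranging: distance = c * t_round_trip / 2

C = 299_792_458.0  # speed of light, m/s


def beam_deflection_deg(mirror_tilt_deg: float) -> float:
    """Law of reflection: tilting the mirror by theta swings the beam by 2*theta."""
    return 2.0 * mirror_tilt_deg


def range_from_tof(round_trip_s: float) -> float:
    """Convert a round-trip echo time into a one-way distance in metres."""
    return C * round_trip_s / 2.0


if __name__ == "__main__":
    # Hypothetical numbers for illustration only.
    for tilt in (-6.0, 0.0, 6.0):          # mirror tilt in degrees
        print(f"mirror tilt {tilt:+.1f} deg -> beam at {beam_deflection_deg(tilt):+.1f} deg")
    echo_time = 1.0e-6                      # 1 microsecond round trip
    print(f"echo after {echo_time * 1e6:.1f} us -> target at {range_from_tof(echo_time):.1f} m")
```

The appeal of steering the beam this way is that only the tiny mirror moves, rather than the whole laser-and-detector assembly spun by the mechanical LiDAR described above.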
What do you feel makes RoboSense's technology different?
First of all, RoboSense's unique capability is the smart sensor. We believe LiDAR hardware without good software is not useful, so we have focused on developing first-class perception algorithms and providing systematic LiDAR solutions with a full set of features to support our customers and get the most out of our LiDAR sensors. The Smart LiDAR Sensor’s built-in perception algorithm has been tested against tens of millions of extreme and complex environmental scenarios, covering rain, fog, snow, sandstorms, and other weather conditions, as well as varied traffic flows and pedestrian situations. RoboSense has already adapted to most of these extreme conditions, ensuring that the perception system can be used in a wide range of complex driving environments. The result of this extensive testing is a rich, reliable, real-time environment perception system.
Second, the talented people at RoboSense have created unique, high-performance solutions. And lastly, our competitive pricing is another RoboSense advantage.
Tell us about the core principles of your LiDAR technology.
Ensuring autonomous driving safety is always our top priority. We make sure that when the RoboSense Smart LiDAR Sensor is integrated into the perception system of current self-driving passenger cars, it provides redundancy for the car’s decision-making and helps prevent accidents such as Tesla’s recent one, greatly improving the safety of the automated driving system and protecting passengers.
Second, we are dedicated to meeting the industry’s mass-production and application requirements. LiDAR is currently limited by its large size and high cost, and we will keep improving it to achieve automotive-grade quality, mass production, high-resolution performance, high stability and reliability, and low cost.
Why do you regard LiDAR as the most important technology to enable autonomous driving?
Conventional sensors, which include cameras and radar, all have their limitations. For example, cameras do not work well in poor ambient light, and radar has difficulty detecting stationary, non-metallic obstacles. A sensing system built only on radar and cameras therefore cannot be guaranteed to reach ASIL-D compliance. LiDAR can cover these weaknesses, but it cannot replace the other sensors on its own, since LiDAR has limitations of its own. A good perception software system (like RoboSense’s) is therefore needed to fuse LiDAR, radar, and camera data for redundancy.
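To make the redundancy idea concrete, here is a minimal, hypothetical late-fusion sketch in Python. It is not RoboSense’s perception software; it only illustrates the principle that an obstacle confirmed by at least two of the three modalities can still be trusted when one of them is degraded, for example a camera at night or a radar facing a stationary non-metallic object. All names and numbers are invented for the example.

```python
# Minimal, hypothetical late-fusion sketch: confirm an obstacle when at least
# two of the three modalities (lidar, radar, camera) report it within a small
# gating distance. Illustrative only -- not RoboSense's perception algorithm.

from dataclasses import dataclass
from math import hypot


@dataclass
class Detection:
    sensor: str   # "lidar", "radar" or "camera"
    x: float      # metres, vehicle frame
    y: float      # metres, vehicle frame


def confirmed_obstacles(detections: list[Detection], gate_m: float = 1.5) -> list[tuple[float, float]]:
    """Return positions supported by detections from at least two different sensors."""
    confirmed = []
    for i, d in enumerate(detections):
        supporting = {d.sensor}
        for other in detections[:i] + detections[i + 1:]:
            if hypot(d.x - other.x, d.y - other.y) <= gate_m:
                supporting.add(other.sensor)
        if len(supporting) >= 2:
            confirmed.append((d.x, d.y))
    return confirmed


if __name__ == "__main__":
    # A stationary obstacle about 40 m ahead: the radar misses it, but lidar
    # and camera agree, so the fused system still reports it.
    frame = [
        Detection("lidar", 40.2, 0.1),
        Detection("camera", 39.8, -0.2),
        Detection("radar", 80.0, 3.0),    # unrelated moving vehicle
    ]
    print(confirmed_obstacles(frame))     # -> [(40.2, 0.1), (39.8, -0.2)]
```

A real fusion stack would also track objects over time and weight each modality by its estimated reliability; the point here is simply that cross-sensor agreement is what provides the redundancy described above.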
You may have heard about the recent accident in which a Tesla Model 3 on Autopilot crashed into a police car in Connecticut on December 9th. This underscores LiDAR’s importance in guaranteeing safety and compensating for the weaknesses of today’s conventional sensors. Both Audi’s A8 (a Level 3 mass-produced autonomous vehicle) and Waymo One (an autonomous ride-hailing service) use LiDAR, which is an important industry indicator. Level 3 autonomous passenger vehicles using LiDAR will gradually become the industry standard.
To achieve large-scale commercialization of autonomous vehicles, a new generation of high-precision, 3D environment-sensing, solid-state LiDAR products will be required to fulfill the industry’s strict requirements, including automotive grade, mass production, high resolution, high reliability, and low cost. We expect MEMS LiDAR, including RoboSense’s RS-LiDAR-M1 MEMS solid-state LiDAR, to be the first generation of solid-state LiDAR for autonomous driving vehicles.
What are your main technology and R&D milestones for this year?
An advanced, pre-mass-production version of the RS-LiDAR-M1, the world’s first and smallest MEMS-based solid-state LiDAR, will be available on the market soon, and the MEMS solid-state RS-LiDAR-M1 will start shipping by the end of 2020. This will be our biggest milestone of 2020. As the winner of the CES 2019 and 2020 Innovation Awards, the new RS-LiDAR-M1 is now half the size of the previous version, with dimensions of just 4.3” x 1.9” x 4.7” (110 mm x 50 mm x 120 mm), and is equipped with enhanced performance and AI perception algorithms. It fully supports Level 3/4 automated driving as well as Level 2+ ADAS applications. We will also launch some new products during CES 2020 that cover various customer applications, so stay tuned.
Where do you see autonomous driving five years from now, and what are the biggest hurdles still to overcome?
There will be step-by-step growth in autonomous vehicles. The biggest concerns are always safety and public acceptance. The SAE defines driving automation in levels from L0 to L5. L2 (partial automation, or advanced ADAS systems) and L3 (conditional automation) passenger vehicles will start growing significantly in 2020/21. Meanwhile, L4 (highly automated) vehicles for special uses, such as automated parking, robo-taxis, and robo-trucks, will enter the commercial stage at the same time. Fully automated vehicles (L5), I think, will still take a long time to arrive. Unless fully automated vehicles can be proven safer than human drivers, they will have difficulty gaining acceptance, but the industry is moving in this direction step by step.
How do you convince the average person that autonomous driving is safe?
The biggest challenge for autonomous driving is safety. The system has to reduce accidents compared with human drivers, and it has to prove to the public that its accident rate is lower. To achieve this, perception of the surrounding environment is very important: the system must “see” further and wider and understand the environment better than a human, so that it can make safer and quicker decisions.
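A simple back-of-envelope calculation (my own illustration; the reaction time and deceleration values are assumptions, not RoboSense figures) shows why “seeing further” matters: the detection range needed to stop for an obstacle grows roughly with the square of vehicle speed.

```python
# Back-of-envelope required sensing range: reaction distance plus braking
# distance, d = v * t_react + v**2 / (2 * a). Assumed values for illustration.

def required_range_m(speed_kmh: float, t_react_s: float = 0.5, decel_mps2: float = 6.0) -> float:
    """Distance travelled during system reaction plus braking to a full stop."""
    v = speed_kmh / 3.6                      # convert km/h to m/s
    return v * t_react_s + v * v / (2.0 * decel_mps2)


if __name__ == "__main__":
    for speed in (50, 100, 130):             # km/h
        print(f"{speed:>3} km/h -> detect obstacles at least {required_range_m(speed):.0f} m ahead")
```

Under these assumptions, highway speeds already call for reliable detection well beyond 100 m, which is why long-range forward sensing matters so much.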
To ensure safety, fusion of many different sensors is needed. When the RoboSense Smart LiDAR Sensor is integrated into the sensing system of current self-driving passenger cars, it goes beyond the limitations of today’s millimeter-wave radar and cameras in identifying obstacles and helps prevent accidents such as Tesla’s recent one, greatly improving the safety of the automated driving system and ensuring passenger safety.
Who are your main industry partners and how are you partnering with them?
RoboSense is Asia’s market leader, with an over 50% share of all LiDAR sold. Our partners include the world’s major autonomous driving technology companies, OEMs, and Tier 1s. Our publicly known strategic partners and investors are Alibaba’s Cainiao Network, SAIC, and BAIC. We also cooperate deeply with top OEMs and Tier 1 companies; for example, we recently announced that China’s FAW (First Automobile Works), one of China’s leading automakers, will use the RoboSense RS-LiDAR-M1 in FAW’s proprietary next-generation autonomous driving system. There are also other ongoing projects in Europe and America, but I cannot disclose the specific names due to NDAs. What I can say is that we work with partners in multiple ways depending on their needs: some require more in-depth cooperation, including joint development, and some simply require us to be their supplier. So far, all of our partners’ feedback has been very positive, and we appreciate their suggestions for making our products even better. We will continue doing our best to serve our partners.
What’s next for RoboSense?
RoboSense’s first priority is developing the solid-state M1 product to automotive-grade mass production. We are developing not only the hardware but also the software, as a comprehensive smart sensor system. The delivery of our automotive-grade MEMS LiDAR in 2020 will be one of our biggest milestones.
We will also continue to improve the performance and price of the mechanical LiDAR product line. In addition, safety is the biggest challenge we will tackle; ensuring it requires fusion of different sensors, so we are also focusing on multi-sensor fusion projects. Furthermore, AD-friendly infrastructure, such as an intelligent vehicle cooperative infrastructure system (IVICS), is also needed, and RoboSense is participating in IVICS projects to provide high-precision perception systems.
RoboSense will be at CES and CES Unveiled in January 2020, at booth 6138 in the LVCC North Hall, to demonstrate our flagship products as well as a new product launch. We are also planning an off-site demo with our test car, which is equipped with all of our different LiDAR sensors. Stay tuned!
About Dr. Leilei Shinohara
Dr. Leilei Shinohara is Vice President of R&D at RoboSense. With more than a decade of experience developing LiDAR systems, Dr. Shinohara is one of the most accomplished experts in the field. Prior to joining RoboSense, he worked at Valeo as the technical lead for the world’s first automotive-grade LiDAR, SCALA®, where he was responsible for multiple programs, including automotive LiDAR and sensor fusion projects. Dr. Shinohara managed an international sensor product development team covering systems, software, hardware, mechanics, testing, validation, and functional safety to build the first automotive-grade LiDAR product.