Using our technology, we are able to reduce the cost of the whole vehicle (including the chassis, the computing hardware, the sensing hardware, and the software stack) to under $10,000 USD.
Interview with Shaoshan Liu and Zhe Zhang of PerceptIn
Shaoshan Liu and Zhe Zhang | PerceptIn
PerceptIn designs and manufactures its own sensors for autonomous vehicles. What is the advantage of that for your customers?
Current autonomous driving designs rely on very expensive sensing and computing hardware; take Baidu's open-source Apollo project, for instance, where the hardware cost alone is over $100,000 USD. At that cost, it is very difficult to bring autonomous vehicles to the public in the near future. Our solution is mainly based on computer vision, with the help of our proprietary sensor fusion technologies. Using our technology, we are able to reduce the cost of the whole vehicle (including the chassis, the computing hardware, the sensing hardware, and the software stack) to under $10,000 USD. Instead of directly putting a car into traffic, we start by solving transportation problems in controlled environments, such as campus shuttle services, delivery vehicles, and industrial forklift trucks.
I think the biggest difference between our solution and others comes down to a philosophical question: do you think autonomous vehicles are an extension of traditional cars, or are they a whole new creature? Other solutions treat autonomous vehicles as an extension of traditional cars, whereas we treat them as high-end robots. In detail, other solutions plan to roll out autonomous vehicles as a new generation of cars, whereas we think of them as a new transportation utility whose only function is to transport you from point A to point B.
Say we closed downtown Manhattan to everything except this kind of transportation service, and we guaranteed that you could get into such a vehicle within five minutes and that there would be no traffic at all on the road. Would you still need a car?
Under this scenario, autonomous vehicles essentially become a new living space instead of a car. If on average you spend one hour in this new moving living space, how would you spend your time there? That is still unknown, and it is what we are trying to figure out next: how to deliver the best user experience within autonomous vehicles.
A demo video of our technology can be found here: https://www.youtube.com/
Can you explain what PerceptIn’s new visual intelligence solution includes?
Sure, it is a complete hardware/software/cloud solution.
Please explain why PerceptIn’s core technology (in its visual intelligence solution) is well suited for autonomous vehicles in controlled environments. Can you provide some insight into this market?
As we discussed before, this starts with a philosophical question. In our belief, all intra-city transportation will eventually happen in controlled environments. By controlled environments, I mean environments where autonomous vehicles do not share the roads with human-driven vehicles and move at fairly low speeds (< 20 MPH). Under these conditions, affordable computer-vision-based autonomous driving is already good enough to serve. To enable this, we provide a full-stack solution, including hardware, software, and cloud.
As discussed in the previous question, our sensor captures detailed 360-degree spatial information about the environment in real time, guaranteeing that the vehicle never loses track of itself. In addition, the software provides accurate localization and scene understanding. Moreover, the mapping cloud generates very detailed maps of the environment to guide the autonomous vehicles.
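To make the division of labor between sensing, localization, scene understanding, and cloud maps a little more concrete, here is a minimal Python sketch of that kind of pipeline. It is purely illustrative: the class and function names (Pose, localize, understand_scene, plan_step) and the data passed between them are placeholders invented for this example and do not reflect PerceptIn's actual API.

from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Pose:
    x: float        # metres, in the map frame
    y: float
    heading: float  # radians

@dataclass
class Obstacle:
    label: str                     # e.g. "pedestrian", "forklift"
    position: Tuple[float, float]  # metres, in the map frame

def localize(sensor_frame: Dict, prior: Pose) -> Pose:
    # Fuse the 360-degree spatial data with the previous pose estimate.
    # Placeholder: simply returns the prior; a real system would run
    # visual-inertial odometry against the map here.
    return prior

def understand_scene(sensor_frame: Dict) -> List[Obstacle]:
    # Placeholder for detecting and classifying objects around the vehicle.
    return []

def plan_step(pose: Pose, obstacles: List[Obstacle], hd_map: Dict) -> str:
    # Placeholder decision logic: stop if anything is detected, otherwise
    # follow the route provided by the cloud-generated map.
    return "stop" if obstacles else "follow_route"

if __name__ == "__main__":
    pose = Pose(0.0, 0.0, 0.0)
    frame = {"images": [], "imu": []}              # stand-in for one multi-camera + IMU frame
    hd_map = {"route": [(0.0, 0.0), (10.0, 0.0)]}  # stand-in for a detailed map from the cloud
    pose = localize(frame, pose)
    print(plan_step(pose, understand_scene(frame), hd_map))  # -> follow_route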
What are PerceptIn’s goals vis-à-vis autonomous driving? Where do you see the market headed and what role will PerceptIn play in providing technology to the market?
Our mission is to enable massive adoption of autonomous driving through affordable but reliable full-stack solutions. We believe that massive adoption will happen only if we can have the full solution ready and make it affordable and reliable; we target a full vehicle cost of less than $10,000 USD. There are many companies working on autonomous driving, but most of them work on only one technology. The pain points in this market are 1.) the lack of full-stack, or so-called turn-key, solutions and 2.) existing solutions that are too expensive. We are solving exactly these two problems.
There are a number of products and approaches being promoted to evolve the autonomous driving market. Why is PerceptIn's approach superior to current approaches?
Our solution is complete, affordable, and reliable. It is complete because it includes hardware, software, and cloud, so not much integration effort is required. It is affordable because we target a full vehicle cost of less than $10,000 USD. And it is reliable because our proprietary sensor fusion technologies let other sensors take over if one sensor fails, guaranteeing safety.
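As a rough illustration of that fail-over idea (not PerceptIn's implementation), the sketch below keeps several localization sources ordered by priority and falls back to the next healthy one when the preferred sensor stops reporting. The sensor names, the 0.2-second staleness threshold, and the pose format are all invented for the example.

import time
from typing import Dict, List, Optional

class SensorSource:
    # One localization source (e.g. stereo camera, wheel odometry). The 0.2 s
    # staleness threshold is an arbitrary value chosen for this example.
    def __init__(self, name: str, timeout_s: float = 0.2):
        self.name = name
        self.timeout_s = timeout_s
        self.last_estimate: Optional[Dict] = None
        self.last_update = 0.0

    def update(self, estimate: Dict) -> None:
        self.last_estimate = estimate
        self.last_update = time.monotonic()

    def healthy(self) -> bool:
        # A source counts as healthy only if it has reported recently.
        return (self.last_estimate is not None
                and time.monotonic() - self.last_update < self.timeout_s)

def select_pose(sources: List[SensorSource]) -> Optional[Dict]:
    # Return the estimate from the highest-priority healthy source; None means
    # every source has failed and the vehicle should come to a safe stop.
    for src in sources:  # `sources` is ordered by priority
        if src.healthy():
            return src.last_estimate
    return None

if __name__ == "__main__":
    camera = SensorSource("stereo_camera")
    wheel_odom = SensorSource("wheel_odometry")
    wheel_odom.update({"x": 1.0, "y": 0.0})   # the camera never reports: a simulated failure
    print(select_pose([camera, wheel_odom]))  # falls back to wheel odometry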
What kind of uses (and users) will there be for PerceptIn’s “core technologies and solutions for the next generation of robotic computing platforms?”
Our core vision is robotization: we believe that more and more robots will be built in the next ten years to serve mankind. Our current customers build cleaning robots, in-home service robots, intelligent forklift trucks, and autonomous vehicles. Soon enough we will see robots that automatically deliver your food, and robots that set tables, clean tables, and wash dishes for you. All these different kinds of robots require a consolidated solution and a unified user experience, and PerceptIn enables this with our robotic computing platform.
Can you discuss the types of applications PerceptIn is working on with its clients? Can you provide a sense of how many client projects you’re working on?
We actually have three major product lines:
1.) The IoT-grade robotic solution, code name Zuluko, in which we enable localization and deep learning technologies on IoT-grade hardware; this product line has been adopted by several cleaning robot OEMs.
2.) The commercial-grade solution, code name Ironsides, in which we integrate accurate, high-performance localization and deep learning technologies to help our customers in the in-home service robot and industrial robot sectors.
3.) The autonomous-driving-grade solution, code name DragonFly, in which we integrate long-range accurate localization, scene understanding, and high-precision visual maps to help our customers in the delivery robot and forklift truck industries.
The reason we can support multiple product lines simultaneously is that we have a consolidated technology stack whose core components are shared across all of them.
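One way to picture "one technology stack, many product lines" in code is a shared core configured differently per tier, as in the hypothetical sketch below. Only the product-line names (Zuluko, Ironsides, DragonFly) come from the interview; the configuration fields and every numeric value are invented for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class StackConfig:
    product_line: str    # "Zuluko", "Ironsides", "DragonFly" (names from the interview)
    camera_count: int    # all numeric values below are made up for illustration
    max_range_m: float
    use_cloud_maps: bool

class CoreStack:
    # The shared localization/perception core; only the configuration differs
    # from one product line to the next.
    def __init__(self, cfg: StackConfig):
        self.cfg = cfg

    def describe(self) -> str:
        maps = "cloud HD maps" if self.cfg.use_cloud_maps else "on-device maps"
        return (f"{self.cfg.product_line}: {self.cfg.camera_count} camera(s), "
                f"{self.cfg.max_range_m:.0f} m range, {maps}")

CONFIGS: List[StackConfig] = [
    StackConfig("Zuluko",    camera_count=1, max_range_m=5.0,   use_cloud_maps=False),
    StackConfig("Ironsides", camera_count=4, max_range_m=20.0,  use_cloud_maps=False),
    StackConfig("DragonFly", camera_count=4, max_range_m=100.0, use_cloud_maps=True),
]

if __name__ == "__main__":
    for cfg in CONFIGS:
        print(CoreStack(cfg).describe())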
Can you predict when we will start to see the first autonomous vehicles in the real world? Will they be cars, boats, planes?
This is actually happening; we have seen many demo vehicles from Waymo, Baidu, and others in the past few years. In Singapore, Japan, and Europe, there are already autonomous vehicles running in controlled environments, but the problem is that they are still too expensive for massive adoption.
About Shaoshan Liu
Shaoshan Liu is the co-founder and chairman of PerceptIn, working on developing the next-generation robotics platform. Before founding PerceptIn, he worked on Autonomous Driving and Deep Learning Infrastructure at Baidu USA. Liu has a PhD in Computer Engineering from the University of California, Irvine.
About Zhe Zhang
Zhe Zhang is the co-founder and CEO of PerceptIn. Prior to founding PerceptIn, he worked at Magic Leap in Silicon Valley and prior to that he worked for five years at Microsoft. Zhang has a PhD in Robotics from the State University of New York and an undergraduate degree from Tsinghua University.
The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow