
What Are Autonomous Robots, and Why Should We Care?

Contributed by Eliport

What are autonomous robots? This might not be the big question on everyone’s mind, in between taking the kids to school and writing that report before tomorrow, but it is an important one to ask, as the development of these machines hums quietly along in parallel with our everyday lives. From robot helpers in our workplaces to autonomous vacuum cleaners in our homes, we are entering a new era of robot-human cohabitation, where little machines facilitate our lives in unprecedented ways. 


Photo by Joseph Chan

To make the most of robots, and to ensure that everyone has a part in deciding how they change our lives, we need to be clued up. Robots, and their workings, can no longer remain a mystery to us, because they are becoming increasingly ubiquitous in our societies, particularly in big cities. At Eliport, for example, we aim to revolutionise last-mile delivery with autonomous ground robots. Our robots are likely to significantly change not just how deliveries are made, but eventually also how other city operations, such as street cleaning or law enforcement, are carried out.

We argued in our last article that understanding the ‘smart city’ concept is the only way to ensure that we have a say in how it impacts our living spaces; the same is true of autonomous robots. Understanding what they are, how they work and how they will affect our societies will allow us all to have a voice. So, this week we will take a brief look at how autonomous robots work, before exploring their effects on our cities in next week’s edition.

 

So, what exactly are autonomous robots, and how do they work?

Nowadays, almost any machine that performs tasks for people can loosely be called a robot. From the ATM on your street corner to the shiny coffee machine in your kitchen, automation is already a major part of the everyday. Concepts similar to robots have existed since the 4th century B.C. in Greece, but the first official ‘robots’ were the industrial robots introduced in the mid-20th century. They were pioneers in taking over the jobs that were too dangerous or repetitive for humans, working tirelessly where people couldn’t. Soon after came mobile robots, which could move through the air, in water, and on the ground. Today, we are seeing the rise of a more advanced kind of bot: the autonomous robot.

By definition, all robots are at least semi-autonomous, in that they react to specific events and conditions without needing to be directed in real time. An autonomous robot, however, acts and behaves with a much higher degree of independence. It can accomplish complex objectives on its own, without human operators or wires (i.e. it does not need to be permanently plugged into a power source). It can also maintain itself, for example by recharging when necessary, as demonstrated by the Roomba vacuum cleaner. At their core, these robots are a set of predefined data and behavioural rules: their algorithms and environmental sensors allow them to do the job they’ve been programmed to do ‘autonomously’, and to stay out of danger and trouble. The predefined data might be, say, a map of the environment, or a neural network that has already ‘learned’ to recognise people, animals, cars and so on.
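To make that concrete, here is a minimal sketch in Python of the kind of sense-decide-act loop such behavioural rules boil down to. Everything in it (the class, the thresholds, the fake sensor) is hypothetical and for illustration only, not any real robot’s software:

```python
import random


class ToyRobot:
    """A toy autonomous robot: predefined rules plus a (fake) sensor."""

    def __init__(self, battery=100.0):
        self.battery = battery  # charge level, in percent

    def sense_obstacle_distance(self):
        # Stand-in for a real sensor reading, in metres.
        return random.uniform(0.1, 5.0)

    def step(self):
        # Behavioural rules, checked in priority order.
        if self.battery < 20:                      # self-maintenance rule
            return "return to dock and recharge"
        if self.sense_obstacle_distance() < 0.5:   # safety rule
            return "stop and steer around obstacle"
        self.battery -= 1                          # task rule: keep working
        return "continue towards goal"


robot = ToyRobot(battery=25)
for _ in range(10):
    print(robot.step())
```

A real robot runs a loop like this many times per second, with far richer sensing and a much larger set of rules, but the basic pattern of sense, decide, act is the same.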

Autonomous robots are generally mobile and can therefore move around on their own. Like all other mobile robots, autonomous ‘bots come in many different forms — from flying drones, to ground-based robots, to water-based and even underwater machines. At the moment, they are often limited to a given environment — such as a factory space, shopping mall, railway station, or warehouse. However, as the technology becomes more advanced, they will be put to use in a wider array of environments, from labs and research centres, to our streets (like our future ground-based robots) and the home. The possibilities are (nearly) endless.

A robot’s workplace is often challenging: it frequently involves areas that are too dangerous or difficult for humans to reach, and it can contain chaotic, unforeseen variables. The exact type, orientation and location of the robot’s next object of work, for example, can all vary unpredictably (at least from the robot’s point of view). The robot must be able to deal with these changes and apply different solutions, although it may occasionally need a little help from a human minder.

When it comes to sensors, as with software, different robots use different types. Simpler autonomous robots, such as robot vacuum cleaners, rely on infrared or ultrasound sensors to navigate and ‘see’ their environment. Higher-level robots, like autonomous vehicles, tend to use cameras, radars (radio sensors) or lidars (laser sensors), which give them the ability to constantly identify and categorise the things they ‘see’. These sensors gather the necessary data which, along with data the robot may receive from other sources such as maps, allows it to constantly assess its environs and make real-time ‘decisions’. The more advanced robots need this decision-making ability to execute three principal tasks: obstacle avoidance, localisation and mapping, and route planning.
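As a rough illustration of the obstacle-avoidance part of that decision-making, the sketch below picks a heading from a set of range readings. The function name, the bearings and the safety threshold are all invented for the example:

```python
SAFE_DISTANCE = 0.5  # metres; anything closer is treated as blocked


def choose_heading(readings):
    """readings maps a bearing in degrees to a measured distance in metres
    (as might come from an ultrasound or lidar scan). Returns the bearing
    with the most clearance, or None if every direction is blocked."""
    clear = {bearing: dist for bearing, dist in readings.items()
             if dist >= SAFE_DISTANCE}
    if not clear:
        return None  # stop, or call for help from a human minder
    return max(clear, key=clear.get)


# Example scan: obstacle dead ahead (0 degrees), open space at -45 degrees.
scan = {-90: 2.1, -45: 3.4, 0: 0.3, 45: 1.2, 90: 0.8}
print(choose_heading(scan))  # -45
```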

As robots’ environments often contain chaotic and unforeseen variables (see above), some robot developers are looking into ways to make robots ‘self-learning’, allowing them to acquire new methods of accomplishing their tasks or of adapting to their changing surroundings. These ‘self-learning’ robots are sometimes called adaptive or intelligent robots; they use machine learning or deep learning, both subsets of artificial intelligence (AI), to automatically learn and improve from experience. One example of this is Aibo, the Japanese AI robot pet.
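For a flavour of what ‘learning from experience’ means, here is a deliberately tiny example of Q-learning, one classic machine-learning technique, in which a simulated robot learns by trial and error to walk down a corridor. It is a toy under strong simplifying assumptions, not how Aibo or any production robot is built:

```python
import random

N_STATES, GOAL = 6, 5                  # corridor cells 0..5, goal at cell 5
ACTIONS = (-1, +1)                     # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

for _ in range(500):                   # episodes of trial and error
    s = 0
    while s != GOAL:
        # Mostly exploit what we know so far; sometimes explore at random.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else -0.1   # small cost per step
        best_next = max(q[(s_next, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s_next

# After training, the greedy policy steps right (+1) from every non-goal cell.
print([max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(GOAL)])
```

Real adaptive robots face vastly richer states and sensor inputs, so they typically replace the lookup table with a neural network, but the idea of improving behaviour from accumulated experience is the same.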

 

Conclusions

The ins and outs of the technology can admittedly get very complicated but, essentially, autonomous robots are a fusion of (sometimes artificially intelligent) software, physical robotics hardware, and sensors. Their software could be referred to, metaphorically, as their ‘brain’, their sensors as their ‘senses’, and their hardware as their ‘body’. In the future, all autonomous robots are likely to use machine learning in some capacity, although a mixed approach — combining the different types of hardware and software above — will probably remain the most popular option.

At the moment, most autonomous robots can navigate their environment to some extent but still need a fair amount of human assistance along the way. This is likely to change in the next few years: as the technology grows more ‘intelligent’, and engineers develop more sophisticated hardware, software and sensors, these robots will become much more independent. Widespread adoption of autonomous robots, when it comes, will have a groundbreaking effect on society. These robots will facilitate not only the obvious things, such as e-commerce deliveries, but will also help humans in cleaning, science and research, transport, law enforcement and much more. Next week we will explore this in detail, looking at how autonomous robots are changing the physical, social and economic landscape of our cities right now, and how they will continue to do so in future.

To explore more questions like this, take a look at our publication, or visit our website to see how we plan to revolutionise last-mile delivery with autonomous robots. Invest in our equity crowdfunding campaign to join us on what promises to be an incredible journey: https://www.startengine.com/eliport.

 
