Artificial Intelligence, Robotics & Ethics

Michel Chabroux, Senior Director, Product Management | Wind River

Artificial Intelligence is a dynamic technology sector, powering a broad spectrum of emerging applications in the fields of industrial robotics and robotic process automation (RPA). As with transformative technologies that came before it, AI is coming under increased ethical scrutiny, giving rise to regulations and policies that constrain the scope of its application. What are the risks, and what is being done to promote the ethical application of one of today's most empowering technologies?


Where AI Intersects with Robotics

What exactly is AI, and what role does it play in robotic technologies? The concept of leveraging computational, pattern-recognition systems capable of learning and reasoning has expanded over time, coming under the broader rubric of machine learning, natural language processing, and data science. In industrial robotics, AI is typically used to sense and perceive the environment in which a physical machine operates so that operations can be automated safely (e.g., a robotic arm on an automotive assembly line). Industrial robots tend to be limited to repetitive motions, have little need for continuous learning, and are not commonly perceived as threatening.
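
To make that sensing role concrete, here is a minimal, hypothetical Python sketch of a perception-gated control loop, in which the arm executes its pre-programmed motion only when no person is detected in the work cell. The ArmStub class, the person_detected threshold, and the frame format are invented for illustration; they stand in for a real robot SDK and a real perception model, not any particular vendor's API.

```python
class ArmStub:
    """Stand-in for a real robot-arm controller API (illustrative only)."""
    def stop(self):
        print("arm: halted, person detected in work cell")
    def step_program(self):
        print("arm: executing next pre-programmed motion")

def person_detected(frame: dict) -> bool:
    # Placeholder for an ML perception model (e.g., an object detector);
    # here, a simple threshold on a precomputed score so the sketch runs.
    return frame.get("person_score", 0.0) > 0.5

def control_step(arm, frame):
    # Gate every motion command on the perception result: the arm moves
    # only when no person is detected in its work cell.
    if person_detected(frame):
        arm.stop()
    else:
        arm.step_program()

control_step(ArmStub(), {"person_score": 0.8})  # prints: arm: halted, ...
```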

But AI is much more than sensors and actuators in industrial machines; it is also the force behind powerful software for modeling, decision support, predictive analytics, and other intelligent applications capable of generating questionable autonomous output. As such, unlike machine robotics, AI applications that ingest massive amounts of data requiring real-time interpretation and analysis to drive ongoing learning are more susceptible to risk.


What Are the Risks, Ethical and Otherwise?

Realizing the benefits of AI in an ethical manner can be quite a challenge, considering that the purview of the technology is still being defined and the environment within which it operates can be somewhat murky. Regardless, the debate surrounding AI's potentially problematic aspects is ongoing, currently centered on a few key issues:

  • Privacy and Security — AI runs on data, and because that data is increasingly collected, stored, and shared over networks, cybersecurity vulnerabilities may pose a threat to individuals and organizations.
  • Opacity/Transparency — How is the data processed, and how is it being used? The patterns an AI system recognizes may not faithfully explain the decisions it produces. Which data is selected, and how is its quality ascertained? Transparency, community engagement, and ‘algorithmic accountability’ need to be built into the system to ensure confidence that AI-derived output meets ethical standards and is fair and free of bias (a minimal sketch of such an audit trail follows this list).
  • Biases — Bias can impact algorithms in a number of ways, be it the use of flawed data or datasets unrelated to the issue at hand (statistical bias), unconsciously attributing positive or negative qualities to the subject being analyzed (unconscious bias), or interpreting the information in a manner that confirms one’s preconceived notions (confirmation bias).
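
One concrete form ‘algorithmic accountability’ can take is an audit trail: every automated decision is recorded alongside its inputs and the model version that produced it, so outputs can be reviewed and challenged after the fact. Below is a minimal, hypothetical Python sketch of that idea; the model stub, field names, and log format are all assumptions for illustration, not a prescribed design.

```python
import json, time

class ModelStub:
    """Stand-in for a real model (illustrative only)."""
    version = "demo-0.1"
    def predict(self, features):
        return "approve" if features.get("score", 0) > 0.5 else "deny"

def audited_predict(model, features, log_file="decisions.log"):
    # Record every decision with its inputs and model version so that
    # outputs can be audited later.
    prediction = model.predict(features)
    with open(log_file, "a") as log:
        log.write(json.dumps({
            "timestamp": time.time(),
            "model_version": model.version,
            "features": features,
            "prediction": prediction,
        }) + "\n")
    return prediction

print(audited_predict(ModelStub(), {"score": 0.7}))  # "approve", and logged
```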


Addressing Ethical Considerations

As with other disruptive technologies that preceded AI, laws and regulations are playing catch-up in this area of incredible growth and opportunity. There are significant technical efforts to detect and remove bias from AI systems, but they are in their early stages. Technological fixes also have their limits: they depend on a mathematical notion of fairness, which is hard to pin down.
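
To see why a mathematical notion of fairness is needed at all, consider demographic parity, one common formalization: the gap in favorable-outcome rates between two groups. The sketch below computes it on toy data invented purely for illustration.

```python
def positive_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# 1 = favorable decision (e.g., loan approved), grouped by a protected attribute
group_a = [1, 1, 0, 1, 0, 1, 1, 0]
group_b = [1, 0, 0, 0, 1, 0, 0, 0]

parity_gap = positive_rate(group_a) - positive_rate(group_b)
print(f"demographic parity gap: {parity_gap:.2f}")  # 0.38 for this toy data

# A nonzero gap flags potential statistical bias, but demographic parity is
# only one definition; others (e.g., equalized odds) can conflict with it,
# which is why a single mathematical notion of fairness is hard to settle on.
```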

Though very little actual policy has been produced, there have been some notable beginnings. The EU's 2019 Ethics Guidelines for Trustworthy AI, produced by the European Commission's High-Level Expert Group on AI, posited that ‘Trustworthy AI’ should be lawful, ethical, and technically robust, and spelled out the requirements to meet those objectives: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being, and accountability. In April 2021, the EU built on that framework with proposed legislation the New York Times calls “a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.”


A Framework for Evaluating AI Applications

AI can deliver substantial benefits to companies that successfully leverage its power, but if implemented without ethical safeguards, it can also damage a company's reputation and future performance. Unlike with other newly introduced technologies (e.g., Bluetooth), however, developing standards or drafting legislation is not easily accomplished. That's because AI covers a broad, amorphous territory: everything from battlefield robots to automated legal assistants used for reviewing contracts. Indeed, just about anything related to machine learning and data science is now considered a form of AI, which makes AI a poor candidate for the development of industry standards. Regardless, something must be done, and inevitably will be.

Going forward, and building on work that has come before, a proposed framework for ensuring ethical implementations is coming into focus. The framework is built around four key pillars: Trust, Transparency, Fairness, and Privacy.

  • Trust — Trustworthiness is the threshold issue. People need to know that the AI applications they are using come from a reliable source and have been developed with responsible and credible oversight.
  • Transparency — Being transparent about how AI is used, and explaining its benefits in specific use-case scenarios, will go a long way toward reducing concerns and expanding adoption.
  • Fairness — Developers need to show that AI is being deployed in a fair and impartial way. Since AI in its elemental state lacks the ability to apply judgment, focusing primarily on pattern recognition, algorithms need to be fine-tuned to remove biases. Processes should also be introduced to avoid the biases that we, as humans, bring due to our own individual experiences.
  • Privacy — It’s critical that developers consider how using AI may impact any personally identifiable information (PII) embedded in the data being processed. While AI processing removes some privacy concerns by bypassing human interaction with sensitive data, it raises others, such as the extent and scope of how the information is used, where it is stored, and who can access it (a minimal pseudonymization sketch follows this list).
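
As one illustration of the privacy pillar, the hypothetical sketch below pseudonymizes PII fields with a salted hash before a record enters an AI pipeline. The field names and salt handling are assumptions for illustration, and note that pseudonymization reduces exposure of raw identifiers but is not full anonymization.

```python
import hashlib

SALT = b"example-salt-store-separately"  # illustrative; keep out of the dataset

def pseudonymize(record: dict, pii_fields=("name", "email")) -> dict:
    # Replace direct identifiers with stable salted-hash tokens so the
    # AI pipeline never sees the raw PII values.
    out = dict(record)
    for field in pii_fields:
        if field in out:
            digest = hashlib.sha256(SALT + str(out[field]).encode()).hexdigest()
            out[field] = digest[:16]
    return out

print(pseudonymize({"name": "Jane Doe", "email": "jane@example.com", "age": 41}))
```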


Conclusion

AI is here to stay, and efforts to tame its power will surely follow. Thinking proactively, industry players are looking into ways to align their applications with the framework described above. One possibility is a peer review system to help establish trust and improve transparency: AI developers would submit their use cases for review in an AI Community akin to the open-source environments that came before.

While open source initially experienced pushback, the situation eventually turned around. At first, the thought of giving away source code for free struck fear into the hearts of CFOs at technology companies worldwide. Over time, it became clear that the transparency of distributing open-source software, with a community that monitored, updated, and offered commentary, created huge efficiencies and led to more significant developments. Today, virtually every software company takes advantage of open-source code.

Another way to add clarity to AI applications is to develop an ad hoc organization in which companies would commit their projects and applications to a central AI Registry. Unlike a formal standards body, which is typically dominated by large organizations that control the agenda, the registry would be a self-reporting body for gathering feedback, advice, and affirmation. Ultimately, the best way to ensure that AI applications are deployed ethically might be to embed ethics in computer science curricula from the very start.
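
To make the registry idea concrete, here is a hypothetical sketch of what a self-reported registry entry might look like, loosely inspired by the "model card" concept. The schema and field names are invented for illustration; they are not a proposed standard.

```python
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    """One self-reported entry in a hypothetical central AI Registry."""
    application: str
    organization: str
    intended_use: str
    data_sources: list[str]
    known_limitations: list[str] = field(default_factory=list)
    human_oversight: str = "unspecified"

entry = RegistryEntry(
    application="contract-review assistant",
    organization="ExampleCo",
    intended_use="flag non-standard clauses for human lawyers to review",
    data_sources=["licensed contract corpus"],
    known_limitations=["not validated outside English-language contracts"],
    human_oversight="all flags reviewed by counsel before action",
)
print(entry)
```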
