What Security Privileges Should We Give to AI?
Martin Banks | Modded
Technology has woven its way into our lives, and the more it does, the more vulnerable we become to exploits. That’s why cybersecurity is a growing concern in nearly every industry, from retail to healthcare and beyond. At the same time, cyberattacks are growing more sophisticated, and so are the people carrying them out.
We need help, that much is clear. The information technology (IT) and cybersecurity field is turning to artificial intelligence (AI) and machine learning solutions to offload some of its responsibilities and optimize others.
In a survey, some 80% of telecommunications executives stated they believe their organization cannot respond appropriately to cyberattacks without AI. What’s more, 69% of all senior executives agree they would not be able to respond without AI on their side.
There’s no question this technology is powerful and growing more effective every day at a variety of high-importance tasks. But are there limits to what we should allow AI to do? What security privileges should we be entrusting to artificial intelligence?
What Security Is Safe to Automate? What Is Not?
Artificial intelligence is an excellent tool for aiding in threat detection, network analysis and monitoring, user authentication, pattern recognition, and a variety of other security-related tasks. By covering rote and repetitive work, AI also frees up security professionals to focus on more nuanced or mission-critical operations.
Around 51% of enterprises primarily rely on AI for threat detection, followed closely by security predictions.
But not everything should be automated when it comes to security. There are still tasks that require key human decisions, such as sharing cybersecurity information, training management and staff, upholding compliance, and staying informed about new changes or rules.
More nuanced security decisions should also be left to humans. One example is authentication and user access control. The automation should be able to detect potential threats, instantly block access to protect the network and systems, and lock down segments of the network. That shouldn’t be the end of it, however.
The information, including details about the affected user, should then be passed to a human analyst who has the final say in how to proceed or what to change. What if that user is legitimate and was flagged as nefarious through a misunderstanding? They could miss out on an entire day’s work or worse, depending on how long it takes someone to identify what happened with the system. There is such a thing as being too vigilant, especially when the algorithms and systems behind these cybersecurity platforms are designed by humans.
Additionally, these technologies aren’t always trained to spot anomalies that slip through the cracks or deal with other types of security concerns. Physical security is one example where automation works best when combined with human observation and reaction, along with robustly designed physical barriers.
Automatic security gates, especially in restricted and confidential locations, are incredibly reliable, and some have been tested for well over 200,000 cycles of constant duty. Grilles and gates keep unwanted parties out and allow authorized personnel access to a property. That’s why they are still one of the most important physical security measures relied on today.
Yet, almost all of the most secure locations include human guards at those gates as an extra precaution and a deterrent. That is precisely how AI should be leveraged when it comes to cybersecurity at the enterprise level.
Gate systems can analyze employee and vendor ID badges and make a split-second decision about providing access or not. But it’s all data- and algorithm-driven. The human guards stationed nearby can help ensure the system isn’t being exploited or making the wrong decisions based on faulty logic.
The technology is extremely effective and efficient at what it does, but every so often, it needs the human element as a backup, whether for brick-and-mortar premises or strictly digital ones.
What Security Privileges Should AI Be Given?
The question is not what privileges AI should have, but rather which of those privileges should be backed up by human analysts. The technology is effective at automating a remarkable range of security tasks, but it’s not yet capable of making more complex judgment calls, even with massive troves of data to sift through and act on.
AI is remarkable at:

- Biometric authentication and access control.
- Detecting threats and potential attack vectors (a minimal sketch follows this list).
- Taking immediate action against cyber-events.
- Learning new threat profiles and vectors through natural language processing (NLP).
- Securing conditional access points.
- Identifying viruses, malware, ransomware, and other malicious code.
Most of the technology’s limitations stem from either a lack of system resources and computing power, or poorly defined and implemented algorithms, rules, and definitions. Human-designed artificial intelligence also displays various biases, oftentimes mimicking those of its creators, when turned loose on datasets.
For all of its promise, AI is still not a mature technology and should not be viewed as a failsafe, even for the tasks it’s best suited for, like those named above.
The Importance of an Active InfoSec and Cybersecurity Team
The takeaway is that AI can be an incredibly powerful cybersecurity tool. When it comes to real-time monitoring, threat detection, and immediate action, there is no equal in tech today. AI security solutions can react faster and with better accuracy than any human could. The pitfalls come from anomalous events outside the defined parameters of these data-driven solutions.
This highlights the importance of having an infosec team on hand, whether in-house or outsourced. Such a team can offer valuable support to the AI tools, particularly through secondary review of automated responses.
Even the burden of human oversight can be lessened with the help of AI. Researchers from the MIT Computer Science and Artificial Intelligence Laboratory, working with PatternEx, have created a data-driven solution called AI2. It predicts cyberattacks with up to 85% precision using continuous input from human experts.
Building the Right Support System
The argument is less about how AI should be deployed or leveraged, and more about where its foundation needs human reinforcement. The technology is growing more sophisticated daily, especially with the help of real-time analytics, machine learning, and edge computing. It is capable of handling nearly everything from threat detection and continual analysis to user authentication and the management of conditional permissions.
Among those polled, 62% of enterprises have either fully implemented AI for cybersecurity or are still exploring additional use cases. Only 21% have no plans at all to incorporate it.
There are more arguments for giving AI privileges in cybersecurity than not. The trick is striking a balance between precise automated systems and more nuanced human input.
The content & opinions in this article are the author’s and do not necessarily represent the views of RoboticsTomorrow