
Why AI and ML are not cybersecurity solutions – yet

AI and ML are often touted as silver bullets, but real-world applications for the technology seem thin on the ground.

Artificial intelligence (AI) and machine learning (ML) are some of the latest tools being used in the fight against application security vulnerabilities. However, the complexities involved can make it hard to discern what’s actually being used and what lives in a fictional Hollywood setting.


I spoke to Ilia Kolochenko, CEO of web security company High-Tech Bridge, to clear up any confusion.

SEE: Artificial intelligence: Trends, obstacles, and potential wins (Tech Pro Research)

Scott Matteson: What is the overall state of application security today? Has it improved in the last 12 months? If not, why?

Ilia Kolochenko: The overall number and complexity of application security risks continue to grow steadily. Web, mobile, and even IoT applications have become an inseparable part of our personal and business daily lives. People buy, sell, take loans, learn, vote, and even fall in love using applications. Virtually every startup has its own application, let alone large corporations and governmental entities.

However, a growing shortage of skilled technical talent and a predominant trend of cutting application development costs through outsourcing have produced a skyrocketing number of insecure and vulnerable applications. Even worse, startups, which often have to compete in a very aggressive and turbulent environment, simply disregard application security and privacy due to a lack of available resources.

Scott Matteson: AI and ML are often touted as silver bullets, but real-world applications for the technology seem thin on the ground. How can businesses benefit on a practical level from AI and ML?

Ilia Kolochenko: First of all, we need to define the AI acronym, which is widely misused today. Strong AI, capable of learning and solving virtually any set of diverse problems akin to an average human, does not exist yet, and it is unlikely to emerge within the next decade.

Frequently, when someone says AI, they mean Machine Learning. The latter can be very helpful for what we call intelligent automation – a reduction of human labor without loss of quality or reliability of the optimized process.

However, the more complicated a process is, the more expensive and time-consuming it is to deploy a tenable ML technology to automate it. Often, ML systems merely assist professionals by taking care of routine and trivial tasks, empowering people to concentrate on more sophisticated work.
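To make the idea of intelligent automation concrete, here is a minimal hypothetical sketch of an alert-triage step that automatically closes routine alerts it is confident about and escalates everything else to a human analyst. The keyword scores and threshold are invented for illustration; a real system would learn such scores from labeled historical data.

```python
# Hypothetical sketch of "intelligent automation" in security operations:
# handle routine cases automatically, route the rest to a human.
# The signal scores below are made-up assumptions, not product logic.
ROUTINE_SIGNALS = {
    "port scan": 0.9,       # high-volume, usually low-risk noise
    "failed login": 0.8,    # commonly benign in small numbers
    "sql injection": 0.2,   # rarely safe to auto-close
}

def triage(alert, threshold=0.75):
    """Auto-close an alert scored as routine; escalate everything else."""
    score = max(
        (s for keyword, s in ROUTINE_SIGNALS.items() if keyword in alert.lower()),
        default=0.0,  # unknown alert types always go to an analyst
    )
    return "auto-close" if score >= threshold else "escalate to analyst"

print(triage("Port scan from 10.0.0.1"))        # routine -> auto-close
print(triage("Possible SQL injection in /api")) # risky -> analyst
```

The design point mirrors the interview: the automated step absorbs the trivial volume, while low-confidence and high-risk cases still land in front of a professional.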

Scott Matteson: Although application security best practice has been discussed for years, there are still regular horror stories in the media, often due to a failure in basic security measures. Why are the basics still not being followed by significant numbers of businesses?

Ilia Kolochenko: The root cause is a missing or incomplete cybersecurity strategy. With the rapid proliferation of technology into every part of business, holistic cybersecurity management becomes a very arduous and onerous task. Many companies don't have a consistent, coherent, and risk-based security strategy, let alone an application security program. Very few companies have an up-to-date inventory of their applications, the data they process, and the security controls they have implemented. So how can they protect what they don't even know about?

SEE: IT leader’s guide to deep learning (Tech Pro Research)

Scott Matteson: As many businesses grapple with GDPR and personal data requirements, is there a role for ML in data discovery or is the technology not yet mature enough?

Ilia Kolochenko: Yes, this is a process that can be reliably automated using ML technology.
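As a rough illustration of what automated data discovery involves, the sketch below runs a rule-based first pass that flags records containing likely personal data. The regexes and the two PII categories are assumptions made for this example; ML-based discovery tools typically layer trained classifiers on top of pattern matching like this to reduce false positives.

```python
import re

# Hypothetical first-pass personal-data discovery. The patterns below
# are illustrative assumptions, not any specific vendor's detection logic.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def discover_pii(records):
    """Return a mapping of record index -> list of PII types found."""
    findings = {}
    for i, record in enumerate(records):
        hits = [name for name, pattern in PII_PATTERNS.items()
                if pattern.search(record)]
        if hits:
            findings[i] = hits
    return findings

sample = [
    "order #123 shipped",
    "contact: jane.doe@example.com",
    "call +1 (555) 123-4567 for support",
]
print(discover_pii(sample))  # flags records 1 (email) and 2 (phone)
```

An inventory built this way feeds directly into the GDPR concerns discussed next: you can only honor access or erasure requests for data you have actually located.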

Scott Matteson: What privacy perils come with machine learning—particularly following GDPR implementation?

Ilia Kolochenko: Speaking about GDPR in the context of ML, we need to keep in mind that some training datasets may contain real PII, which can make GDPR compliance virtually impossible. A data removal request, for example, can sometimes be infeasible or unreasonably expensive to comply with.

Scott Matteson: Is the weaponization of AI a real threat and just how worried should businesses be?

Ilia Kolochenko: I think it is largely exaggerated these days. AI and ML are not silver bullets in cybersecurity, and the same holds for cybercrime. Bad actors are actively using ML to profile their victims more accurately and to accelerate attacks, but for the moment that is the upper limit.

Image: PhonlamaiPhoto, Getty Images/iStockphoto

Source: TechRepublic