Press "Enter" to skip to content

How AI, ML, and automation can improve cybersecurity protection

Read insights from industry experts on how artificial intelligence and machine learning will help prevent cybersecurity breaches.



Traditional cybersecurity tools such as anti-malware software and login audits aren't going to be sufficient in 2020; additional resources will be needed to protect organizations and their employees from cyberthreats. Artificial intelligence (AI) and machine learning (ML) are making productive inroads in the cybersecurity space.

SEE: The 10 most important cyberattacks of the decade (free PDF) (TechRepublic)

I spoke with Anish Joshi, vice president of technology at AI solutions provider Fusemachines, and Greg Martin, general manager of the Security Business Unit at Sumo Logic, a machine data analytics company, to get their input on the topic. The interviews have been lightly edited.

Scott Matteson: What are the common pain points with cybersecurity?

Anish Joshi: Security risks in applications are growing in both number and complexity. With the advent of web, mobile, and even Internet of Things (IoT) technology, applications have pervaded personal and professional life, and that expanding footprint widens the potential for damage. There is probably no organization today that doesn't run its own applications. However, the number of applications vulnerable to threats has catapulted, driven by a shortage of skilled technical staff with the expertise needed to build and protect such software. There is also a tendency to cut application development costs through outsourcing, which often leads to low-quality software.

Even more troubling, application security and privacy are often overlooked by startups that lack the resources to address such concerns and are bogged down by fierce competition in a cutthroat environment.

It all boils down to the absence of an in-depth cybersecurity strategy. Cybersecurity management becomes a cumbersome and demanding task as technology pervades every part of the business. Many companies lack a reliable, systematic, risk-based security strategy. Many also lack an application security program; only a few maintain an up-to-date inventory of their applications, the data they process, and the security controls they have implemented. Securing applications without that knowledge is all but impossible.

Greg Martin: Attackers have learned to largely automate their attacks, increasing the frequency of attacks by an order of magnitude. Because of this, alert fatigue, false positives, and the sheer volume of attacks and raw data to analyze make reacting appropriately a near-impossible task for humans. This is all magnified by the widely recognized skills gap and talent shortage in cybersecurity.

SEE: Cybersecurity in 2020: Eight frightening predictions (TechRepublic)

Scott Matteson: What are the most prevalent risks?

Anish Joshi: The most prevalent risk is the theft of private and confidential information through phishing emails. When employees open phishing emails, malware can infiltrate the company's computer systems, eventually costing the company money, trade secrets, and its name and reputation.

This does not just affect companies as a whole but also individuals, whose privacy is violated and whose information can be used to commit fraud, such as stealing money from their bank accounts.

Greg Martin: The sophistication of attacks advances daily, and we are seeing a notable rise in fileless attacks, which is increasingly enabling attackers to “live off the land,” meaning they are leveraging existing scripting capabilities like PowerShell and existing network management tools to propagate and laterally move within enterprise networks. Due to this nuanced activity, additional security tools for detection and response are required, which are generating more alerts and complexity for already overworked, understaffed SOC [security operations center] teams.
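To ground this, here is a minimal sketch, not Sumo Logic's product, of what detecting this kind of "living off the land" activity can look like: scanning process-creation events for PowerShell invocations with hallmarks of fileless attacks. The log records, field names, and indicator patterns are hypothetical.

```python
import re

# Hypothetical indicators commonly associated with fileless PowerShell abuse
SUSPICIOUS_PATTERNS = [
    r"-enc(odedcommand)?\b",              # base64-encoded payloads
    r"downloadstring|invoke-webrequest",  # in-memory download cradles
    r"-windowstyle\s+hidden",             # hidden console windows
    r"iex\b|invoke-expression",           # inline expression execution
]

# Hypothetical, already-parsed process-creation log entries
process_events = [
    {"host": "ws-042", "cmdline": "powershell.exe -NoP -W Hidden -Enc SQBFAFgAIAAoAE4A..."},
    {"host": "ws-017", "cmdline": "powershell.exe Get-ChildItem C:\\Reports"},
]

for event in process_events:
    cmd = event["cmdline"].lower()
    if "powershell" in cmd and any(re.search(p, cmd) for p in SUSPICIOUS_PATTERNS):
        print(f"Review {event['host']}: suspicious PowerShell command line -> {event['cmdline']}")
```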

SEE: What is fileless malware, and how do you protect against it? (free PDF) (TechRepublic)

Scott Matteson: How can AI and ML help with these issues?

Anish Joshi: There's no shortage of data, especially in this digital age. AI and ML can help by processing and analyzing massive amounts of data to spot unusual trends, behavior, and patterns in what is known as anomaly or fraud detection. It's an essential tool for stopping crime in the world of finance. Conventional methods are failing to detect cybersecurity threats as criminals invent new ways of getting around firewalls. Organizations need to be better equipped to prevent such attacks, and the only way to do that is with AI and ML technology sophisticated enough to tackle problems that keep evolving over time.
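As a concrete illustration of the anomaly detection Joshi describes, here is a minimal sketch using scikit-learn's IsolationForest, assuming that library is available; the network-traffic features and values are hypothetical stand-ins for real telemetry.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: bytes sent, bytes received, duration (seconds)
normal_traffic = rng.normal(loc=[5_000, 20_000, 30],
                            scale=[1_000, 5_000, 10],
                            size=(1_000, 3))

# Fit on traffic assumed to be mostly benign
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new connections; a prediction of -1 flags a likely anomaly for analyst review
new_connections = np.array([
    [5_200, 21_000, 28],    # looks like routine traffic
    [900_000, 150, 2],      # huge outbound transfer in two seconds -> suspicious
])
print(model.predict(new_connections))   # e.g. [ 1 -1]
```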

Greg Martin: Providing automated analysis of alerts enables analysts to cut through the noise and triage the alerts presenting the greatest organizational danger. Humans simply can’t process and analyze data as quickly or effectively as a machine learning-driven automation engine can, and humans can’t scale rapidly to meet spikes in demand the way automation can. Automation can be used to facilitate ML-driven security event triage and malicious behavior detection.
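Here is a minimal sketch of ML-driven alert triage in the spirit Martin describes; it is illustrative only, not Sumo Logic's implementation, and the alert features, analyst labels, and model choice are hypothetical.

```python
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical alert features: [severity 1-5, affected hosts, seen_before (0/1), off_hours (0/1)]
historical_alerts = [
    [5, 12, 0, 1],
    [2, 1, 1, 0],
    [4, 3, 0, 1],
    [1, 1, 1, 0],
]
was_true_positive = [1, 0, 1, 0]   # past analyst verdicts (hypothetical)

clf = GradientBoostingClassifier(random_state=0)
clf.fit(historical_alerts, was_true_positive)

# Score incoming alerts and build a triage queue, riskiest first
incoming = [[5, 8, 0, 1], [2, 1, 1, 0]]
risk = clf.predict_proba(incoming)[:, 1]
for score, alert in sorted(zip(risk, incoming), reverse=True):
    print(f"risk={score:.2f} alert={alert}")
```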

Scott Matteson: How do AI and ML improve cybersecurity measures?

Anish Joshi: Password protection, authenticity detection, and multi-factor authentication are some of the measures that can be implemented in cybersecurity. ML algorithms can be used to classify the strength of a password and suggest ones that are stronger and harder to guess; they also allow more sophisticated authentication mechanisms to be implemented, such as biometric logins that use AI to recognize physical traits. Such technology also enables multi-factor authentication, a robust security mechanism that makes a system much harder to infiltrate.
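A minimal sketch of ML-based password-strength classification follows; the hand-crafted features and training labels are hypothetical, and a production system would train on large corpora of breached and known-weak passwords.

```python
import math
from sklearn.linear_model import LogisticRegression

def features(pw: str) -> list[float]:
    """Hand-crafted features: length, character-class diversity, unique-character count."""
    classes = [any(c.islower() for c in pw), any(c.isupper() for c in pw),
               any(c.isdigit() for c in pw), any(not c.isalnum() for c in pw)]
    return [len(pw), sum(classes), math.log2(len(set(pw)) + 1)]

# Hypothetical training set with analyst-assigned labels (0 = weak, 1 = strong)
train_passwords = ["password", "123456", "qwerty",
                   "Tr0ub4dor&3", "x9#Lm2!qRv", "correcthorsebatterystaple"]
labels = [0, 0, 0, 1, 1, 1]

clf = LogisticRegression()
clf.fit([features(p) for p in train_passwords], labels)

for candidate in ["letmein", "G7$knP!w0z"]:
    verdict = "strong" if clf.predict([features(candidate)])[0] else "weak"
    print(f"{candidate}: {verdict}")
```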

Greg Martin: AI/ML and automation greatly enhance endpoint protection, but where we see the most benefit in the technology is in guiding security operations on exactly what to do with those threats once they hit the enterprise. The ever-increasing sophistication and persistence of cybercriminal activity is forcing security operations teams to rethink how they use people, processes, and technology. The antiquated practice of running a SOC as a human-led, 24/7, tiered analyst system using a SIEM [security information and event management] or log management tool to correlate alerts for manual investigation has proved inadequate. What is needed is a re-imagined SIEM/SOC platform where intelligent automation is the driving force in alleviating the data burden facing today's analysts.

Scott Matteson: What is involved with reducing the threats?

Anish Joshi: A few ways to reduce cybersecurity threats are to tighten the current security system by turning off unnecessary services, configuring accounts with the lowest level of privilege, running regular security scans and keeping systems updated, and making sure sensitive data never leaks from the system. It's also important to make employees aware of security issues: watching out for phishing emails that steal private information, using strong, unique passwords that are hard to guess, double-checking that items like confidential files, credit cards, and badges aren't left unattended, keeping data encrypted, and purchasing a cyberinsurance policy in case of an emergency.
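One of these steps, turning off unnecessary services, can be partially automated. The following minimal sketch flags listening network services that are not on an approved list; it assumes the psutil package is installed, and the port allowlist is a hypothetical policy.

```python
import psutil

# Hypothetical policy: only SSH and HTTPS are expected to listen on this host
APPROVED_PORTS = {22, 443}

for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_LISTEN or not conn.laddr:
        continue
    if conn.laddr.port not in APPROVED_PORTS:
        try:
            name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        except psutil.NoSuchProcess:
            name = "unknown"
        print(f"Unexpected listener on port {conn.laddr.port} ({name}); review and disable if unneeded")
```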

Greg Martin: Providing security operations with automated tools to find the needle in the haystack and applying valuable human intellect where it's needed most. A skilled SOC analyst is the very best defense we have in combating threats; however, much of the analysis and processing they deliver is mundane and amounts to a waste of human capital. Ideally, human SOC analysts should focus on more advanced and more valuable tasks like threat hunting, threat intelligence, and attribution. The idea is to free analysts to work on high-value tasks and let automation do the grunt work.

Scott Matteson: What are some examples of this in real life?

Greg Martin: A good example is cybercriminals hiding in the noise created by an unmanageable number of alerts and false positives that security operations must analyze. According to a recent Fidelis study, 83% of organizations admit they can't even get to half of the alerts they receive daily.

We've seen actors successfully hide in the noise during Monero cryptocurrency mining operations, where attackers establish an anonymized revenue stream after the initial infection of a new Linux host. After infection, the actors used Haiduc (an SSH brute-forcing tool) to self-replicate and capitalize on other infection opportunities without being detected. Only through automated cross-source analysis can you detect high-volume, low-fidelity events for what they are and triage accordingly.
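As an illustration of that cross-source analysis, here is a minimal sketch that correlates individually low-fidelity events, failed SSH logins across hosts and subsequent mining-pool traffic, into a single higher-confidence alert; the log records and port list are hypothetical.

```python
from collections import Counter, defaultdict

# Hypothetical, already-parsed events pulled from multiple hosts' logs
events = [
    {"src": "10.0.0.8", "host": "web-1", "type": "ssh_auth_failure"},
    {"src": "10.0.0.8", "host": "web-2", "type": "ssh_auth_failure"},
    {"src": "10.0.0.8", "host": "db-1",  "type": "ssh_auth_failure"},
    {"src": "10.0.0.8", "host": "db-1",  "type": "outbound_conn", "dst_port": 3333},
    {"src": "10.0.0.5", "host": "web-1", "type": "ssh_auth_failure"},
]

failures = Counter()                     # failed logins per source address
touched_hosts = defaultdict(set)         # distinct hosts probed by each source
mining_ports = {3333, 5555, 7777}        # ports often used by mining pools (hypothetical policy)
suspicious_outbound = set()

for e in events:
    if e["type"] == "ssh_auth_failure":
        failures[e["src"]] += 1
        touched_hosts[e["src"]].add(e["host"])
    elif e["type"] == "outbound_conn" and e.get("dst_port") in mining_ports:
        suspicious_outbound.add(e["src"])

for src in failures:
    spraying = len(touched_hosts[src]) >= 3          # same source probing many hosts
    if spraying and src in suspicious_outbound:
        print(f"ALERT: {src} sprayed SSH logins across {len(touched_hosts[src])} hosts, "
              f"then opened mining-pool traffic")
```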

Scott Matteson: How might the bad guys evolve their tactics to overcome preventive measures?

Anish Joshi: Since AI and ML are not only used in cybersecurity but also in cybercrime, the bad guys use them to better profile their victims and accelerate attacks. The same technology that can be used in fraud detection can also be used to overcome the existing security measures.

Greg Martin: Attackers themselves will employ greater forms of automation to increase the frequency and sophistication of attacks. It will be a constant race.

SEE: Cybersecurity in 2020: More targeted attacks, AI not a prevention panacea (TechRepublic)

Scott Matteson: Where is the field headed?

Anish Joshi: With the rapid advancement of technology, the world is going to become more interconnected by computer systems, which opens up more opportunities for cybercrime, as almost everything will be within reach of hackers. Although increasingly sophisticated technology will result in more robust and secure systems, it will also give hackers ample opportunity to harness that unprecedented power for the wrong purposes, such as committing fraud.

The next generation of cybersecurity products increasingly incorporates AI and ML technologies. At Fusemachines, we are helping companies level up their protection of classified data and secrets. By training AI software on large datasets of cybersecurity, network, and even physical information, cybersecurity solution providers aim to detect and block abnormal behavior, even if it does not exhibit a known "signature" or pattern.

Over time, companies will incorporate ML into every category of cybersecurity product. In order to get there, Fusemachines is helping companies by providing a set of dedicated consultants, AI engineers, and data scientists.

Greg Martin: Placing automation side-by-side with analysts on the frontlines. Human security operations are not going to be replaced by machines; rather, analysts’ skills and effectiveness will be enhanced by automation, enabling them to offload data crunching and mundane tasks and focus on cognitive processes machines can’t carry out.

SEE: How to build a better cybersecurity defense with deception technologies (TechRepublic)  

I also heard from Paul Trulove, chief product officer of identity governance provider SailPoint, who said:

“Let's take a step back and look at why artificial intelligence (AI) and machine learning (ML) have become the darlings of security. It isn't because the robots are taking over, but rather because the humans are. The population of the internet is somewhere in the 4 billion range, or about half the world's population. Where there are humans, there is risk. That means cybersecurity has become the frontline for how we approach risk in our businesses, our governments, and even our personal lives.

“The sheer volume of cybersecurity threats facing organizations is increasing, and the reality is that humans cannot outrun that risk without the help of technology like AI and ML. AI and ML are force multipliers in that they enable IT security teams to spot the proverbial needle in the 'threat haystack' more rapidly by quickly synthesizing massive amounts of data to detect the real areas of concern. This is true not just for IT security teams but for identity teams as well, who face a similar big data problem, of the identity variety, during their digital transformation. This means more identities (human and non-human) have to be managed across more applications (legacy and cloud) and data (structured and unstructured).

“We’re seeing more organizations taking their existing approach to identity governance to the next level. They are using identity to help them understand who has access to important applications and data (and how that access should and is being used) while leveraging AI and ML to spot areas of user access that could pose a risk to the business or that could signal a compromised user account.

“With AI and ML as part of an identity governance program, businesses have the power to manage and spot risky user behaviors, anticipate user access needs, and automate security processes for what is often tens or hundreds of thousands of users, both humans and bots. All of those users have access to business-critical systems and applications. You can also bet that those users are being targeted by hackers, which is all the more reason to embrace AI and ML across the cybersecurity and identity boards.”


Source: TechRepublic