Press "Enter" to skip to content

ML algorithm spots toxic emails and chats in real time

Image: Kim Britten

People managers and HR leaders have a new tool for spotting and stopping harassment at work: a machine learning algorithm that flags toxic communication. CommSafe AI can document incidents of racism and sexism and track patterns of behavior over time. This makes it easier to spot bad actors and intervene when a problem starts instead of waiting until a lawsuit is filed.

The CommSafe AI algorithm monitors email and chat services, including platforms such as Microsoft Teams, and uses machine learning to measure the sentiment and tone of written communication in real time. According to Ty Smith, founder and CEO of CommSafe AI, the algorithm understands nuance and context, such as when “breast” refers to a lunch order from a chicken place and when it refers to human anatomy. Smith said the algorithm avoids the false positives generated by monitoring solutions that use rules or keyword searches to spot problematic comments.
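
To illustrate the distinction Smith describes, the sketch below contrasts a naive keyword rule with a context-aware classifier. It is an illustration built on assumptions, not CommSafe's system: the publicly available unitary/toxic-bert model from Hugging Face stands in for the proprietary algorithm, and the sample messages are invented.

    # Illustrative only: contrasts rule-based keyword flagging with a
    # context-aware transformer classifier. The public unitary/toxic-bert
    # model is a stand-in; CommSafe AI's model is proprietary.
    from transformers import pipeline

    KEYWORDS = {"breast", "hate"}  # toy rule list

    def keyword_flag(message: str) -> bool:
        # Rule-based approach: flags any message containing a keyword,
        # regardless of context, which is the source of false positives.
        words = {w.strip(".,!?").lower() for w in message.split()}
        return bool(words & KEYWORDS)

    # Context-aware approach: the classifier scores the whole sentence,
    # so a benign lunch order is not treated like harassment.
    classifier = pipeline("text-classification", model="unitary/toxic-bert")

    for msg in ("Can you grab me a chicken breast sandwich for lunch?",
                "Nobody wants you on this team, just quit already."):
        print(msg)
        print("  keyword rule flags it:", keyword_flag(msg))
        print("  classifier output:", classifier(msg)[0])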

Smith said the goal is to change behavior when the problem first appears. He also recommends that employers always tell employees that the monitoring service is in place.

“Instead of sending these messages over Slack, they will keep it to themselves or only talk this way when they see the person in person,” he said. “Either way it’s caused a change in behavior in the individual and reduced risk for the company.”

Bern Elliot, a research vice president at Gartner who specializes in artificial intelligence and natural language processing, said sentiment analysis has improved over the last few years due to increased compute power for analyzing larger datasets and the ability to analyze numerous content types.

“Algorithms can now encompass a broader time frame and content range and do it at scale,” he said.

CommSafe customers also can analyze archived communications as part of a harassment investigation.

“If a woman comes to HR and says a co-worker harassed her over Slack six months ago, the software can surface any instance of toxic communication to support whether that happened or not,” he said.
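A retroactive scan of that kind could, in principle, look like the following sketch, which reuses a public toxicity model to filter an exported message archive. The archive format, field names and threshold are assumptions made for illustration; they are not CommSafe's actual interface.

    # Hypothetical sketch of scanning archived messages for toxic content,
    # reusing a public toxicity model. Field names and the archive format
    # are assumptions, not CommSafe's interface.
    from transformers import pipeline

    classifier = pipeline("text-classification", model="unitary/toxic-bert")

    archive = [  # stand-in for exported Slack or Teams history
        {"sender": "user_a", "timestamp": "2021-09-14T10:02:00",
         "text": "sample archived message"},
    ]

    def surface_incidents(messages, threshold=0.8):
        # Return archived messages whose toxicity score crosses the threshold.
        flagged = []
        for msg in messages:
            result = classifier(msg["text"])[0]  # {'label': ..., 'score': ...}
            if result["score"] >= threshold:
                flagged.append({**msg, **result})
        return flagged

    print(surface_incidents(archive))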


Smith said diversity officers can use this tracking and monitoring over time to identify bad actors within the company.

“If a company is going to hire a DEI officer and make them responsible for this work, that person needs to know where to start,” he said.

Elliot said the key is anonymizing this information generally and giving access to only a few people who can address the problematic behavior privately.

“One person should be able to de-anonymize the information,” he said.

Measuring trust and safety

The challenge is scaling this corrective feedback to companies with 10,000 employees or online games with 100,000 users, and monitoring multiple channels for numerous problems, including IP infringement, spam, fraud, phishing, misinformation and illegal content such as non-consensual sexually explicit imagery.

“We haven’t come up with the right tools to manage this,” Elliot said. “The really big companies have big teams of people working on this and they still have a ways to go.”

HR leaders can’t necessarily spot or track repeat patterns of behavior, and individuals on the receiving end of harassment don’t always want to call out the problem themselves, Elliot said.

“If you dig, these things haven’t started out of nowhere,” Elliot said. “These are patterns of behavior and you can see if there are indications of behaviors that someone from HR wants to take a look at.”


Elliot suggested that companies use this monitoring software to measure group social safety as well.

“A pattern of behavior could be within a group or with an individual, and you can correlate with other things,” he said. “People don’t break those rules all the time; there are certain triggers that make it OK.”

Elliot suggested companies consider implementing this kind of sentiment analysis beyond employee communications.

“Toxic content is really a fairly narrow problem; it’s looking at content generated by third parties that you have some responsibility for that is the bigger issue,” he said.

The bigger challenge is monitoring trust and safety in conversations and other interactions that include text, voice, images and even emojis.

Building a model to identify hate speech

Smith started a tech-enabled risk assessment company in 2015 with an initial focus on workplace violence.

“Once I stood up the company and started working with big customers I realized that active shooter situations were a very small piece of that problem,” he said.

In early 2020, he shifted the company’s focus to toxic communication with the idea of getting ahead of the problem instead of responding to bad things that have already happened.

He held a brainstorming session with military and law enforcement officials who had experience dealing with violent individuals. The group landed on toxic communication as the precursor to workplace bullying, racial discrimination and other violent behavior.

Smith said that the CommSafe team used publicly available datasets to build the algorithm, including text extracted from Stormfront, a white supremacist forum, and the Enron email dataset, which includes email from 150 senior managers at the failed company.

After selling a beta version of the software, CommSafe engineers incorporated customer data to train the algorithm on more industry-specific language.
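
As a rough sketch of that kind of training recipe, the example below fine-tunes a generic pretrained transformer on a handful of labeled messages using the Hugging Face Trainer API. The base model, labels and data are placeholders; CommSafe's actual pipeline and datasets are not public.

    # Minimal fine-tuning sketch, not CommSafe's pipeline: adapts a generic
    # pretrained model to toy "toxic" vs. "benign" labels. The same loop
    # could later be rerun on industry-specific customer text.
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                              Trainer, TrainingArguments)

    data = Dataset.from_dict({
        "text": ["example of a hostile message", "example of a routine work email"],
        "label": [1, 0],  # 1 = toxic, 0 = benign (toy labels)
    })

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "distilbert-base-uncased", num_labels=2)

    def tokenize(batch):
        return tokenizer(batch["text"], truncation=True,
                         padding="max_length", max_length=64)

    tokenized = data.map(tokenize, batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="toxicity-model",
                               num_train_epochs=1,
                               per_device_train_batch_size=2),
        train_dataset=tokenized,
    )
    trainer.train()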


“It typically takes between three and eight weeks for the AI to learn about a particular organization and to set a baseline of what the culture looks like,” he said.

The company is planning to release a new version of the software at the end of May. In February, the company received a certification from ServiceNow, which means the CommSafe software is now available in the ServiceNow Store.

The algorithm does not recommend a particular course of action in response to a particular email or Slack message. Smith said HR customers identified the real-time monitoring as the most important feature. Companies also can roll out the monitoring software as part of a ServiceNow integration.

“CommSafe AI customers can build workflows on the ServiceNow side to allow them to solve the problem in real time,” he said.

CommSafe also is working on a phase one Department of Defense contract to test the algorithm’s ability to detect warning signs of suicide and self-harm.

“We’re working with DOD right now to see if we can explore this use case across a broader audience,” he said.

Source: TechRepublic