AI Joins the #MeToo Movement

 
 

“Hello . . . If you feel something inappropriate has happened to you at work, I can help.”

That’s the welcome from Spot, a free, web-based chatbot recently launched to help individuals report workplace discrimination and harassment. Spot promises users a “confidential, unbiased platform” for reporting unwelcome conduct in the workplace and uses cognitive technology and natural language processing to engage the user with questions about the reported conduct. Spot then privately records the exchange, which can later be shared should the complainant choose to do so.

Using artificial intelligence (AI) to identify potential employee misconduct is not new to the HR realm. Software applications that monitor employee communications and internet use have long been used to identify and block inappropriate content. These applications help companies spot unlawful employee conduct, identity and property theft, data security issues, and other inappropriate or offensive behavior. In deploying them, companies have balanced their business interests against employees’ reasonable expectations of privacy.

For all its advances, AI data-analysis technology is still limited in its ability to identify and target patterns of workplace harassment, both by privacy concerns and by the technology’s efficacy. AI can’t flag oral communications or physical harassment—yet. Off-hand comments, jokes, and sexually suggestive behavior can be nuanced and, for now, imperceptible to AI. If advances in natural language processing, facial recognition, and cognitive technology continue, such applications of AI to combat workplace harassment may not be far off. Until then, employee self-reporting remains the best way for employers to identify potential harassment. Enter a new suite of AI tools intended to help employees do just that.

Bots to the Rescue?

The EEOC recently reported that almost 75% of workplace harassment incidents go unreported. Fear of retaliation, along with concerns about anonymity and confidentiality, were among the reasons cited for employees’ reluctance to report harassment.

Spot’s developers claim that a bot can allay these concerns: it is not a human, it accepts reports 24/7 for as long as an employee needs, and it cannot share a report unless the employee expressly permits it. And while Spot’s developers claim their bot is better able to handle complaints because it operates without conscious bias, research has shown that human biases can creep into programming, calling such claims into question.

Botler.ai also uses technology to encourage reporting by empowering users with knowledge of the relevant law. The bot uses natural language processing to help the user determine whether a reported incident could be classified as sexual harassment or another sexual offense. Its database was built from documents in 300,000 court cases in Canada and the United States and draws largely on testimony from court filings, since testimony aligns most closely with a conversational tone.

Non-profit Callisto recently developed technology designed to detect repeat perpetrators of sexual assault and to give victims tools to decide whether to report allegations of sexual misconduct. Its pilot program was rolled out on college campuses late last year and is the first of its kind designed to take action upon detecting repeat offenders.

Employer vs. Machine: Legal Ramifications

For now, these new AI developments bring familiar questions for employers about how to address employee allegations of workplace discrimination and harassment raised outside the company’s own complaint-reporting systems. Generally, any report received via these third-party sites should be handled in a manner consistent with the company’s internal complaint-reporting process. These technologies may also mean an uptick in sexual harassment complaints, including non-meritorious claims. A new challenge is whether employees with non-meritorious claims will feel emboldened by their conversations with a bot to pursue them.

It’s still too soon to say whether employers should bring these types of cognitive technologies in-house in their efforts to combat workplace harassment. Given the reported efficacy of chatbots in other areas, like customer service, it’s unlikely that companies will soon turn their HR investigation processes over to a machine. Nevertheless, the potential for improved employee reporting of workplace harassment via AI technology is a promising development for employers and employees alike. The Future Employer blog will keep you posted on new developments in this area.

 
