Can people really be deterred from posting hateful messages online just by asking them not to?
A growing number of companies hope so, turning to “nudge” technology to keep user behavior civil on the internet.
Comment warnings, or behavioral nudges, are pop-up boxes that appear when people use online comment fields.
LinkedIn this week became the latest platform to introduce comment warnings as a supplement to its automated systems that try to detect and block unwanted content—with imperfect results. It acted in response to a rise in complaints about inappropriate language this year, said Liz Li, director of product at the Microsoft Corp. company. LinkedIn in the past six months has removed more than 20,000 pieces of content for being hateful, harassing, inflammatory or extremely violent, she said.
The platform will prompt users to “Join us in keeping LinkedIn respectful and professional” and link to its community standards the next time they begin drafting a comment, post or direct message.
“We took a step back and examined what more we needed to be doing to ensure that LinkedIn remains a place for respectful and professional conversation,” Ms. Li said. “While some of our restrictions may be automatic, we know we won’t always get it right.”
Most users will only receive the new prompt once. But LinkedIn is testing a separate nudge for users who have posted inappropriate content in the past. They will see a slightly different request—“Please keep LinkedIn respectful and professional”—and get it every time they go to write something.
Such gentle warnings may be easy to ignore, but they can change behavior online, said Linda Kaye, an associate professor in cyberpsychology at Edge Hill University in the U.K.
“Comment warnings disjoin the thinking process,” she said. “They make you think twice about what would have been impulsive behaviors and actions.”
Facebook Inc.’s Instagram introduced a nudge system to its main feed in July 2019. The technology, which tells users their comment or caption “looks similar to others that have been reported,” came after internal research showed many people don’t mean to post harmful comments, said Liza Crenshaw, an Instagram spokeswoman. Some users just get caught up in the heat of the moment, she said.
The platform has seen positive results from the system, Ms. Crenshaw said, although she declined to disclose figures.
The company last month introduced similar nudges to comments on Instagram Live and the Android version of the Facebook app, with plans for further rollouts.
Publishers are turning to the technology too. Media outlets like AOL, RT and Newsweek this year have used OpenWeb Technologies Ltd. to deploy nudges in their comment sections.
Salon.com LLC invested in OpenWeb’s comment warning system in 2018, when the technology provider was called Spot.IM.
Aside from protecting the site’s writers from hateful comments, there were commercial reasons to try the nudges, said the publisher’s chief revenue officer, Justin Wohl. A civil comments section encourages reader engagement, increasing opportunities to show readers ads or convert them into subscribers, he said. And a nudge mechanism—rather than one that automatically blocks inappropriate language—encourages debate to remain on the site rather than move somewhere like Twitter, he added.
“If you’ve got that angry thought burning away in your tongue, and you’re seeing it blocked automatically, you’re just going to go and put it somewhere else on the internet,” he said. “Nudging means we can say to these people, we’re open to publishing your view, but we’re also taking a stance here to make this a safe space for everybody.”
But asking for good behavior has its limits.
According to OpenWeb research, 34% of users edited a comment after receiving a nudge. Slightly more, 36%, hit publish anyway; their comments were passed on to a moderation team. And 12% abandoned the process.
It is unlikely nudges will work on everyone, said Mary Aiken, professor of forensic cyberpsychology at the University of East London. “No amount of ‘nudging’ is going to deter a functioning sadist or sociopath from engaging in activity which delivers pathological vicarious gratification,” she said.
But nudges aren’t designed to reach the determined troller, said Nadav Shoval, co-founder and chief executive of OpenWeb. The company instead hopes to reach the bulk of impolite commenters—those everyday people who occasionally lose their temper and forget their manners online, Mr. Shoval said.
“It’s our deep belief that most people have a good intent,” he said.
Write to Katie Deighton at [email protected]
Copyright ©2020 Dow Jones & Company, Inc. All Rights Reserved.