by Melissa Quinn
Elected officials and world leaders are questioning whether Internet companies are doing enough to combat terrorism online, following a string of terrorist attacks in Europe over the past few months.
Yet, the question of how to best address the issue, and on which parties the responsibility should fall, is more nuanced.
British Prime Minister Theresa May has emerged as one of the most prominent voices in the discussion, particularly following the attacks in Manchester and London that left more than two dozen people dead.
During a meeting of the G-7 last month, May called on fellow world leaders to take collective action in pushing technology firms to do more to combat terrorism online.
May followed up on her declaration this month when she and French President Emmanuel Macron discussed whether new legal liability is needed for technology companies that fail to remove terrorist content from their platforms.
But many technology companies have already instituted their own policies to deal with terrorist chatter, and industry groups warn that government intervention is not the appropriate step forward.
"That kind of thing can create exactly the wrong incentives for social networks instead of trying to focus on these delicate and nuanced decisions they have to make," said Mark MacCarthy, vice president of public policy at the Software and Information Industry Association. "Each individual company has to step up and live up to their real social and economic obligations in this realm. They have a responsibility to keep their systems clear of material harm."
Indeed, over the last few years, many Internet companies have taken steps to address extremist content on their platforms, with new measures implemented as recently as this month.
Facebook, which had relied on human moderators to identify inappropriate content, is increasing its reliance on artificial intelligence to help analysts identify and block terrorist content.
Google and YouTube also unveiled four new measures they're taking to fight terrorism online, which include an increased reliance on technology to filter content. The efforts from Google and YouTube come in the wake of May's calls, but lawmakers in the U.S. generally agree that companies have taken the initiative to combat extremist content posted on their platforms.
"I am pleased major technology companies are responding to this increase in traffic on their sites, using artificial intelligence and algorithms to identify suspect activity online," Rep. Nita Lowey, D-N.Y., said in a statement to the Washington Examiner. "Now more than ever, it is essential that we partner with industry to identify and prevent the spread of unacceptable content."
Like Lowey, Rep. Ted Poe, R-Texas, said companies such as Facebook and Twitter are "doing better," but he fears terrorists are switching to other platforms such as Telegram to communicate and spread their messages.
"Most of the terrorists, they work online. They don't mail letters to each other anymore. They don't call them on the telephone. Those days are over," Poe told the Washington Examiner. "They've moved to the Internet, and we can help stop this activity by not giving them a way to communicate. I'm all for doing whatever we can."
Poe and lawmakers on both sides of the aisle, and in both chambers of Congress, have sought to tackle the issue of terrorism online legislatively.
A bill Poe introduced in 2015 was enacted in December as part of the Department of State Authorities Act; it requires the president to send Congress a report on his strategy to combat terrorists' use of social media.
Poe said the Trump administration has yet to send its own plan, which was due to Congress in March, but attributed the delay to the change in administrations.
"There isn't an overall plan yet," he said. "There have been conversations. They see it as a problem, but I haven't seen an overall plan."
Other legislation includes a bill from Sen. Dianne Feinstein, D-Calif., introduced in 2015, that would have required technology companies with knowledge of terrorist activity to report the information to the authorities.
But MacCarthy, the vice president of SIIA, said he doesn't believe legislation is needed.
"The proposals that we've seen are harmful in that they would inevitably sweep up more speech than should be swept up," he said. "Companies, they react to legal risk the way good companies do, they're managing their operations. If they're exposed to legal liability for a poorly defined concept like 'terrorist material,' they will inevitably push the envelope a little too far, and useful and important conversations about terrorism and how to take steps to combat it may be caught up in a net where companies are trying to comply with legal obligations."
Even without legislation on the books, technology companies have already found themselves in court because of terrorist content on their sites.
Last month, relatives of the victims of the 2015 attack in San Bernardino, Calif., filed a lawsuit in federal court against Twitter, Google and Facebook, accusing the companies of providing platforms for the Islamic State to promote its extremist beliefs and recruit followers.
Similar cases have favored the technology companies, with the companies citing Section 230 of the Communications Decency Act as the "key defense" in these suits.
"Basically, what Section 230 boils down to is it provides broad immunity to social media platforms, immunity based on user-generated content," said Aaron Mackey, a legal fellow at the Electronic Frontier Foundation. "Anyone online that has a service in which a third party can post content, the publisher is not liable."
Amending Section 230 hasn't been a prominent part of discussions on fighting terrorism online, but Poe said it's an option he wouldn't rule out.
"I would hope it wouldn't come to that," he said. "If it takes that to get this activity taken down, then it's worth looking at. … The idea that [terrorists] use our own American companies to promote terrorism is just horrific. We need to relook at those current laws, and we may have to amend it."