Twitter Inc has clarified its definition of abusive behavior that will prompt it to delete accounts, banning "hateful conduct" that promotes violence against specific groups. The social media company disclosed the changes on Tuesday in a blog post, following rising criticism that it was not doing enough to thwart Islamic State's use of the site for propaganda and recruitment.
"As always, we embrace and encourage diverse opinions and beliefs, but we will continue to take action on accounts that cross the line into abuse," Megan Cristina, director of Trust and Safety, said in the blog.
The new rules do not mention Islamic State or any other group by name.
"You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or disease," according to the revised rules. (bit.ly/1MFueNn)
The company previously used a more generic warning that banned users from threatening or promoting "violence against others."
J.M. Berger, co-author of a March 2015 Brookings Institution "census" of ISIS Twitter use, which found that the militant group had operated at least 46,000 accounts from September to December of last year, said the change would lead to more aggressive reporting of abuse by users who flag accounts that break the rules.
"The new definition is much clearer and takes some of the guesswork out of determining if a Tweet violates the rules," Berger said.
Rabbi Abraham Cooper, who heads the Digital Terrorism and Hate Project at the Simon Wiesenthal Center in Los Angeles, said that "terrorists and hate groups will leave" if Twitter enforces the revised rules.
He said that would require blocking repeat offenders from setting up new accounts with altered handles and removing thousands of existing accounts that violate the policy.
Tuesday's announcement did not disclose changes to Twitter's enforcement strategy. A company spokesman declined to say if any were in the works.
The new rules also said that Twitter might respond to reports that somebody is considering "self-harm" by contacting the person to express concern and providing contact information for mental health practitioners.
Lawmakers in Congress proposed legislation earlier this month that would require social media operators, including Twitter and Facebook Inc, to notify federal authorities of any detected "terrorist activity."
(Reporting by Jim Finkle in Boston and Dustin Volz in Washington; Editing by Peter Cooney)
"As always, we embrace and encourage diverse opinions and beliefs, but we will continue to take action on accounts that cross the line into abuse," Megan Cristina, director of Trust and Safety, said in the blog.
The new rules do not mention Islamic State or any other group by name.
"You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability or disease," according to the revised rules. (bit.ly/1MFueNn)
The company previously used a more generic warning that banned users from threatening or promoting "violence against others."
J.M. Berger, co-author of a March 2015 Brookings Institute "census" of ISIS Twitter use, which found that the militant group had operated at least 46,000 accounts from September to December of last year, said the change would lead to more aggressive reporting of abuse by users who flag accounts that break the rules.
"The new definition is much clearer and takes some of the guesswork out of determining if a Tweet violates the rules," Berger said.
Rabbi Abraham Cooper, who heads the Digital Terrorism and Hate Project at the Simon Wiesenthal Center in Los Angeles, said that "terrorists and hate groups will leave" if Twitter enforces the revised rules.
He said that would require blocking repeat offenders from setting up new accounts with altered handles and remove thousands of existing counts that violate the policy.
Tuesday's announcement did not disclose changes to Twitter's enforcement strategy. A company spokesman declined to say if any were in the works.
The new rules also said that Twitter might respond to reports that somebody is considering "self-harm" by contacting the person to express concern and provide contact information to mental health practitioners.
Lawmakers in Congress proposed legislation earlier this month that would require social media operators, including Twitter and Facebook Inc, to notify federal authorities of any detected "terrorist activity."
(Reporting by Jim Finkle in Boston and Dustin Volz in Washington; Editing by Peter Cooney)