Facebook risked the safety of its content moderators when a security lapse exposed their personal information to suspected terrorists on the social network.

More than 1,000 Facebook staff reviewing and removing inappropriate content from the platform were affected by a bug in the software they use, which was discovered in November 2016.

The bug meant that when a moderator removed a group admin, the moderator’s personal profile appeared as a notification in the group’s activity log, where it could be viewed by the remaining admins.

It appears that an activity log feature was introduced for group admins in mid-October last year. Permissions were created to keep employees’ moderation actions from generating entries in the log, but revoking a group admin’s privileges inadvertently created an entry visible to the group’s other admins, although no notifications were produced to draw attention to it.

Of the more than 1,000 moderators affected, six were determined to be “high priority” victims after Facebook concluded that their profiles were likely to have been viewed by potential terrorists. Moderators first suspected there was a problem when they began receiving friend requests from people affiliated with the organisations they were investigating.

Once the breach had been discovered, Facebook’s head of global investigations, Craig D’Souza, directly contacted the affected employees considered to be at highest risk, communicating with them via email, video conference and Facebook Messenger.

The Guardian was able to contact one of the six, an Iraqi-born Irish citizen who quit his job, fled Ireland and spent several months in hiding in eastern Europe after realising that seven individuals affiliated with a suspected terrorist group had viewed his personal profile.

The bug apparently was not fixed until 16 November, two weeks after it had been discovered, meaning it had been active for around a month. It had also retroactively exposed the personal profiles of moderators who had censored accounts as far back as 2016.

Facebook reportedly offered those in the high-risk group a home alarm monitoring system, transport to and from work, and counselling.

A Facebook spokesperson told us: “Our investigation found that only a small fraction of the names were likely viewed, and we never had evidence of any threat to the people impacted or their families as a result of this matter. Even so, we contacted each of them individually to offer support, answer their questions, and take meaningful steps to ensure their safety.

“In addition to communicating with the affected people and the full teams that work in these parts of the company, we have continued to share details with them about a series of technical and process improvements we’ve made to our internal tools to better detect and prevent these types of issues from occurring.”

Facebook has made changes to its infrastructure to prevent a worker’s information from becoming available externally. The company is also testing new administrative accounts that will not require moderators to use their personal accounts when working.

This article originally appeared at itpro.co.uk
