News
Automated Socialbots Invade Facebook
University of British Columbia researchers gained access to Facebook with a herd of automated "socialbots" that went largely undetected by the site's security for two months and harvested personal information from thousands of users.
"We believe that large-scale infiltration in [online social networks] is only one of many future cyber threats, and defending against such threats is the first step towards maintaining a safer social Web for millions of active Web users," they wrote in a paper describing their experiment.
The socialbots were programmed to post regular status updates and to target users who shared mutual connections with the bots' existing friends, since such users are more likely to accept a new friend request. The bots mimicked human behavior well enough that only 20 of the 102 socialbots were detected and blocked by the Facebook Immune System.
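The paper does not include source code, but the behavior described above is simple to outline. The following is a hypothetical Python sketch of a single bot's daily cycle; every class and function name is an invention for illustration, and the "mutual connection" targeting is reduced to a friends-of-friends lookup over a toy graph.

```python
import random
import time

# Illustrative sketch only: the paper does not publish its bot code, so the
# class and function names here are hypothetical stand-ins.

class FakeAccount:
    """Placeholder for a socialbot's Facebook session."""
    def post_status(self, text):
        print(f"status update: {text}")

    def send_friend_request(self, target_id):
        print(f"friend request sent to {target_id}")

def friends_of_friends(friend_ids, graph):
    """Collect second-degree contacts; a visible mutual friend makes a
    request more likely to be accepted."""
    targets = set()
    for fid in friend_ids:
        targets.update(graph.get(fid, []))
    return sorted(targets - set(friend_ids))

def daily_cycle(account, friend_ids, graph, quotes, limit=25):
    account.post_status(random.choice(quotes))        # keep the profile looking "alive"
    candidates = friends_of_friends(friend_ids, graph)
    for target in random.sample(candidates, min(limit, len(candidates))):
        account.send_friend_request(target)
        time.sleep(random.uniform(0.5, 2))            # pacing; a real bot would wait much longer

# Tiny demo on a toy friendship graph.
graph = {"alice": ["carol", "dan"], "bob": ["dan", "erin"]}
daily_cycle(FakeAccount(), ["alice", "bob"], graph, ["Stay curious."], limit=2)
```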
Facebook was chosen for the experiment, the researchers wrote, because it was believed to have more robust defenses against automated activity than other social networks.
The Facebook Immune System performs real-time checks on all read and write actions in the Facebook database, yet it apparently never caught on to the infiltration. "In fact, we did not observe any evidence that the FIS detected what was really going on," they wrote.
The socialbot herd and its controller were all maintained on a single machine for simplicity's sake, but a more extensive network could be created using traditional botnet methods with distributed, compromised computers hosting the socialbots.
The researchers created 49 male and 53 female accounts, each of which initially friended other socialbot accounts to create the illusion of a real person with real Facebook friends. In a two-week "bootstrap" phase, each bot sent out 25 friend requests per day to random account IDs. Of 5,053 requests, 976 -- or a little more than 19 percent -- were accepted.
Sex mattered: female socialbots had an acceptance rate of 22.3 percent, compared with 15.9 percent for the males. Interestingly, all 20 of the socialbots that were identified and blocked were female, and each was discovered only because a Facebook user had flagged it for spamming.
After the bootstrap phase, the socialbots dissolved their connections with one another and spent six weeks in a propagation phase, sending requests to their friends' friends. Of 3,517 such requests, 2,079 -- or 59 percent -- were accepted.
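Those figures are easy to sanity-check. A minimal back-of-the-envelope simulation, assuming (which the paper does not claim) that every request is an independent coin flip at the reported acceptance rates, reproduces totals close to the published ones:

```python
import random

random.seed(1)

def simulate(requests, acceptance_rate):
    """Treat each friend request as an independent Bernoulli trial."""
    return sum(random.random() < acceptance_rate for _ in range(requests))

# Bootstrap phase: 5,053 requests to random users, 976 (about 19.3%) accepted.
print(simulate(5053, 976 / 5053))    # roughly 976

# Propagation phase: 3,517 requests to friends of friends, 2,079 (59%) accepted.
print(simulate(3517, 2079 / 3517))   # roughly 2,079
```

The threefold jump in acceptance rate between the two phases is the point of the propagation step: a request that arrives with mutual friends attached looks far more trustworthy than one from a stranger.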
The bots were programmed with HTTP-request templates that let each one send friend requests as if they came from a browser. They also used an API provided by iheartquotes.com to pull random quotes and blurbs to post as status updates.
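The article describes this tooling only at that level of detail, so the sketch below is heavily hedged: the iheartquotes.com endpoint reflects the simple plain-text API the site exposed at the time (the service is no longer online), and the friend-request URL and form fields are deliberate placeholders rather than Facebook's real templates.

```python
import requests

BROWSER_HEADERS = {
    # Make the bot's traffic resemble an ordinary desktop browser session.
    "User-Agent": "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36",
    "Accept-Language": "en-US,en;q=0.9",
}

def random_status_blurb():
    """Pull a random quote to post as a status update.

    Assumption: iheartquotes.com served plain-text quotes from this path
    circa 2011; the site has since shut down, so this call will fail today.
    """
    resp = requests.get("http://www.iheartquotes.com/api/v1/random",
                        headers=BROWSER_HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.text.strip()

def send_friend_request(session, target_id):
    """Replay a saved HTTP-request template as though it came from a browser.

    The URL and form fields below are placeholders -- the article does not
    disclose the actual templates the bots used.
    """
    return session.post(
        "https://social-network.invalid/friend_request",
        data={"target": target_id},
        headers=BROWSER_HEADERS,
        timeout=10,
    )
```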
The command server interfaced with three external websites: a CAPTCHA-breaking business, used to defeat the CAPTCHA challenges meant to screen out spamming bots; hotornot.com, a photo-sharing website that supplied photos for the socialbots' account profiles; and mail.ru, an e-mail provider.
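Hypothetically, those external dependencies could be wired into the controller as plain configuration; the three service roles come straight from the article, but the structure below is an assumption about how they might be organized.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExternalServices:
    """Third-party services the command server relied on (roles per the article)."""
    captcha_solver: str   # paid CAPTCHA-breaking business, for challenges meant to stop bots
    photo_source: str     # hotornot.com, mined for profile photos for the fake accounts
    email_provider: str   # mail.ru, supplies mailboxes for account registration

BOTMASTER_SERVICES = ExternalServices(
    captcha_solver="third-party CAPTCHA-breaking service",
    photo_source="hotornot.com",
    email_provider="mail.ru",
)
```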
"As the socialbots infiltrated Facebook, they harvested a large set of users' data," the researchers wrote. "We were able to collect news feeds, users' profile information, and ‘wall' messages. We decided, however, to only focus on users' data that have monetary value such as Personally Identifiable Information."
Information gathered included birthdates, addresses, names of spouses, places of work, schools attended, hometowns, e-mail addresses and phone numbers.
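A minimal sketch of how such harvested records might be laid out, with field names taken from the list above and everything else assumed:

```python
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class HarvestedProfile:
    # Field names follow the article's list; the record layout itself is an assumption.
    name: str
    birthdate: Optional[str] = None
    address: Optional[str] = None
    spouse: Optional[str] = None
    workplace: Optional[str] = None
    school: Optional[str] = None
    hometown: Optional[str] = None
    email: Optional[str] = None
    phone: Optional[str] = None

record = HarvestedProfile(name="Example User", hometown="Vancouver", email="user@example.com")
print(json.dumps(asdict(record), indent=2))
```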
Operating a socialbot network from your own computer apparently does not break any laws as long as there is no identity theft or fraud involved, but the researchers addressed the ethical question raised by their experiment.
"We believe that minimal-risk realistic experiments are the only way to reliably estimate the feasibility of an attack in real-world," they concluded. "These experiments allow us, and the wider research community, to get a genuine insight into the ecosystem of online attacks, which are useful in understanding how similar attacks may behave and how to defend against them."
About the Author
William Jackson is the senior writer for Government Computer News (GCN.com).