Twitter's bot problem is well documented, influencing discourse on divisive topics like politics and civil rights. But it's getting harder and harder to spot these nefarious bots, which often evade detection by borrowing biographies and tweets from real (and often stolen) profiles. (The New York Times recently published an outstanding feature on bots and follower factories.)
Can we distinguish bots from real users using data science? Journalism professor Mike Kearney has developed an R package and Shiny web application that analyzes the tweets of a given user to calculate the probability that the user is a bot. For example, it correctly detects that @TwoHeadlines is a bot.
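If you'd rather score accounts from an R session than the Shiny app, the package itself can be called directly. Here's a minimal sketch; the GitHub install path, the `botornot()` function name, and its return format are assumptions on my part, so check the package README for the exact interface. Note that it needs Twitter API credentials (set up via the rtweet package) to fetch an account's recent tweets.

```r
# Assumed install path -- verify against the package README
# install.packages("devtools")
# devtools::install_github("mkearney/botrnot")

library(botrnot)

# Score a few accounts; botornot() is assumed to take a vector of screen names
# and return one row per user with an estimated probability of being a bot.
results <- botornot(c("TwoHeadlines", "MagicRealismBot", "revodavid"))
print(results)
```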
It does fairly well in my tests, with @MagicRealismBot (99.8%), @RealPressSecBot (99.8%), @NYT4thDownBot (99.3%) and @threat_update (100%) all correctly identified as bots. I'm also pleased to report that my own Twitter account @revodavid is classified as human (2.6% chance of botness). The app has also been used to identify a potential bot interjecting itself into the gun control debate. On the other hand, @RlangTip is classified as a bot (98.8%), despite my being reliably informed that it's written by humans.
To detect bots, the app applies a gradient boosting machine learning model implemented using the gbm package in R. Factors used to identify bot-like behavior include the user's number of followers and followed accounts, the bio, and the use of hashtags, @-mentions, and capital letters in the account's last 100 tweets.
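To make the approach concrete, here is a rough sketch of fitting a gradient boosting classifier with the gbm package on simulated account-level features of the kind listed above. The feature names and data are purely illustrative, not the package's actual training set or model.

```r
library(gbm)

# Simulated per-account features (illustrative only)
set.seed(42)
n <- 1000
train <- data.frame(
  is_bot          = rbinom(n, 1, 0.3),   # 1 = bot, 0 = human
  followers_count = rpois(n, 500),
  friends_count   = rpois(n, 300),
  bio_length      = rpois(n, 80),
  hashtag_rate    = runif(n),            # hashtags per tweet (last 100 tweets)
  mention_rate    = runif(n),            # @-mentions per tweet
  caps_rate       = runif(n)             # share of capital letters per tweet
)

# Fit a gradient boosting machine with Bernoulli loss for binary classification
fit <- gbm(
  is_bot ~ followers_count + friends_count + bio_length +
           hashtag_rate + mention_rate + caps_rate,
  data = train,
  distribution = "bernoulli",
  n.trees = 500,
  interaction.depth = 3,
  shrinkage = 0.05
)

# Predicted probability of "bot" for some accounts
pred <- predict(fit, newdata = train[1:5, ], n.trees = 500, type = "response")
round(pred, 3)
```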
Try out the BotOrNot app at the link below, and let us know in the comments if you find any surprising results.
ShinyApps: {botrnot}