In Part Four of this series, we argued “Twitter inadvertently engineered a platform for A/B testing ideological warfare. In truth, it is A/B/C/D/E testing on toward infinity.” In the simplest colloquial terms: Russian agents can test politically disruptive messages to see what works best.
Twitter makes it easy and inexpensive to introduce a social virus. This is not the type of virus that (currently) requires special anti-virus software to protect your computer. A social virus is an idea. Increasingly it is a virulent meme. Sometimes it is a trope. It is often linked to a variety of mechanisms that promote and share ideological news with no journalistic fact-checking. Some of this effort steers users to YouTube videos, now populated daily with new varieties of authentic hatred and inauthentic candidate representations that are impossible to regulate fully.
Core Challenges: Humanity, Automation & Authenticity
It is difficult to accurately place content (text, images, videos, news, identities) spreading on Twitter into categories tied to an account's history, such as human vs. machine, manual vs. automatic, or real vs. fake. Nevertheless, it remains important to look at how specific users, as well as like-minded clusters of users, take advantage of platform affordances to promote certain kinds of content.
In this post, we dig further into the aggregate and individual activities of accounts that exhibit high levels of suspicious Tweet activity in the Canadian election.
As of this writing, the raw data looks like this. We identified 201,509 screen_names that produced these 2.14 million Tweets. There are 69,990 screen_names (about 35%) that appear in all four collections.
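The cross-collection bookkeeping here amounts to a set intersection over the unique screen_names in each archive. A minimal sketch in Python (the collection labels, Tweet structure, and sample names are invented for illustration, not our actual pipeline or data):

```python
# Hypothetical structure: each collection is a list of Tweet dicts
# carrying a "screen_name" key. The labels "a"-"d" stand in for the
# four hashtag archives.
collections = {
    "a": [{"screen_name": "user1"}, {"screen_name": "user2"}],
    "b": [{"screen_name": "user1"}],
    "c": [{"screen_name": "user1"}, {"screen_name": "user3"}],
    "d": [{"screen_name": "user1"}],
}

# Unique screen_names per collection
names_per_collection = {
    key: {tweet["screen_name"] for tweet in tweets}
    for key, tweets in collections.items()
}

# screen_names appearing in all four collections
in_all_four = set.intersection(*names_per_collection.values())
print(sorted(in_all_four))  # ['user1']
```

With the real archives, `len(in_all_four)` would be the 69,990 figure reported above.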
The top hashtags and screen_names for these archives are:
Setting aside the TrudeauMustGo collection, we examined the most active screen_names across the three remaining collections combined. The top 10 most active screen_names in that combined collection of 1.68 million Tweets are:
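Ranking accounts by activity is a straightforward frequency count over the combined Tweets. A sketch of that step (the sample screen_names are made up; a real run would pass the full 1.68 million Tweets):

```python
from collections import Counter

# Hypothetical flat list of Tweets from the three combined collections
tweets = [
    {"screen_name": "alice"}, {"screen_name": "bob"},
    {"screen_name": "alice"}, {"screen_name": "carol"},
    {"screen_name": "alice"}, {"screen_name": "bob"},
]

# Count Tweets per account and take the most active
activity = Counter(t["screen_name"] for t in tweets)
top_10 = activity.most_common(10)
print(top_10)  # [('alice', 3), ('bob', 2), ('carol', 1)]
```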
Readers can inspect the “Top 3” linked in the bullets below and judge for themselves what we can infer.
Scholars have previously reported that Twitter accounts ending with 8 numerals are often bots or trolls. Roughly 11,000 screen_names in our collection end with 8 numerals. About 1,500 of those currently score as very likely bots in our model, and of that 1,500 only 631 provide a location, the most common of which are not exactly Canadian.
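The eight-numeral heuristic itself is a one-line pattern match. A minimal sketch (the example screen_names are invented):

```python
import re

# Matches screen_names whose final eight characters are all digits
EIGHT_DIGIT_SUFFIX = re.compile(r"\d{8}$")

def ends_with_eight_numerals(screen_name: str) -> bool:
    """Flag an account name ending in eight numerals."""
    return bool(EIGHT_DIGIT_SUFFIX.search(screen_name))

print(ends_with_eight_numerals("patriot20461958"))  # True
print(ends_with_eight_numerals("maplefan99"))       # False
```

Applied across the 201,509 screen_names, a filter like this would yield the roughly 11,000 flagged accounts described above.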
The largest one-day volume spike in this collection came not after the blackface revelation (which drove the September 19 surge in volume) but on September 30, 2019.
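Finding that spike means bucketing Tweets by calendar day and taking the maximum. A sketch using Twitter's standard `created_at` timestamp format (the three sample Tweets are invented):

```python
from collections import Counter
from datetime import datetime

# Hypothetical Tweets with Twitter's created_at timestamp format
tweets = [
    {"created_at": "Mon Sep 30 14:02:11 +0000 2019"},
    {"created_at": "Mon Sep 30 15:10:45 +0000 2019"},
    {"created_at": "Thu Sep 19 09:33:00 +0000 2019"},
]

def tweet_date(t):
    """Parse created_at and reduce it to a calendar date."""
    return datetime.strptime(
        t["created_at"], "%a %b %d %H:%M:%S %z %Y"
    ).date()

# Tweets per day, then the single busiest day
daily = Counter(tweet_date(t) for t in tweets)
peak_day, peak_count = daily.most_common(1)[0]
print(peak_day, peak_count)  # 2019-09-30 2
```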
In Part Six we explore that largest one-day spike to test theories about who or what may be driving network activities.