Category Archives: Tools

Computational propaganda deployed in Mexico since 2012

Mexico is said to be an early adopter of computational propaganda and other social media manipulation techniques:

The country’s high level of internet access and history of corruption make it a frequent testbed for digital manipulation techniques often later seen elsewhere.

Source: To See The Future Of Social Media Manipulation In Politics, Look To Mexico
When your hypothesis is that computational propaganda is a Russia-specific phenomenon, it’s awkward to discover its widespread use in social media activities in Mexico since 2012.
Computational propaganda, like traditional propaganda methods, is used all over the world by individuals, groups, organizations, academics, media outlets and government agencies.
The root issue is the structure of social media platforms themselves.
One way to slow down viral propaganda meme networks would be to adopt an anti-spam proposal from 20 years ago: end free posting. Around 20 years ago, a proposal was put forth to stop email spam by charging a small fee for each message sent. Maybe it would be just a penny, but a fee structure would discourage the easy mass dissemination of propaganda. Or at least make it useful only to billionaires, which is perhaps what they would prefer.
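To put rough numbers on it, here is a back-of-the-envelope sketch. All of the campaign sizes below are hypothetical, chosen only to illustrate how a per-message fee scales:

```python
# Back-of-the-envelope math on the "penny per post" idea.
# All campaign sizes below are hypothetical, chosen only to illustrate scale.

FEE_PER_MESSAGE = 0.01  # USD - the small per-message fee from the old proposal

campaigns = {
    "one person posting 20 times/day for a year": 20 * 365,
    "hypothetical bot army: 1,000 bots x 100 posts/day x 30 days": 1_000 * 100 * 30,
    "hypothetical mass blast of 100 million messages": 100_000_000,
}

for name, messages in campaigns.items():
    print(f"{name}: {messages:,} messages -> ${messages * FEE_PER_MESSAGE:,.2f}")

# An ordinary user pays pocket change ($73/year); industrial-scale
# dissemination starts costing real money, which is the point.
```

An ordinary user would barely notice the fee, while a million-message campaign suddenly has a real price tag.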

Fake videos may become the next propaganda focus

The threat of fake news is about to get immeasurably worse. Start-ups and internet users are discovering ways to quickly create realistic video using artificial intelligence, which could make it hard to know what’s fact and what’s fiction.

Source: Fake videos are on the rise. As they become more realistic, seeing shouldn’t always be believing
We are nowhere close to peak propaganda. In the near future, a smartphone app will digitally map real people’s faces into fake video scenarios. Imagine a politician’s face incorporated into a fake prostitution video, or a pissed off student incorporating a school teacher’s image into a fake video of grabbing a student’s butt, or a rogue police officer “solving” a crime by creating a fake video linking a suspect to a location. It just goes on and on.
Imagine how fake stories and viral memes can be (and already are) used to exert control over others.
Now multiply that by a thousand times what we see today.
At some point, the only solution may be to turn off all media.

Two computer science students create tool to detect "bots" on Twitter

Two computer science students created a Google Chrome extension that when clicked tells you if a Twitter user appears to be a bot or not.
They claim it has 93.5% accuracy[1] (but see the footnote for a hint at some of the problems in how they came to that conclusion). It uses “machine learning” technology to attempt to identify Twitter accounts that may be automated “propaganda” accounts. Per the article, their classifier was trained on Tweets identified as left or right leaning – and accounts whose tweets they could not categorize as left or right were presumed to be bots. Or something. Regardless, that implies political views play a role in classification as a bot. Would a bot tweeting about cats be identified? Would a propaganda bot promoting backyard gardening be identified?
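The article does not spell out their implementation, so here is a minimal sketch of how this kind of tweet classifier is commonly built – a generic scikit-learn pipeline with made-up training data, not the students’ actual features, model or data:

```python
# Minimal sketch of a bag-of-words tweet classifier, assuming a generic
# scikit-learn pipeline. The tiny training set is purely illustrative;
# the students' actual features, model, and data are not described here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled tweets: 1 = suspected bot, 0 = human.
tweets = [
    "BREAKING!!! share NOW #maga #maga #maga",
    "RT if you agree!!! #resist #resist",
    "Just got back from walking the dog, beautiful day",
    "Anyone have a good sourdough starter recipe?",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(tweets, labels)

# A cat-photo bot tweeting nothing political may look "human" to a model
# trained largely on left/right political content -- the concern raised above.
print(model.predict(["cute cat photo of the day #cats"]))
```

Note how the model only knows what its training data shows it: train it on political tweets and the cat bot sails right through.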
The results could also be manipulated by users. When the bot check reports its result, you can optionally agree or disagree – and that feedback gets fed back into the classifier. A sufficient number of coordinated users could likely re-train the classifier to intentionally classify real people as bots, and bots as real people.
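As a toy illustration – my own, not the extension’s actual feedback mechanism – of why naively trusting crowd feedback can be gamed:

```python
# Toy illustration (mine, not the extension's actual code) of how crowd
# feedback can flip a label if the feedback is trusted naively.
from collections import Counter

def relabel(original, feedback):
    """Return whichever label the feedback majority favors."""
    votes = Counter(feedback)
    votes[original] += 1  # the classifier's own verdict counts as one vote
    return votes.most_common(1)[0][0]

# A coordinated group submitting enough "human" votes flips a bot's label...
print(relabel("bot", ["human"] * 50 + ["bot"] * 3))    # -> human
# ...and the same trick works in reverse against a real person:
print(relabel("human", ["bot"] * 50 + ["human"] * 5))  # -> bot
```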
Source: The College Kids Doing What Twitter Won’t | WIRED
I am not convinced that software tools can classify propaganda bots with sufficient accuracy to be useful over the long term. There will be an arms race to create better bots that appear more natural. I fear that such tools may be used to stifle speech by incorrectly – or deliberately – classifying legitimate speech as “bot” generated to have that speech throttled down or banned.
Note also that Twitter – and Facebook – profit by having emotionally engaged users reading, liking, sharing and following more people. It is not yet in their financial interest to be aggressive about shutting down bots.
Footnote
How good is 93.5% accuracy? Let’s consider a different example to understand this: the use of drug search dogs in schools to locate drugs in lockers.
Let’s say the dog has a 98% accuracy in finding drugs in a locker and a 2% false positive rate. Further, let’s assume there are 2,000 lockers in the school.
Let’s assume 1% of the students actually have drugs in their locker.
1% of 2,000 students means 20 students actually have drugs in their locker. (And with the dog’s 2% miss rate, there is a chance that one of these students will go undetected.)
In using the dog, the police will incorrectly flag about 2% (the false positive rate) of the roughly 1,980 drug-free lockers – about 40 lockers – in a school where only 20 lockers actually have drugs.
In other words, twice as many students will be falsely accused of having drugs as students who actually have drugs.
When doing broad classification searches, even a 98% accuracy rate is problematic as it may produce more false positives than true positives, which is not what you would intuitively guess when you hear “98% accuracy” – or, in this Twitter bot analysis, 93.5% accuracy.
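Worked through explicitly, with the same numbers as above:

```python
# The drug-dog example, worked through explicitly.
lockers = 2000
prevalence = 0.01           # 1% of lockers actually contain drugs
sensitivity = 0.98          # dog finds drugs 98% of the time when present
false_positive_rate = 0.02  # dog falsely alerts on 2% of clean lockers

with_drugs = lockers * prevalence     # 20 lockers
clean = lockers - with_drugs          # 1,980 lockers

true_positives = with_drugs * sensitivity      # ~19.6 correctly flagged
false_positives = clean * false_positive_rate  # ~39.6 falsely flagged

precision = true_positives / (true_positives + false_positives)
print(f"correctly flagged: {true_positives:.1f}")
print(f"falsely flagged:   {false_positives:.1f}")
print(f"share of flagged lockers that actually have drugs: {precision:.0%}")  # ~33%
```

Only about a third of the flagged lockers actually contain drugs, despite the dog’s “98% accuracy.”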
Further, in determining their 93.5% figure – while their approach is admirable and possibly the best that can be done – they compared tweets from verified Twitter users to tweets from suspected “bots” on unverified accounts. Most Twitter accounts are unverified, and they are only hypothesizing that an account is a bot when producing this metric. (FYI, I think they have done an excellent job with their work and am impressed by it; my comments should in no way be interpreted as negative comments toward these two students. For the record, I have a degree in computer science and an M.S. in software engineering, and have familiarity – but not expertise – with machine learning, classifiers and pattern matching systems.)
Indeed, as the article points out, hundreds of people have already complained to another bot checker about being falsely classified as a bot. The Wired reporter attempted to contact the account holders of a small sample of accounts identified as bots and quickly found accounts that appeared to be run by real people.
Side note: the linked article in Wired is excellent journalism, something I certainly do not see enough of! Glad to see this article!

How "Bot Armies" get Twitter hashtags trending

Of interest, a bot army is said to have “taken to Twitter” to influence social media posts. Bots generate enough Tweets that some eventually get shared and turn into actual hashtag memes passed along by real people. In this way, propaganda bots can initiate and control messaging on social media.
This is also known as “computational propaganda”. In the old days, propaganda usually required a printing press or a broadcast license. Social media made it possible for everyone to be a propagandist; computational propaganda makes that dissemination fully automated.
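As a toy simulation of that seeding dynamic – my own made-up numbers; real trending algorithms are proprietary and far more complex:

```python
# Toy model (mine, not Twitter's actual trending algorithm) of bots
# seeding a hashtag until organic sharing starts to snowball.
import random

random.seed(42)

humans = 10_000           # hypothetical reachable human users
p_share = 0.02            # chance a human reposts the hashtag when they see it
bot_posts_per_round = 500

human_shares = 0
for day in range(1, 6):
    # Each bot post, plus each prior human share, generates impressions;
    # a small fraction of exposed humans repost, so the tag looks organic.
    impressions = bot_posts_per_round + human_shares * 10
    new_shares = sum(1 for _ in range(min(impressions, humans))
                     if random.random() < p_share)
    human_shares += new_shares
    print(f"day {day}: {bot_posts_per_round} bot posts, "
          f"{human_shares} cumulative human shares")
```

Even with a tiny per-person share rate, the steady drumbeat of bot posts keeps adding real human shares, which in turn generate more impressions.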
Source: Pro-Gun Russian Bots Flood Twitter After Parkland Shooting | WIRED