What is a social bot? It’s an artificial intelligence (AI) program living on social media, mimicking the look and voice of its target audience while altering users’ perceptions of reality and influencing debate. Twitter has roughly 320 million monthly active users, and research shows that up to 15% of Twitter accounts are run by bots. That is roughly the population of Spain. Add the AI-driven users on YouTube and Instagram, and the chatbots on Tinder, Snapchat and Reddit, and what do you see? A landscape that forces us to accept that we, as humans, are not alone on social media. Once we agree on this, one question naturally arises: What do our bot neighbors do? Unsettlingly, most of the time they wait for orders. But what do those orders look like?
The notorious Cambridge Analytica case taught us many things, one being that the next advance in political influence will be the massive use of big data to learn audiences’ preferences and psychological weaknesses. These preferences are uncovered by analyzing our geographic and demographic data, who we are on the outside (age, gender, education, income); our attitudinal data, our consumer and lifestyle habits (what car we drive, what we wear and eat, what magazines we read); and our behavioral data, who we are on the inside (how we perceive the world and what drives us).
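The three data categories above can be sketched as a simple record. This is purely an illustration; the class and field names are my assumptions, not taken from any real profiling system:

```python
from dataclasses import dataclass, field

@dataclass
class AudienceProfile:
    """Illustrative sketch of the three profiling categories (names assumed)."""
    # Geographic and demographic data: who we are on the outside
    age: int
    gender: str
    education: str
    income: int
    # Attitudinal data: consumer and lifestyle habits
    car: str
    magazines: list = field(default_factory=list)
    # Behavioral data: who we are on the inside
    openness: float = 0.5  # e.g. a 0-1 personality-trait score
```

A profiling operation would fill thousands of such records from likes, purchases and browsing history, then group them into segments for targeted messaging.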
From the case, we also learned that 70 likes are enough to know more about a person than their friends do. At 150 likes, we know more about a person than their parents do, and with 300 likes, we’ve outdone a person’s partner. How long does it take a regular Facebook user to give 300 likes? Half a month? What about more active users? Once a user’s profile is in hand, waking the bots and sending each individual custom-made messages engineered, for example, to increase their polarization can be done in no time.
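A back-of-the-envelope check of the “half a month” estimate. The 20-likes-per-day rate is my assumption, not a figure from the article:

```python
likes_needed = 300    # likes at which a model reportedly outdoes a partner
likes_per_day = 20    # assumed rate for a fairly active user
days_needed = likes_needed / likes_per_day
print(days_needed)    # 15.0 -- about half a month
```

A heavier user giving 50 likes a day would cross the same threshold in under a week.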
Add to this the ever more sophisticated tools for manipulating reality, like deep fakes, AI video-synthesis algorithms that make other people appear to say things they never actually said, and you have an even larger problem. In the near future, “deep fake” could become a standard phrase of denial, allowing bad actors to dismiss any inconvenient statement as “deep-faked.”
There are also filter bubbles, algorithm-imposed news biases that flood us with content supporting our current opinions and beliefs, narrowing our view of the world; echo chambers, closed communication systems that amplify user-specific “single versions of truth” without ever exposing us to alternative views; and laser phishing, algorithms that learn to mimic the communication styles of our friends and relatives. With all of this taking place, a phenomenon Aviv Ovadya has called “reality apathy” is not far away: overwhelmed by the volume of fake realities around them, people may eventually stop paying attention to the news altogether. That is when democracy becomes unstable.
Now, it’s not that bots run around doing whatever they like. Positive technological developments are trying to prevent their wrongdoing. The biggest online platforms are growing increasingly concerned, and we are entering an artificial intelligence arms race in which one good AI system tries to find, and blow the whistle on, the breadcrumbs left by a different, bad AI system.
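The “good AI” side of that arms race often starts from simple account signals. The following is a toy illustration under my own assumptions, not any platform’s actual detection method; the thresholds and weights are invented:

```python
def bot_score(posts_per_day, avg_seconds_between_posts, followers, following):
    """Toy heuristic: combine a few account signals into a 0-1 'bot-likeness'
    score. All thresholds and weights are illustrative, not from any real system."""
    score = 0.0
    if posts_per_day > 50:                   # inhuman posting volume
        score += 0.4
    if avg_seconds_between_posts < 5:        # near-instant replies, around the clock
        score += 0.3
    if following > 10 * max(followers, 1):   # follows far more accounts than follow it
        score += 0.3
    return score

# A hyperactive account that fires on all three signals scores close to 1.0,
# while a typical human pattern scores 0.0:
print(bot_score(200, 2, 10, 5000))   # all three signals fire
print(bot_score(3, 3600, 500, 400))  # none fire
```

Real detectors replace hand-picked thresholds with classifiers trained on thousands of features, which is exactly why the bad AI systems keep adapting to evade them.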
However, no matter how hard we try to curb the wrongdoing of bad actors, it all seems to boil down to the human factor: social media users themselves. It’s in our nature to react to sensational information, and debunking does little to convince those who already believe false information.
That’s why we need to work hard to raise awareness of how to deal with manipulation attempts. Some well-known tips include the following:
1. Don’t jump to conclusions. Hitting that “share” button is easy, but you need to appreciate the potential consequences first. Once someone sees an outrageous story from someone they trust, they are far more likely to pass it along without questioning it.
2. Consider the source. If it doesn’t look familiar or if you see a given source for the first time, pay attention. Check where it is (and isn’t) online. Verified information will be quickly featured on well-established sources. Visit one or two of your most-trusted websites to make sure they also share the story.
3. Remember that if it sounds too good or too crazy to be true, it probably is. Bookmark a fact-checking website and visit it often. These sources are great at debunking false news that has gained traction online.
All in all, if we work hard to question the media we consume, we can do our part in this battle against social bots.