Social networks have become a powerful tool for hijacking the attention of the masses and manipulating our mindsets and worldviews. Political interest groups, commercial organizations, and foreign governments use these platforms to influence the way we view the world.
The combination of personal user data and AI is fertile ground for crowd manipulation, which is essentially “mind hacking”, and should be treated just like illegal computer hacking.
There are potential solutions to this problem, among them the concept of using AI to protect ourselves from manipulation of our mindsets and worldviews online. For example, https://perspectiveguard.com uses AI to analyze what you see on social networks (posts, tweets, comments, sponsored content, and targeted ads).
Services like Facebook, Twitter, YouTube, Instagram, and Reddit have become a cornerstone of our everyday lives. From keeping in touch with friends and reading news to discovering vacation destinations and enjoying original content, the social media revolution has been mostly beneficial for the user.
When Zuckerberg started Facebook back in 2004, it addressed a real need in the world – connecting everybody and making communication and sharing better.
Many predicted that the social revolution would allow individuals to spread ideas and worldviews, reach like-minded people, and essentially democratize the information ecosystem, which back then was controlled by a handful of media outlets and news companies.
Since then, Facebook and Twitter have played a major role in many political events on a global scale: sparking social movements, helping organize protests, overthrowing totalitarian regimes, and overall making more voices heard.
The negative aspects came into play when the ads business model was introduced. That model is all about stickiness: drawing more eyeballs, increasing conversion, and maximizing revenue. It pushes the users' interests aside and changes the incentives and moral compass of those companies.
The merger of AI and ads is only natural, and it has proven effective at serving the right ad, at the right time, to the right person, based on multi-parameter prediction.
Ad-serving AIs constantly monitor our interests and behaviour, keeping a detailed record of our personality profile.
Today’s profiling AI arguably understands us better than we understand ourselves. Evidence of that can be found in the phenomenon of users complaining about Facebook showing surprisingly specific ads, followed by claims that Facebook is recording their voice without consent (https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-listening-listen-does-is-microphone-app-phone-mark-zuckerberg-a8299281.html).
The reality is probably simpler, and can be explained by a combination of a large amount of personal data gathered legally, external data sources, and extremely accurate AI predictions.
Nevertheless, those incidents highlight the issues that arise when AI meets users and content delivery.
Google, Facebook, and Twitter pour billions into research and development of their data and AI infrastructure and employ hundreds of PhDs, so it’s easy to see how they get surprisingly strong results.
Fake news is not the real problem
Since the 2016 US election, we’ve been exposed to everything from political controversy to foreign governments weaponizing social networks and conducting direct crowd manipulation.
One could say the conditions were right for the symptoms to appear: the fertile ground of AI-powered social media and an abundance of personal data made it easy for adversaries to take advantage of the opportunity.
Fake news and tweet farms are being used at scale to inject carefully engineered content that reaches the right people at the right time and has the desired effect.
These methods are no different from the targeted advertising methods we’re exposed to on a daily basis.
The real problem in our digital realm is the sheer amount of personal data and AI technology that Google and Facebook own, which is used mainly to make more ad money.
Fighting fake news is a shallow, symptomatic approach that blames the immediate suspect and tries to fix the most obvious issue, pulling the spotlight away from the deeper questions we should ask Facebook, Google, and Twitter.
It’s easy to imagine how, even without fake news and tweet bots spreading inflammatory content, sophisticated adversaries could find ways to use real news, highlight real tweets, and spread divisive and contentious messages that, once again, reach the right people at the right moment and have the desired effect of targeted social engineering.
Your mind is hackable
Our brain is a decision-making machine, processing sensory inputs from the world and trying to build an accurate mental model to act upon.
Once our informational intake includes Google, Facebook and Twitter, our input can be biased, and so are our worldviews, beliefs, mindset and behaviour.
Human-hacking, or brain-hacking, is a term coined by Dr. Yuval Noah Harari to describe a world where AI can understand the internal workings of the human psyche in high detail and use it for its own purposes.
(For a full interview: https://www.wired.com/story/artificial-intelligence-yuval-noah-harari-tristan-harris/).
When discussing the concept of hacking the human mind, we’re not talking about hypnosis-like mind control as portrayed in sci-fi novels and Hollywood films.
The real danger is subtle, subconscious exposure to carefully engineered messages that can affect our worldviews and our attitudes toward products, topics, and groups of people.
For example, a recent Harvard publication claims that 95% of our purchasing decisions are subconscious (https://www.inc.com/logan-chierotti/harvard-professor-says-95-of-purchasing-decisions-are-subconscious.html).
Back in 2014, Facebook was caught conducting psychological experiments on its users (Facebook Tinkers With Users’ Emotions in News Feed Experiment, Stirring Outcry), which shows the extent to which these companies invest in understanding human psychology, their willingness to tamper with it, and the ease with which users can be manipulated unconsciously.
What can be done?
Critical thinking and fact-checking are extremely important for validating our beliefs, but as history and research indicate, people are far more susceptible to social engineering and emotional manipulation than we tend to think. Even the most logical of us can hold irrational, contradictory, and sometimes harmful worldviews.
Protecting our mind is hard, simply because we know less than our adversaries about the inner workings of our own brains and psychology.
A more practical approach is to use a new kind of AI on the user’s side to counter the AI used by social networks and search engines.
While the incentive of the companies’ AI is to increase ad revenue by any means, user-side AI can monitor the feed posts, suggestions, and search results presented to the user, and take action when there is a manipulation attempt.
The Perspective Guard Project
Several months ago – after digesting what was happening in social media, the Cambridge Analytica social engineering, new trends in AI, and how the future could evolve according to Yuval Noah Harari, Sam Harris, and others – a couple of friends and I started thinking about ways to build tools to protect ourselves and others from potential worldview manipulation online. So we started working on a side project we called Perspective Guard.
We’ve spent some time researching the personality theories involved in Cambridge Analytica (especially the Big Five personality traits model: https://en.wikipedia.org/wiki/Big_Five_personality_traits) and the practical ways society is becoming more polarized and hostile toward opposing groups, focusing mainly on phenomena like outrage culture, tribalism, hate speech, fake news, and other threats.
What we came up with is an AI solution that runs inside the browser, analyzing posts, tweets, subreddits, and other textual content users are exposed to. The AI tracks the nature of that content, making sure you’re not exposed to suspiciously high levels of content with a negative influence on your mindset and worldviews.
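To make the idea concrete, here is a deliberately simplified sketch of the kind of client-side exposure tracking described above. This is not Perspective Guard’s actual implementation: the names (`divisivenessScore`, `ExposureTracker`), the tiny keyword lexicon, and the threshold values are all illustrative assumptions. A real system would use a trained language model rather than keyword matching, but the overall shape – score each piece of content, keep a rolling window, and warn when average exposure gets suspiciously high – is the same.

```typescript
// A minimal lexicon of emotionally charged / divisive terms (illustrative only;
// a production system would use a learned classifier, not a word list).
const DIVISIVE_TERMS = ["traitor", "destroy", "enemy", "disgrace", "liar"];

// Score a single piece of text: the fraction of its words that match the lexicon.
function divisivenessScore(text: string): number {
  const words = text.toLowerCase().split(/\W+/).filter(w => w.length > 0);
  if (words.length === 0) return 0;
  const hits = words.filter(w => DIVISIVE_TERMS.some(t => w.includes(t))).length;
  return hits / words.length;
}

// Track a rolling window of recently seen feed items and flag when the average
// score crosses a threshold -- i.e. "suspiciously high" exposure over time,
// rather than reacting to any single post.
class ExposureTracker {
  private scores: number[] = [];
  constructor(private windowSize = 50, private threshold = 0.1) {}

  // Observe one feed item; returns true when the user should be warned.
  observe(text: string): boolean {
    this.scores.push(divisivenessScore(text));
    if (this.scores.length > this.windowSize) this.scores.shift();
    const avg = this.scores.reduce((a, b) => a + b, 0) / this.scores.length;
    return avg > this.threshold;
  }
}
```

In a browser extension, `observe` would be called from a content script for each post or tweet as it appears in the DOM; averaging over a window keeps one angry post from triggering a warning while sustained exposure does.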