How the mighty have fallen. A decade ago, social media was celebrated for sparking democratic uprisings in the Arab world and elsewhere. Now the front pages are filled with stories about these platforms’ role in misinformation, corporate conspiracy, malfeasance, and risks to mental health. In a 2022 Pew Research Center survey, Americans blamed social media for the coarsening of our political discourse, the spread of misinformation, and the rise in partisan polarization.
Today, the hot topic is artificial intelligence. Like social media, it has the potential to change the world in many ways, some of them good for democracy. But at the same time, it has the potential to do real damage to society. There is a lot we can learn from the unregulated evolution of social media over the past decade that applies directly to AI companies and technologies. These lessons can help us avoid making the same mistakes with AI that we made with social media. In particular, five fundamental attributes of social media have harmed society.
AI shares these same attributes. Note that they are not intrinsically evil. They are all double-edged swords, with the potential to do good or ill. The danger comes from who wields the knife and in which direction it is brandished. This has been true for social media, and it will be true for AI. In both cases, the solution lies in limits on the technology’s use.
#1: Advertising
The role that advertising plays on the Internet came about more by accident than anything else. When commercialization first came to the Internet, there was no easy way for users to make micropayments to do things like view a web page. Furthermore, users were accustomed to free access and did not accept subscription models for services. Advertising was the obvious business model, although never the best. And it’s the model that social media is also based on, which leads it to prioritize engagement over everything else.
Both Google and Facebook believe AI will help them maintain their grip on the 11-figure (yes, 11-figure) online ad market, and tech giants that have traditionally relied less on advertising, like Microsoft and Amazon, believe AI will help them capture a larger share of it.
Big Tech needs something to persuade advertisers to keep spending on its platforms. Despite bombastic claims about the effectiveness of targeted marketing, researchers have long struggled to demonstrate where and when online ads actually have an impact. When big brands like Uber and Procter & Gamble recently reduced their digital ad spending by hundreds of millions, they claimed it didn’t affect their sales at all.
Industry leaders say AI-powered ads will be much better. Google promises that its AI can adjust your ad copy in response to what users search for and that its algorithms will configure your campaigns to maximize success. Amazon wants you to use its image-generating AI to make your toaster’s product pages look cooler. And IBM is confident that its Watson AI will make ads better.
These techniques verge on manipulation, but the biggest risk to users comes from advertising within AI chatbots. Just as Google and Meta embed ads in their search results and feeds, AI companies will be pressured to embed ads in conversations. And because those conversations will be relational and human-like, they could be more damaging. While many of us have gotten good at skimming past the ads on Amazon and Google results pages, it will be much harder to tell whether an AI chatbot is mentioning a product because it is a good answer to your question or because the AI’s developer received a commission from the manufacturer.
#2: Surveillance
Social media’s reliance on advertising as the primary way to monetize websites has led to personalization, which has led to increased surveillance. To convince advertisers that social platforms can tailor ads to be as appealing as possible to each person, the platforms must demonstrate that they can collect as much information about those people as possible.
It’s hard to overstate the amount of spying that’s going on. A recent Consumer Reports analysis of Facebook — just Facebook — showed that each user has more than 2,200 different companies spying on their web activities. AI-based platforms that are supported by advertisers will face all of the same perverse and powerful market incentives that social platforms face. It’s easy to imagine that a chatbot operator could charge a premium if it could claim to target users based on their location, preference data, or past chat history and persuade them to buy products.
The potential for manipulation will only grow as we come to depend on AI for personal services. One of the promises of generative AI is the prospect of a personal digital assistant advanced enough to act as your advocate with others and as your butler. This requires more intimacy than you have with your search engine, email provider, cloud storage system, or phone. You’ll want it with you at all times, and to work most effectively it will need to know everything about you. It will act like a friend, and you are likely to treat it as one, mistakenly trusting its discretion.
Even if you choose not to volunteer details of your lifestyle and preferences to an AI assistant, AI technology may make it easier for companies to learn about you. Early demonstrations illustrate how chatbots can be used to surreptitiously extract personal data by asking seemingly mundane questions. And as chatbots are increasingly integrated into everything from customer service systems to basic website search interfaces, exposure to this kind of inferential data collection may become unavoidable.
#3: Virality
Social media allows any user to express any idea with the potential for instantaneous global reach. A great public speaker on a podium might reach a few hundred people on a good night. A kid with the right joke on Facebook can reach a few hundred million people within minutes.
A decade ago, technologists hoped this kind of virality would bring people together and grant access to suppressed truths. But as a structural matter, it’s in a social network’s interest to show you the things you’re most likely to click on and share, and the things that will keep you on the platform.
In practice, this usually means content that is outrageous, scandalous, and provocative. Researchers have found that content expressing the most animosity toward political opponents gets the most engagement on Facebook and Twitter. And this incentive for outrage drives and rewards misinformation. As Jonathan Swift once wrote, “Falsehood flies, and the truth comes limping after it.” Academics seem to have confirmed this in the case of social media; people are more likely to share false information, perhaps because it seems more novel and surprising. And unfortunately, this kind of viral misinformation has been pervasive.
AI has the potential to supercharge the problem because it makes content production and propagation easier, faster, and more automatic. Generative AI tools can fabricate an endless supply of falsehoods about any individual or topic, some of which will go viral. And those lies can be amplified by social accounts controlled by AI bots, which can share and launder the original misinformation at any scale.
Incredibly powerful text generators and autonomous AI agents are already starting to make their presence felt on social media. In July, researchers at Indiana University revealed a botnet of more than 1,100 accounts on Twitter (now X) that appeared to be operated using ChatGPT. AI will help reinforce viral content that emerges from social media. It will be able to create websites and web content, user reviews, and smartphone apps. It will be able to simulate thousands, or even millions, of fake personas to give the false impression that an idea, a political position, or the use of a product is more common than it really is. What we might perceive as a vibrant political debate could be bots talking to bots. And these capabilities won’t be available only to those with money and power; the AI tools required for all of this will be easily accessible to everyone.
#4: Lock-in
Social media companies try hard to make it difficult to leave their platforms. This doesn’t just mean you’ll miss out on conversations with your friends. They make it difficult for you to take your saved data — connections, posts, photos — and transfer it to another platform. Every moment you invest in sharing a memory, reaching out to an acquaintance, or curating your followers on one social platform adds a brick to the wall you would otherwise have to climb to move to another platform.
This concept of “lock-in” is not unique to social media. Microsoft has cultivated proprietary document formats for years to keep you using its flagship product, Office. Your music service or e-book reader makes it difficult to transfer the content you purchased to a rival service or reader. And if you switch from an iPhone to an Android device, your friends might make fun of you. But social media takes this to a new level. No matter how bad it is, it’s very difficult to leave Facebook if all your friends are there.
Coordinating everyone’s move to a new platform is incredibly difficult, which is why hardly anyone does it. Likewise, companies creating AI-powered digital personal assistants will make it difficult for users to transfer that customization to another AI. If an AI personal assistant saves you significant time, it will be because it knows the details of your life as well as a good human assistant would; would you want to give that up and start over with another company’s service? In extreme cases, some people have formed close, perhaps even familial, bonds with AI chatbots. If you think of your AI as a friend or a therapist, that can be a powerful source of lock-in.
Lock-in is an important concern because it results in products and services that are less responsive to customer demand. The harder it is for you to switch to a competitor, the worse the company may treat you. In the absence of any way to force interoperability, AI companies have less incentive to innovate on features or compete on price, and fewer scruples about engaging in surveillance or other bad behavior.
#5: Monopolization
Social platforms often start out as great products, genuinely useful and revelatory for their consumers, before they begin to monetize and exploit those users for the benefit of their business customers. Then the platforms claw back the value for themselves, turning their products into truly miserable experiences for everyone. This is a cycle that Cory Doctorow has written about powerfully and traced through the histories of Facebook, Twitter, and, more recently, TikTok.
The reason for these outcomes is structural. The network effects of technology platforms push a few companies to become dominant, and lock-in ensures their continued dominance. The incentives in the technology sector are so spectacularly, blindingly powerful that they have allowed six megacorporations (Amazon, Apple, Google, Meta, Microsoft, and Nvidia) to each command a trillion dollars or more in market value. These companies use their wealth to block any meaningful legislation that could reduce their power. And they sometimes collude with one another to grow even stronger. This cycle is clearly starting to repeat itself in AI. Just look at the industry’s poster child, OpenAI, whose flagship offering, ChatGPT, continues to set adoption and usage records. Within a year of the product’s launch, OpenAI’s valuation had skyrocketed to about $90 billion.
OpenAI once seemed like an “open” alternative to the megacorporations — a common carrier for AI services with a nonprofit social mission. But the debacle of Sam Altman’s firing and rehiring in late 2023, and Microsoft’s central role in restoring Altman as CEO, simply illustrated how venture funding from the familiar ranks of the tech elite permeates and controls corporate AI. In January 2024, OpenAI took a big step toward monetizing its user base by introducing the GPT Store, where one OpenAI customer can charge another to use custom versions of OpenAI’s software; the company, of course, collects revenue from both parties. This sets in motion the very cycle Doctorow warns about.
Amid this spiral of exploitation, little or no attention is paid to the externalities imposed on the general public — people who are not even using the platforms. Even after society has spent years struggling with their harmful effects, monopolistic social networks have had virtually no incentive to control their products’ environmental impact, their tendency to spread misinformation, or their harmful effects on mental health. And governments have applied virtually no regulation toward those ends.
Likewise, there are few or no safeguards in place to limit the potential negative impact of AI. Facial recognition software that amounts to racial profiling, simulated public opinion supercharged by chatbots, fake videos in political ads – all of this persists in a legal gray area. Even clear violators of campaign advertising law could, some argue, be let off the hook if they simply do it with AI.
Mitigating the risks
The risks that AI poses to society are surprisingly familiar, but there’s one big difference: it’s not too late. This time, we know it’s all coming. Fresh from our experience with the harm caused by social media, we have all the warning we need to avoid making the same mistakes. The biggest mistake we made with social media was leaving it as an unregulated space.
Even now — after all the studies and revelations of social media’s negative effects on children and mental health, after Cambridge Analytica, after the exposure of Russian interference in our politics, after everything else — social media in the US remains largely an unregulated “weapon of mass destruction.” Members of Congress receive millions of dollars in contributions from Big Tech, and some lawmakers even invest millions of their own dollars in these companies, but passing laws that limit or penalize their behavior seems a bridge too far.
We cannot afford to repeat that mistake with AI, because the stakes are even higher. The harm social media can cause stems from how it affects our communication. AI will affect us in the same ways and in many more. If Big Tech’s trajectory is any indication, AI tools will be increasingly involved in how we learn and how we express our thoughts. But these tools will also influence how we schedule our daily activities, how we design products, how we write laws, and even how we diagnose diseases. The expansive role of these technologies in our daily lives gives for-profit companies opportunities to exert control over more aspects of society, and that exposes us to the risks arising from their incentives and decisions.
The good news is that we have a whole category of tools for modulating the risks that corporate behavior poses to our lives, starting with regulation. Regulations can come in the form of restrictions on activity, such as limitations on which kinds of companies and products can incorporate AI tools. They can come in the form of transparency rules, requiring disclosure of which datasets are used to train AI models or which new pre-production models are being trained. And they can come in the form of oversight and accountability requirements, allowing for civil penalties when companies break the rules.
The biggest lever governments have over technology companies is antitrust law. Despite what many lobbyists would have you think, one of the main functions of regulation is to preserve competition, not to make life difficult for companies. It is not inevitable that OpenAI will become another Meta, an 800-pound gorilla whose user base and reach are several times those of its competitors. In addition to strengthening and enforcing antitrust law, we can introduce regulation that supports competition norms specific to the technology sector, such as data portability and device interoperability. This is another key strategy for resisting monopoly and corporate control.
Additionally, governments can enforce existing regulations on advertising. Just as the US regulates which media can and cannot host advertisements for sensitive products such as cigarettes, and just as many other jurisdictions exercise strict control over the timing and form of politically sensitive advertising, so too could the US limit the engagement between AI providers and advertisers.
Finally, we should recognize that developing and providing AI tools does not have to be the sovereign domain of corporations. We, the people and our government, can do this too. The proliferation of open-source AI development in 2023, successful enough to scare corporate players, is proof of this. And we can go further, calling on our government to build public AI tools developed with political oversight and accountability within our democratic system, where the dictates of the profit motive do not apply.
Which of these solutions is most practical, most important, or most urgently needed is up for debate. We should have a vibrant societal dialogue about whether and how to use each of these tools. There are many paths to a good outcome. The problem is that this is not happening now, especially in the US. And with a presidential election looming, conflicts spreading alarmingly across Asia and Europe, and a global climate crisis, it’s easy to imagine that we won’t be able to get a handle on AI any faster than we have (not) managed social media. But it’s not too late. These are still the early years of practical consumer AI applications. We must and can do better.
Nathan E. Sanders is a data scientist affiliated with the Berkman Klein Center at Harvard University. Bruce Schneier is a security technologist and a fellow and lecturer at the Harvard Kennedy School.
(Source: MIT Technology Review)