Lead or be led by AI?

One of the best recent cartoons about AI shows, on one side, a happy person saying "AI transforms a few topics into a long email that I appear to have written", while, on the other side, the recipient of the email is equally happy, because an AI "summarizes long emails that I pretend to have read into bullet points". The cartoon is by Tom Fishburne, of Marketoonist, and captures a reality of the internet in the era of Artificial Intelligence.

The beginning: websites and search engines

A common strategy in journalism is to open a report with a person's story. An example: "Maria da Penha, from Campinas, woke up early and, as she got up, heard her clock radio suggest a coffee… This shows how a song can have an impact", and so on.

For readers, this brings the topic closer to everyday life and helps comprehension. However, in the rush to write texts that rank higher in search engines, writers and agencies specialized in SEO (Search Engine Optimization) began using techniques such as repeating the opening text, forcing links and stretching the reader's time on the page. Pages rise in the rankings not because of their quality, but because of the way they hijack attention and navigation.

What would be considered wordy or confusing among people is taught as a ranking technique to manipulate search engines. The higher a page appears in search results, the more clicks and users it retains, generating revenue for the author. Monetization comes not from genuine interest or the quality of the content, but from techniques to please the machines.

Social networks and influencers

On social media, techniques such as the "growth formula" and other myths push content creators to post several times a day, chained to a playbook that imprisons them in the algorithm. All because someone, supposedly, discovered the secret of the algorithm and now sells courses on the topic.

This algorithm is not in the service of informing, but of monetizing interactions with advertising.

Many content creators feel anxious and exhausted from having to follow such a manual. Dozens of influencers have given up in recent years, not because they are tired of trying to inform people, but because they are worn out from trying to please the algorithms.

Professional networks

On professional networks, where more in-depth articles are left aside, guides preach formats to attract the attention of headhunters, companies, prospects, bosses, co-workers and clients: recommended photos, graphics, bulleted text, icons and comments. Everything to please the algorithm and push your posts to more followers.

One example is placing a link in the first comment instead of in the body of the original post, not to make life easier for readers, but to please the machine. It's no surprise that people comment on other people's posts simply with "commenting to increase reach".

Another example is cutting content out of context to ride popular themes. To piggyback on the buzz around SaaS (software as a service) and company valuation, a popular technology channel published, under the title "The end of SaaS", a post comparing the variation in these companies' share prices. As if there were a correlation, the authors chart a 3-month window in which the shares of Adobe, Salesforce, Monday and ServiceNow fall between 5% and 20%. The post reaches thousands of people without noting that the companies have very different businesses, or that, in 2 years, Adobe saw its shares jump from 300 to 600 dollars after changing its model to SaaS.

The investigation a good journalist would conduct was traded for the pursuit of popularity through obedience to the algorithm.

Press office

One of the big businesses for press offices is writing and responding on behalf of executives on social media. Companies hire services to write, moderate and reply in the name of directors and presidents. The themes are the popular ones: sleep, sport, vacations, mistakes, lessons from their young children, leadership, among others. Blunt opinions are always avoided, so as not to displease the platforms or harm popularity.

In the book Guardrails: Guiding Human Decisions in the Age of AI, Urs Gasser and Viktor Mayer-Schönberger investigate initiatives to ensure that AI is responsible, ethical and beneficial to society. The rules prevent the generation of harmful content, but they also limit the diversity of opinions.

The result of serving machines to chase popularity is content that mixes common sense with leadership and self-help books.

How Journalism forgot its readers

A large-circulation Brazilian newspaper, in the subscriber version of its news app, ran, on the same day, the following headlines (paraphrased): "University of Oxford opens admissions in Brazil, see where", "How long will the subway strike last? See what the union decided", "Sírio opens a college with simulators, find out what it will be like", "Monkey extinction raises alert for another species, find out which one".

In their eagerness to gain clicks, newspapers practice so-called clickbait, even for subscribers.

Prioritizing the metric of pages viewed creates more banner slots and monetization possibilities. In the end, they write not for the readers, but for the machine.

With the advent of generative AI, it is increasingly common to see texts ending with "text generated with the help of Artificial Intelligence". Journalists traded investigation time for a robot stuffed with common sense, which certainly won't step outside the guardrails.

Opportunities for people to be followed by AI

AI is the union of computing and statistics. Although created over 40 years ago, it has achieved enormous success in the last 2 years with generative AI. This power opens up infinite possibilities and discussions in society.

Instead of following AI, people can have AI extend their capabilities.

These “bionic arms” do not need to follow Big Tech algorithms to monetize with ads nor are they based on the pursuit of popularity.

AI can enable people-to-people technology.

There are applications that use AI to track cell phone usage patterns and identify signs of depression. In Medicine, AIs work alongside doctors and psychologists, discovering patterns and revealing anomalies.

A 2011 Israeli study of parole judges found that they handed down harsher verdicts when they were hungry: while they denied about 35% of parole requests after breakfast, they denied more than 85% just before lunch. Using this information, courts and judges could build AIs that help them better understand their decisions and possible biases.

Marketing professionals can spend less on media buying by replacing persona-based segmentation (demographic characteristics) with propensity groups: users grouped by their predicted likelihood to act.
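As a minimal sketch of what a propensity group could look like in practice, the snippet below fits a simple logistic model of conversion and buckets users by predicted propensity rather than by demographic persona. All features, weights and thresholds here are invented for illustration; they do not come from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented behavioral features per user (e.g. visits, recency, cart adds);
# purely illustrative, not from the article.
X = rng.normal(size=(1000, 3))
true_w = np.array([1.5, -0.8, 0.4])
y = (1 / (1 + np.exp(-(X @ true_w))) > rng.random(1000)).astype(float)

# Fit a logistic model P(act | behavior) by plain gradient descent.
w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(y)

# Bucket users by predicted propensity instead of by demographic persona.
scores = 1 / (1 + np.exp(-(X @ w)))
groups = np.digitize(scores, [0.33, 0.66])  # 0 = low, 1 = medium, 2 = high
```

Targeting the "high" bucket first is the point of the technique: media spend follows predicted behavior, not age or gender brackets.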

Also in Marketing, AI already helps answer complex questions, such as how much investment in television contributes to e-commerce conversion, or how a physical store influences digital purchases.

Analytics models with RBA (regression-based attribution) help Marketing professionals depend less on the last click in search and understand the real value of media exposure elsewhere, such as video and influencers.
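The core idea of regression-based attribution can be sketched in a few lines: regress conversions on per-channel exposures and read each coefficient as that channel's incremental contribution, instead of crediting everything to the last click. The channels and numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical weekly data: ad exposures per channel. Illustrative only.
n_weeks = 104
exposures = rng.poisson(lam=[50, 30, 20], size=(n_weeks, 3)).astype(float)

# Simulated conversions where video and influencers matter, not only search.
true_contrib = np.array([0.4, 0.9, 0.6])
conversions = exposures @ true_contrib + rng.normal(0, 2.0, n_weeks)

# Regression-based attribution: least squares estimates each channel's
# incremental conversions per exposure across all touchpoints.
coef, *_ = np.linalg.lstsq(exposures, conversions, rcond=None)
attribution = dict(zip(["search", "video", "influencer"], coef))
```

In this toy setup the regression recovers the fact that video drives more conversions per exposure than search, which a last-click model would miss entirely.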

In Medicine, drugs can be tested faster when AI simulates complex scenarios, "shortening the time" needed to understand their effects. The ideas still belong to the scientists, who create hypotheses and experiments. AI gives them "bionic arms" even over time.

Yuval Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, said: "The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated Artificial Intelligence of computers may only serve to empower the natural stupidity of humans."

May we make better choices this time.

(source: MIT Technology Review)