Here are our predictions for AI this year

In December, our small but mighty artificial intelligence reporting team was challenged by editors to make a prediction: What’s next for AI?

In 2024, the technology contributed both to Nobel Prize-winning advances in chemistry and to a mountain of cheap content that few people asked for but that flooded the internet anyway; take the “Shrimp Jesus” images generated by these models as an example. Greenhouse gas emissions also rose last year, driven in part by the spread of energy-intensive AI. With the challenge in hand, our team began to think through how all of this will unfold next year.

As we look ahead, some things are certain. We know that agents — AI models that do more than just talk to you and can actually carry out tasks on their own — are a focus for many companies today. Building them will raise privacy questions about how much of our data and preferences we are willing to share in exchange for tools that (supposedly) save us time. Similarly, the push to make AI faster and more energy efficient is putting so-called small language models in the spotlight.

We wanted, however, to focus on less obvious predictions. Mine included how AI companies — which previously avoided working in defense and national security — could be tempted by Pentagon contracts, and how Donald Trump’s stance toward China could intensify the global race for the best semiconductors. Read the full list.

What this story doesn’t show is that other predictions were far less straightforward. Debating them, we wondered whether 2025 will be the year of intimate relationships with chatbots, of AI love triangles, or of traumatic breakups with the technology. To hear the outcome of our team’s lively debates (and more about what didn’t make the list), join our next LinkedIn Live this Thursday, January 16. I’ll discuss all of this with Will Douglas Heaven, our senior AI editor, and our news editor, Charlotte Jee.

There are a few other things I will also be watching closely this year. One of them is how little the big names in AI — such as OpenAI, Microsoft and Google — disclose about the environmental impact of their models. Plenty of evidence suggests that asking an AI model like ChatGPT about settled facts, such as the capital of Mexico, consumes far more energy (and generates more emissions) than running the same query through a search engine. Even so, in recent interviews Sam Altman of OpenAI has spoken favorably about the idea of ChatGPT replacing the googling habit we have all picked up over the past two decades. In fact, this is already happening.

The environmental cost of all this will be at the top of my concerns in 2025, as will the possible cultural cost. We will go from searching for information by clicking on links and (hopefully) evaluating sources to simply reading the answers that AI search engines present to us. As our editor-in-chief Mat Honan put it in his article on the subject: “Who wants to have to learn when you can just know?”

Deeper learning

What’s next for our privacy?

The US Federal Trade Commission (FTC) has taken a series of enforcement actions against data brokers, some of which tracked and sold users’ geolocation data in sensitive locations such as churches, hospitals and military installations, without explicit consent. While limited in scope, these actions may offer some new and improved protections for Americans’ personal information.

Why this matters: Consensus is growing that Americans need better privacy protections — and that the best way to achieve them would be for Congress to pass comprehensive federal privacy legislation. Unfortunately, that won’t happen anytime soon. Enforcement actions by agencies like the FTC, however, may be the best alternative at this time.

(Source: MIT Technology Review)