Four trends that changed AI in 2023

2023 was one of the wildest years the AI industry has seen in a long time: endless product launches, boardroom drama, intense policy debates about AI doom, and a race to find the next big thing. But we also saw concrete tools and policies aimed at making the AI industry behave more responsibly and holding powerful players accountable. That gives me a lot of hope for the future of AI.

Here’s what 2023 taught me:

Generative AI is out of the lab with a vengeance, but it’s unclear where it’s going next

Last year started with Big Tech going all in on generative AI. The overwhelming success of OpenAI’s ChatGPT prompted every major technology company to release its own version. 2023 may go down in history as the year with the most AI launches: Meta’s LLaMA 2, Google’s Bard and Gemini, Baidu’s Ernie Bot, OpenAI’s GPT-4, and several other models, including one from a French open-source challenger, Mistral.

But despite the initial enthusiasm, we haven’t seen any AI application become an overnight success. Microsoft and Google launched AI-powered search, but it turned out to be more of a flop than a killer app. The fundamental flaws of language models, such as the fact that they frequently make things up, led to some embarrassing (and, let’s be honest, hilarious) gaffes. Microsoft’s Bing would often answer people’s questions with conspiracy theories, and it suggested that a New York Times reporter leave his wife. Google’s Bard generated a factually incorrect answer in its own marketing campaign, which wiped $100 billion off the company’s market value.

There is now a frantic hunt for a popular AI product that everyone will want to adopt. Both OpenAI and Google are experimenting with letting companies and developers create custom AI chatbots, and with letting people build their own AI-powered apps without needing coding skills. Perhaps generative AI will eventually be folded into boring but useful tools that help us boost our productivity at work. It could take the form of AI assistants (perhaps with voice capabilities) and coding support. This year will be crucial in determining the real value of generative AI.

We’ve learned a lot about how language models actually work, but we still know very little

Even though technology companies are rushing large language models into products at a frantic pace, there is still a lot we don’t know about how they work. They make things up and have serious gender and ethnic biases. In 2023 we also learned that different language models generate text with different political biases, and that they make great tools for hacking people’s private information. Text-to-image models can be prompted to produce copyrighted images and photos of real people, and they can easily be tricked into generating disturbing images. It’s been great to see so much research into the flaws of these models, because it could take us a step closer to understanding why they behave the way they do, and ultimately to fixing them.

Generative models can be very unpredictable, and this year there were many attempts to make them behave as their creators intend. OpenAI has shared that it uses a technique called reinforcement learning from human feedback (RLHF), which uses feedback from users to help guide ChatGPT toward more desirable answers. A study from the AI lab Anthropic showed how simple natural-language instructions can steer large language models toward less toxic outputs. But unfortunately, many of these attempts end up being quick fixes rather than permanent ones. There are also misguided approaches, such as banning seemingly innocuous words like “placenta” from image-generating AI systems to keep them from producing gore. Tech companies come up with workarounds like these because they don’t know why the models generate the content they do.
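To make the idea behind RLHF a little more concrete, here is a minimal, heavily simplified sketch in Python. It is not OpenAI’s pipeline: real systems train a neural reward model on human preference comparisons and then update the language model with a reinforcement-learning algorithm such as PPO. The functions below (`reward_model`, `generate_candidates`) are hypothetical placeholders that only illustrate the loop of learning a reward signal from human preferences and then steering generation toward it.

```python
# Toy illustration of the idea behind reinforcement learning from human feedback.
# Everything here is simplified: the data is invented and the "reward model" is a
# placeholder heuristic rather than a trained neural network.

# Step 1: humans compare pairs of model answers to the same prompt and record
# which one they preferred. A real dataset has many thousands of comparisons.
preference_data = [
    {
        "prompt": "Explain photosynthesis.",
        "chosen": "Plants use sunlight, water, and CO2 to make sugars and oxygen.",
        "rejected": "Photosynthesis is when plants eat dirt to grow faster.",
    },
]

# Step 2: a reward model is fit so that reward(prompt, chosen) > reward(prompt, rejected).
# In a real pipeline it would be trained on preference_data; here a crude word-overlap
# heuristic stands in for that learned scorer.
def reward_model(prompt: str, response: str) -> float:
    prompt_words = set(prompt.lower().split())
    response_words = set(response.lower().split())
    return len(prompt_words & response_words) - response.lower().count("dirt")

# Step 3: use the reward model to steer generation. Real RLHF updates the model's
# weights with an RL algorithm; this sketch just re-ranks sampled candidates
# ("best-of-n"), which captures the same steering idea without any training.
def generate_candidates(prompt: str, n: int = 4) -> list[str]:
    # Placeholder for sampling n different responses from a language model.
    return [f"Candidate {i}: an answer about {prompt.lower()}" for i in range(n)]

def respond(prompt: str) -> str:
    candidates = generate_candidates(prompt)
    return max(candidates, key=lambda c: reward_model(prompt, c))

if __name__ == "__main__":
    print(respond("Explain photosynthesis."))
```

The re-ranking step stands in for the policy-optimization stage; the point is simply that human preference signals, rather than hand-written rules, decide which outputs the system favors.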

We also got a better sense of AI’s true carbon footprint. Researchers at AI startup Hugging Face and Carnegie Mellon University found that generating an image using an advanced AI model consumes as much energy as fully charging your smartphone. Until now, the exact amount of energy that generative AI uses has been a missing piece of the puzzle. More research into this could help us change the way we use AI to be more sustainable.

The AI apocalypse has gone mainstream

Talk of AI posing an existential risk to humanity became familiar last year. Hundreds of scientists, business leaders, and policymakers spoke up, from deep-learning pioneers Geoffrey Hinton and Yoshua Bengio to the CEOs of top AI companies, such as Sam Altman and Demis Hassabis, to California Congressman Ted Lieu and former Estonian president Kersti Kaljulaid.

Existential risk has become one of AI’s biggest memes. The hypothesis is that one day we will build an AI that is much more intelligent than humans, and this could lead to serious consequences. It’s an ideology espoused by many in Silicon Valley, including Ilya Sutskever, chief scientist at OpenAI, who played a key role in the ouster of OpenAI CEO Sam Altman (and his reinstatement a few days later).

But not everyone agrees with this idea. Meta AI leaders Yann LeCun and Joelle Pineau said these fears are “ridiculous” and that the conversation about AI risks has become “unbalanced.” Many other leading AI players, such as researcher Joy Buolamwini, argue that focusing on hypothetical risks distracts from the real harm that AI is currently causing.

However, increased attention on the technology’s potential to cause extreme harm has sparked many important conversations about AI policy and encouraged policymakers around the world to act.

The Wild West days of AI are over

Thanks to ChatGPT, everyone from the US Senate to the G7 was talking about AI policy and regulation last year. In early December, European lawmakers capped off an eventful policy year by agreeing on the AI Act, which will introduce binding rules and standards for developing the riskiest AI more responsibly. It will also ban certain “unacceptable” applications of AI, such as police use of facial recognition in public places.

The White House, in turn, introduced an executive order on AI, on top of voluntary commitments secured from leading AI companies. These efforts aimed to bring more transparency and standards to AI, and they gave agencies a lot of freedom to adapt the rules to their sectors.

One concrete policy proposal that received a lot of attention was watermarking: invisible signals embedded in text and images that computers can detect in order to flag content as AI-generated. Watermarks could be used to track plagiarism or help fight misinformation, and researchers have shown they can be applied to AI-generated text and images.
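As a rough illustration of how a text watermark can be detected statistically, here is a toy Python sketch loosely inspired by published “green list” schemes. The hashing rule and threshold are invented for demonstration; real watermarking methods, and image watermarks in particular, work differently and are far more robust.

```python
# Toy sketch of statistical text watermarking. During generation, a watermarking
# model would be nudged toward a pseudorandom "green list" of words seeded by the
# preceding word; a detector later checks whether a text contains far more green
# words than chance would predict. The hashing rule below is purely illustrative.

import hashlib

def is_green(previous_word: str, word: str, green_fraction: float = 0.5) -> bool:
    """Deterministically assign `word` to the green list, seeded by the previous word."""
    digest = hashlib.sha256(f"{previous_word}|{word}".encode("utf-8")).digest()
    return digest[0] < green_fraction * 256

def green_rate(text: str) -> float:
    """Fraction of words on the green list: near 0.5 for ordinary text,
    noticeably higher for text generated with the watermark bias applied."""
    words = text.lower().split()
    if len(words) < 2:
        return 0.0
    hits = sum(is_green(prev, cur) for prev, cur in zip(words, words[1:]))
    return hits / (len(words) - 1)

if __name__ == "__main__":
    sample = "the quick brown fox jumps over the lazy dog near the riverbank"
    print(f"green-word rate: {green_rate(sample):.2f}")
```

A real detector would also run a significance test on that rate (for example, a z-score over the expected green fraction) before declaring a text machine-generated.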

It wasn’t just legislators who were busy; lawyers were too. We saw a record number of lawsuits as artists and writers argued that AI companies had harvested their intellectual property without consent or compensation. In an exciting counteroffensive, researchers at the University of Chicago developed Nightshade, a new data-poisoning tool that lets artists fight back against generative AI by messing up training data in ways that could cause serious damage to image-generating AI models. There is a resistance brewing, and I expect more grassroots efforts to shift the balance of power in technology in the coming year.

(Source: MIT Technology Review)