An AI that tells generic jokes. Google DeepMind researchers asked 20 professional comedians to write jokes and comedy routines using popular large language models (LLMs). The results were mixed.
Comedians said the tools were useful for producing a rough initial draft they could then iterate on, and for helping them structure their routines. However, the AI was unable to produce anything original, stimulating, or, most importantly, funny. My colleague Rhiannon Williams has the full story.
As Tuhin Chakrabarty, a computer science researcher at Columbia University who specializes in AI and creativity, told Rhiannon, humor often relies on surprise and incongruity. Creative writing requires its creator to deviate from the norm, whereas LLMs can only imitate it.
And this is becoming explicit in how artists are approaching AI today. I just got back from Hamburg, Germany, which hosted one of the biggest events for creatives in Europe, and the message I heard from the people I spoke to was that AI is too flawed and unreliable to fully replace humans; instead, it is best used as a tool to augment human creativity.
We are at a moment when we are deciding how much creative power we are willing to cede to AI companies and tools. After the boom began in 2022, when DALL-E 2 and Stable Diffusion first hit the scene, many artists raised concerns that AI companies were copying their copyrighted works without consent or compensation. Tech companies argue that anything on the public internet falls under fair use, a legal doctrine that allows the reuse of copyrighted material in certain circumstances. Artists, writers, imaging companies, and the New York Times have filed lawsuits against these organizations, and it will likely be years before we have a clear answer as to who is right.
Meanwhile, the court of public opinion has shifted considerably in the last two years. Artists I interviewed recently said that when they protested AI companies' data collection practices in 2022, they were harassed and ridiculed.
Now, the public is more aware of the harms associated with AI. In just two years, people have gone from being amazed by AI-generated images to sharing viral posts on social media about how to opt out of AI data collection, a concept that was foreign to most laypeople until very recently. Companies have also benefited from this shift. Adobe has had success presenting its AI offerings as an "ethical" way to use the technology without worrying about copyright infringement.

There are also several grassroots efforts to change AI power structures and give artists more control over their data. I wrote about Nightshade, a tool created by scientists at the University of Chicago that lets users add an invisible layer of poison to their images so that they break AI models that scrape them. The same team is behind Glaze, a tool that lets artists mask their personal style from AI imitators.
Glaze has been integrated into Cara, a new art portfolio website and social media platform that has been attracting interest from artists. Cara presents itself as a platform for art made by people, filtering out AI-generated content. It gained almost a million new users in just a few days.
All of this should be reassuring news for any creative person worried about losing their job to software. And the DeepMind study is a great example of how AI can actually be useful to creatives: it can take on some of the tedious, mundane, and formulaic parts of the creative process, but it cannot replace the magic and originality that human beings bring. AI models are limited by their training data and will only ever reflect the zeitgeist at the time of training. That gets old really fast.
(Source: MIT Technology Review)