Generative AI models have excelled at conversation and at creating images, videos, and music, but they are still not very good at taking actions on our behalf. That could change with AI agents, models designed with a specific purpose. There are two main types.
The first type, known as tool-based agents, uses natural language to perform digital tasks. Examples include Anthropic’s agent, launched in October, which can fill out forms by browsing the web, and similar offerings from companies such as Salesforce and OpenAI.
The other type is known as simulation agents: models designed to behave like humans. The first people to build these agents were social science researchers who wanted to conduct studies that would be expensive, impractical, or unethical with real participants, so they used AI instead. This trend gained momentum following the publication of a 2023 study led by Joon Sung Park, a PhD candidate at Stanford, called “Generative Agents: Interactive Simulacra of Human Behavior.”
Recently, Park and his team published another study on arXiv titled “Generative Agent Simulations of 1,000 People.” In this work, the researchers conducted two-hour interviews with 1,000 people using AI. Shortly thereafter, the team was able to create simulation agents that replicated each participant’s values and preferences with impressive accuracy.
There are two really important takeaways here. First, it’s clear that leading AI companies no longer consider it enough to create impressive generative AI tools; they now need to develop agents capable of carrying out tasks for people. Second, it is becoming easier than ever to get these AI agents to imitate the behaviors, attitudes, and personalities of real people. What were once two distinct types of agents, simulation agents and tool-based agents, may soon become one: AI models that not only mimic your personality but also act on your behalf.
Research on this topic is ongoing. Companies like Tavus are working to help users create “digital twins” of themselves, and the company’s CEO, Hassaan Raza, plans to go further by developing AI agents that can take the form of therapists, doctors, and teachers.
If these tools become cheap and easy to create, many new ethical concerns will arise, but two in particular stand out. The first is that these agents could generate even more personal, and potentially more harmful, deepfakes. Image-generation tools have already made it simple to create nonconsensual pornography from a single photo of a person, but this crisis will only intensify if it also becomes easy to replicate someone’s voice, preferences, and personality. (Park told me that he and his team spent more than a year dealing with ethical issues like this in their latest research project, which involved many conversations with Stanford’s ethics board and drafting policies on how participants could withdraw their data and contributions.)
The second is more fundamental: do we have the right to know whether we are talking to an agent or to a human? If you participate in an interview with an AI and submit samples of your voice to create an agent that sounds and responds like you, do your friends or coworkers have the right to know when they are talking to it rather than to you? Conversely, if you call your cell phone carrier or your doctor’s office and a cheerful customer service agent answers, do you have the right to know whether you’re speaking to an AI?
This future seems distant, but it is not. And it is possible that, by the time we get there, even more urgent and pressing ethical questions will have emerged.
(Source: MIT Technology Review)