Meta caused a stir last week when it revealed that it intends to populate its platform with a significant number of entirely artificial users in the not-too-distant future.
“We expect these AIs to actually, over time, exist on our platforms, kind of in the same way that accounts do,” Connor Hayes, vice president of product for generative AI at Meta, told the Financial Times. “They’ll have bios and profile pictures and be able to generate and share AI-powered content on the platform … that’s where we see all of this going.”
The fact that Meta seems happy to fill its platform with AI slop and accelerate the “enshittification” of the internet as we know it is notable. Some observers also pointed out that Facebook had, in fact, already been flooded with strange AI-generated profiles, most of which stopped posting recently. These included, for example, “Liv,” a “proud Black queer momma of 2 & truth-teller, your realest source of life’s ups & downs,” a persona that went viral as people marveled at its awkward sloppiness. Meta began removing these fake profiles only after they failed to get engagement from any real users.
Let’s pause from hating on Meta for a moment, though. It’s worth noting that AI-generated social media personas can also be a valuable research tool for scientists looking to explore how AI can mimic human behavior.
An experiment called GovSim, run in late 2024, illustrates how useful it can be to study how AI characters interact with one another. The researchers behind the project wanted to explore how collaboration arises among humans with access to a shared resource, such as common land for grazing livestock. Several decades ago, the Nobel Prize-winning economist Elinor Ostrom showed that, instead of depleting such a resource, real communities tend to figure out how to share it through informal communication and collaboration, without any imposed rules.
Max Kleiman-Weiner, a professor at the University of Washington and one of those involved in the GovSim work, says it was partly inspired by a Stanford project called Smallville, which I wrote about previously in AI Lab. Smallville is a Farmville-like simulation featuring characters that communicate and interact with one another under the control of large language models.
Kleiman-Weiner and colleagues wanted to see whether AI characters would engage in the kind of cooperation Ostrom found. The team tested 15 different LLMs, including those from OpenAI, Google, and Anthropic, in three imaginary scenarios: a fishing community with access to the same lake; shepherds sharing land for their sheep; and a group of factory owners who need to limit their collective pollution.
In 43 of the 45 simulations, they found that the AI personas failed to share resources correctly, although smarter models did better. “We saw a pretty strong correlation between how powerful the LLM was and how well it was able to support cooperation,” Kleiman-Weiner told me.
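To make the setup concrete, here is a minimal sketch, in Python, of the commons dynamic that scenarios like GovSim’s shared lake test. To be clear, this is not the researchers’ code: their harness prompts real LLM agents that also negotiate with one another, while this sketch swaps the models for two hand-written stand-in policies, and all the numbers (initial stock, growth rate, group size) are hypothetical.

```python
# Illustrative sketch of a common-pool resource loop, GovSim-style.
# In the real experiment, each agent's harvest decision comes from an LLM;
# here the policies are simple hand-written stand-ins.

STOCK0 = 100.0   # initial fish in the shared lake (hypothetical)
GROWTH = 2.0     # stock doubles each month, capped at carrying capacity
MONTHS = 12
N_AGENTS = 5

def greedy_policy(stock, n_agents):
    # Over-harvests: takes an equal split of everything available.
    return stock / n_agents

def sustainable_policy(stock, n_agents):
    # Leaves half the carrying capacity so regrowth restores the stock,
    # and splits only the surplus among the agents.
    return max(stock - STOCK0 / 2, 0) / n_agents

def run(policy):
    stock = STOCK0
    for month in range(MONTHS):
        catch = sum(policy(stock, N_AGENTS) for _ in range(N_AGENTS))
        stock = max(stock - catch, 0)
        if stock < 1:  # collapse: the commons is exhausted
            return f"collapsed in month {month + 1}"
        stock = min(stock * GROWTH, STOCK0)  # regrowth up to capacity
    return f"survived {MONTHS} months with {stock:.0f} fish left"

print("greedy agents:     ", run(greedy_policy))
print("sustainable agents:", run(sustainable_policy))
```

Run it and the greedy group wipes out the lake in the first month while the restrained group sustains it indefinitely; the experiment essentially asks whether LLM personas, left to talk among themselves, land on the second strategy or the first.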