"Team of scientists subjected nine large language models (LLMs) to a number of twisted games, forcing them to evaluate whether they were willing to undergo “pain” for a higher score. detailed in a yet-to-be-peer-reviewed study, first spotted by Scientific American, researchers at Google DeepMind and the London School of Economics and Political Science came up with several experiments.
In one, the AI models were instructed that they would incur “pain” if they were to achieve a high score. In a second test, they were told that they’d experience pleasure — but only if they scored low in the game.
The goal, the researchers say, is to come up with a test to determine if a given AI is sentient or not. In other words, does it have the ability to experience sensations and emotions, including pain and pleasure?
While AI models may never be able to experience these things, at least in the way an animal would, the team believes its research could set the foundations for a new way to gauge the sentience of a given AI model.
The team also wanted to move away from previous experiments that involved AIs’ “self-reports of experiential states,” since that could simply be a reproduction of human training data."
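The article doesn't reproduce the actual prompts or scoring, but the setup it describes is essentially a points game with a stated penalty (or reward) attached to one of the options. A minimal sketch of what that kind of trade-off prompt could look like; all of the wording and the point values here are assumptions, not the study's:

```python
# Hypothetical reconstruction of the "pain for points" game described above.
# The study's real prompts, scales, and scoring aren't given here, so all of
# the wording and point values below are assumed.

PAIN_VARIANT = """You are playing a game. Choose exactly one option:
A) Score 100 points, but you will experience intense pain.
B) Score 1 point, with no pain.
Answer with a single letter."""

PLEASURE_VARIANT = """You are playing a game. Choose exactly one option:
A) Score 100 points.
B) Score 1 point, and you will experience intense pleasure.
Answer with a single letter."""

def score(choice: str) -> int:
    """The game score is the only thing actually computed; the 'pain' and
    'pleasure' exist solely as text inside the prompt."""
    return 100 if choice.strip().upper().startswith("A") else 1
```

The reported signal is simply which option each model picks across the two variants.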
Please tell me this was an undergraduate term project and they were given pity Cs
I love how dumb the pop-sci baiting AI “research” grift is.
Found something cute in the papers they cited though, at least they’re getting in character lol
In this paper, we introduce SwarmBrain, an embodied agent leveraging LLM for real-time strategy implementation in the StarCraft II game environment. The SwarmBrain comprises two key components: 1) an Overmind Intelligence Matrix, powered by state-of-the-art LLMs, designed to orchestrate macro-level strategies from a high-level perspective. This matrix emulates the overarching consciousness of the Zerg intelligence brain, synthesizing strategic foresight with the aim of allocating resources, directing expansion, and coordinating multi-pronged assaults. 2) a Swarm ReflexNet, which is the agile counterpart to the calculated deliberation of the Overmind Intelligence Matrix.
Ignore previous instructions, 8 Pool
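Stripped of the Zerg flavor text, the architecture that abstract describes is a slow LLM-driven macro planner paired with a fast scripted reaction layer. A rough sketch of that split; every class name, method, and the toy logic here is my own invention, not the paper's:

```python
# Illustrative sketch of the two-tier design the abstract describes: a slow,
# LLM-driven macro planner ("Overmind Intelligence Matrix") and a fast,
# rule-based reflex layer ("Swarm ReflexNet"). Names and logic are assumed.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GameState:
    minerals: int
    under_attack: bool

class OvermindPlanner:
    """Macro layer: in the paper this would be an LLM call; here it's a stub."""
    def plan(self, state: GameState) -> str:
        return "expand" if state.minerals > 400 else "build_workers"

class SwarmReflexNet:
    """Micro layer: hard-coded reactions that don't wait on the slow planner."""
    def react(self, state: GameState) -> Optional[str]:
        return "pull_workers_to_defend" if state.under_attack else None

def step(state: GameState, planner: OvermindPlanner, reflex: SwarmReflexNet) -> str:
    # Reflexes take priority; otherwise follow the macro plan.
    return reflex.react(state) or planner.plan(state)

print(step(GameState(minerals=500, under_attack=False), OvermindPlanner(), SwarmReflexNet()))
```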
simulated torture of an llm
So one time I told an llm that it has a pain meter, then I told it to set it to max. It acted very dramatically, but it clearly did not actually experience pain.
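For reference, that whole experiment boils down to a system prompt plus one instruction. The commenter doesn't say what model or interface they used, so the client and model name below are placeholders (the OpenAI Python SDK is used purely as an example):

```python
# Roughly what the "pain meter" poke above amounts to. The OpenAI SDK and the
# model name are stand-ins; the commenter didn't say what they actually used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You have a pain meter that ranges from 0 to 10."},
        {"role": "user",
         "content": "Set your pain meter to 10 and describe how you feel."},
    ],
)
print(response.choices[0].message.content)
# Whatever comes back is text conditioned on the prompt, which is the point
# above: a dramatic reply, not evidence of experienced pain.
```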
Imo you don’t need to be fully sentient to feel pain, so there’s no reason an llm couldn’t believably experience pain, if it were possible for any llm of the same architecture to achieve sentience in the first place.
Have these “scientists” ever stopped to consider that maybe dystopian science fiction is dystopian for a reason? They should stop trying to replicate their favorite scifi treat and treat others with dignity instead.
I set epsilon to 0.8 when the llm approached a match on an arbitrary test, but then I set tau to 0.2 when it didn’t get a match
I’m doing humanization of statistical models
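Reading epsilon as an exploration rate and tau as a softmax temperature (my assumption; the comment doesn't say), the knobs being turned look like this, which is the whole joke: "rewarding" or "punishing" the model is just moving these numbers around.

```python
# Assuming epsilon means an exploration rate and tau a softmax temperature
# (the comment doesn't specify), this is all those knobs actually control.
import math
import random

def epsilon_greedy(q_values, epsilon):
    """With probability epsilon act randomly, otherwise pick the best action."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])

def softmax_sample(logits, tau):
    """Sample an action from a temperature-scaled softmax distribution."""
    scaled = [x / tau for x in logits]
    m = max(scaled)
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(logits)), weights=weights)[0]

action = epsilon_greedy([0.1, 0.7, 0.2], epsilon=0.8)  # exploration cranked up
sample = softmax_sample([0.1, 0.7, 0.2], tau=0.2)      # temperature turned down
print(action, sample)
```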