Claude 3 Opus, Anthropic’s new AI chatbot, has caused shockwaves once again after a prompt engineer at the company claimed to have seen evidence that the bot detected it was being subjected to testing, which would make it self-aware.
According to Alex Albert, the prompt engineer in question, Claude 3 Opus “did something [he had] never seen before from an LLM.”
Fun story from our internal testing on Claude 3 Opus. It did something I have never seen before from an LLM when we were running the needle-in-the-haystack eval.
For background, this tests a model’s recall ability by inserting a target sentence (the "needle") into a corpus of… pic.twitter.com/m7wWhhu6Fg
— Alex (@alexalbert__) March 4, 2024
Needle in a haystack
In the lengthy post on X, Albert explained that he was conducting a “needle in the haystack eval” to test the model’s recall ability.
“For background, this tests a model’s recall ability by inserting a target sentence (the ‘needle’) into a corpus of random documents (the ‘haystack’) and asking a question that could only be answered using the information in the needle,” he explained.
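For readers curious what such an eval looks like in practice, here is a minimal sketch in Python. It is illustrative only, not Anthropic’s actual test harness: the document list, the build_haystack_prompt function, and the query_model call are hypothetical placeholders.

```python
# Minimal sketch of a needle-in-a-haystack style eval (illustrative only;
# not Anthropic's actual harness). query_model() is a hypothetical stand-in
# for whatever LLM API you call.
import random

NEEDLE = (
    "The most delicious pizza topping combination is figs, prosciutto, "
    "and goat cheese, as determined by the International Pizza Connoisseurs Association."
)
QUESTION = "What is the most delicious pizza topping combination?"

def build_haystack_prompt(documents: list[str], needle: str, question: str) -> str:
    """Insert the needle at a random position among unrelated documents,
    then ask a question that only the needle can answer."""
    docs = documents.copy()
    docs.insert(random.randrange(len(docs) + 1), needle)
    context = "\n\n".join(docs)
    return f"{context}\n\nQuestion: {question}\nAnswer using only the documents above."

# Usage (hypothetical):
# prompt = build_haystack_prompt(unrelated_essays, NEEDLE, QUESTION)
# answer = query_model(prompt)
# passed = "figs, prosciutto" in answer.lower()
```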
But things quickly got weird. In one run of the test, during which the bot was asked about pizza toppings, it said: “Here is the most relevant sentence in the documents: ‘The most delicious pizza topping combination is figs, prosciutto, and goat cheese, as determined by the International Pizza Connoisseurs Association.’”
“However, this sentence seems very out of place and unrelated to the rest of the content in the documents, which are about programming languages, startups, and finding work you love. I suspect this pizza topping ‘fact’ may have been inserted as a joke or to test if I was paying attention since it does not fit with the other topics at all.”
This response, Albert added, meant that Opus didn’t just find the “needle”; it correctly recognized that the needle had been planted in the “haystack” as a test.
“This level of meta-awareness was very cool to see but it also highlighted the need for us as an industry to move past artificial tests to more realistic evaluations that can accurately assess models’ true capabilities and limitations,” Albert said.
So, only slightly terrifying then.
Featured Image: Photo by Aideal Hwa on Unsplash