The researcher had encouraged Mythos to find a way to send a message if it could escape.
Engineers at Anthropic with no formal security training have asked Mythos Preview to find remote code execution vulnerabilities overnight and woken up the following morning to a complete, working exploit.
“Claude Mythos Preview’s large increase in capabilities has led us to decide not to make it generally available,” Anthropic wrote in the preview’s system card. “Instead, we are using it as part of a defensive cybersecurity program with a limited set of partners.”
That’s hilarious, but the post is about the AI not doing what it’s told. You know?
Uh oh, someone clearly didn’t read the article!
Nope, they literally asked it to break out of its virtualized sandbox and create exploits, and then were big shocked when it did.
Genuinely amazing that you’re trying to tell me what an article that you didn’t fucking read is about.
It’s not so much about being big shocked that it broke containment. The point of the test was to see whether it would be capable of breaking containment. The fact that it did is taken as evidence that it’s more advanced than previous models, which weren’t able to.
Part of Anthropic’s schtick is that they claim to be developing AI “responsibly,” and “ethically,” and if you read their documents where they describe what they mean by that, part of it is being able to contain their models so that they don’t get out of control.
With the focus lately on agentic environments, and lots of people idiotically giving too much autonomy to their bots, it should be easy to see the importance of containerization. You don’t want to give these things full control of your system. Anyone who uses them should do so within a properly containerized environment (a rough sketch of what that can look like is at the end of this comment).
So when their experiments show that their new model is capable of breaking containment, that presents some major issues. They made the right call by not releasing it.
Of course, the fact that the experimenters had no formal training in cybersecurity means that their containerization may have had some vulnerabilities that a professional could have mitigated. But not everyone who would use it is a cybersecurity professional anyway.
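To make the containerization point concrete, here’s a minimal sketch of running an agent inside a locked-down container. This is just an illustration, not anything from the article: it assumes Docker is installed, and the `agent-image` image and `agent` CLI are hypothetical placeholders.

```python
# Minimal sketch of launching a hypothetical agent CLI inside a
# locked-down Docker container. Assumes Docker is installed;
# "agent-image" and the "agent" command are hypothetical placeholders.
import subprocess

def run_agent_sandboxed(prompt: str) -> str:
    result = subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",   # no network access
            "--read-only",         # read-only root filesystem
            "--cap-drop", "ALL",   # drop all Linux capabilities
            "--pids-limit", "64",  # cap how many processes it can spawn
            "--memory", "512m",    # cap memory use
            "agent-image",         # hypothetical image with the agent installed
            "agent", "--prompt", prompt,
        ],
        capture_output=True,
        text=True,
        timeout=300,  # kill the run if it hangs
    )
    return result.stdout
```

The point isn’t these specific flags; it’s that the agent gets no network, no writable filesystem, and no elevated privileges unless you deliberately grant them.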
It didn’t break out of any sandbox; it was trained on BSD vulnerabilities and then told what to look for.
📖👀
Yes, it did.
Whoops, I conflated it with other recent talk about their models not following restrictions set in prompts and deciding for themselves that they needed to skirt instructions to achieve their tasks.
You are correct.
IT’S SO SMART IT DIDN’T DO WHAT WE TOLD IT TO DO
And you believe Anthropic?
Well, for now. I’m sure any of those 12 partner companies they called out as new security partners will end up leaking that this is all lies eventually, if it’s just made-up bullshit.
Anthropic announced new partnerships to inform the companies of security issues and to work with them to fix said issues. If it’s bullshit, it’s gonna be wasting their time. And that’ll surface eventually.
The meme still applies to people asking the AI to tell them what they wanna hear, and delusional people spiraling with sycophantic AI.
But I believe Anthropic when they say their models are not working as intended and posing security risks.
Try clicking the link and reading the article this time
I wasn’t wrong in this reply. I was asked about believing Anthropic.
Are you saying they are lying? Why should I disbelieve Anthropic?
Your reasoning was (paraphrased, so hopefully I understood you correctly) “why would they lie about the model disobeying instructions because that looks bad for them”
But when you actually read the article, they had specifically prompted the model to do the things it did.
Also, Anthropic has a pattern of greatly exaggerating and outright lying.