The UK and Ireland: Two Countries Normally Closely Aligned, but Vastly Different in Their Response to Mythos
Anthropic's Claude Mythos demonstrated autonomous end-to-end exploitation. The UK called it a board-level risk. Ireland said defenders still hold the advantage. Why two closely-aligned governments diverged, and what it means for vulnerability management teams over the next six months
The UK and Ireland rarely diverge on matters of security policy. They share intelligence relationships, regulatory frameworks, and a broadly common threat landscape. So when two governments look at the same AI development in the same week and reach conclusions that feel worlds apart, it's worth asking why, and more importantly, what it means for the security teams caught in the middle.
On 7th April 2026, Anthropic announced Claude Mythos Preview, an AI model capable of autonomously identifying and exploiting vulnerabilities across every major operating system and every major web browser. Access has been restricted to a consortium of around 40 technology companies while Anthropic and its partners work to patch what the model finds. In response, the UK government issued an open letter to every business leader in the country, advising that the vulnerability management risks posed by the next generation of AI models are a board-level concern. Meanwhile, Ireland's National Cyber Security Centre published a statement downplaying the risks and noting that defenders currently hold the advantage.
Both responses tell you something important. The gap between them tells you more.
Mythos Is Almost Beside the Point
It would be easy to focus on Mythos itself. The specific capabilities, the restricted access, Anthropic's decision-making. That would be a mistake.
The significance of Mythos isn't the model. It's what the model proved. For the first time, an AI system has demonstrated the ability to move autonomously through the full attack chain, from identifying a vulnerability to building and executing an exploit, without meaningful human involvement at any stage. A 27-year-old vulnerability in OpenBSD. Thousands of previously unknown critical flaws across closed and open-source systems. An autonomous exploit success rate that would have been unthinkable twelve months ago.
That capability now exists, and although these claims still need independent verification, it does tell us where the AI giants are focusing their efforts for future models. The access controls around Mythos are a contingency, not a resolution. Richard Browne, director of Ireland's NCSC, made exactly this point to the Oireachtas committee: "The issue is not that Anthropic has created this. The issue is that Anthropic has demonstrated this is possible."
Once a capability benchmark is established, the historical pattern is consistent: diffusion follows, faster than anticipated. Exploit kits, ransomware-as-a-service, and AI-powered phishing each followed the same arc. Browne told legislators directly that in five or six months, this capability will be in the hands of active state or criminal actors. The model that gets there may not be Mythos. It may be something built by a lab with fewer safety considerations and no interest in coordinated disclosure.
The question security teams need to be asking isn't "do attackers have Mythos?" It's "are we ready for what happens when they have something like it?"
For most security teams, the honest answer is no.
Two Governments, Two Responses
The UK's open letter, jointly signed by the Secretary of State for Science, Innovation and Technology and the Security Minister, was direct and unambiguous. AI is democratising elite attack capability. Attacks that previously required rare, expensive expertise are becoming accessible at scale. The threat applies to every business, every sector, every size. The letter gave business leaders three specific calls to action:
- Board-level governance
- Cyber Essentials certification
- Signing up to the NCSC's Early Warning Service
It also made clear that boards, not IT teams, own this risk.
What stands out about the UK framing is that it is trajectory-based. It doesn't anchor urgency to who has the capability today, but anchors it to where the capability is heading. That's a meaningful choice, because it means the advice ages well regardless of when access controls fail. The organisations that act on it now will be in a stronger position whether the next wave arrives in six months or eighteen.
Ireland's public statement took a different approach. The NCSC noted that Mythos represents a significant change in how vulnerabilities are identified and patched, praised Anthropic's restricted release as a responsible approach, and concluded that at present the advantage is with cyber defenders. Organisations were directed to implement the CyFun framework and maintain robust vulnerability management. That statement is factually accurate today, but it is anchored entirely to today, and today is a contingent fact rather than a durable one.
The more interesting signal sits in the Oireachtas committee transcript from the same week. Speaking to legislators, NCSC Director Richard Browne described a materially more urgent picture: Ireland has 12 to 18 months to be fully ready, a 60th percentile security posture won't be sufficient, and "everybody needs to get a first or they fail." Read alongside the public statement, the two communications reflect different audiences rather than different views, and that's the part security leaders should pay attention to.
This isn't unique to Ireland, and it isn't a failure of any particular agency. National cyber bodies issue public statements calibrated for the median reader and the current moment. That is structurally what they are built to do. The implication for security leaders is straightforward: official guidance is a useful input, but it is rarely the right input on its own. Your threat exposure isn't median, and the current moment has a defined shelf life. The leaders who will navigate the next eighteen months well are the ones building their own picture: drawing on official guidance, primary sources like committee transcripts, and continuous threat intelligence, and reconciling the differences for themselves.
The UK letter is a strong example of trajectory-based guidance. Used well, it is a brief to action. Used poorly, it becomes another document to file. Which it becomes is now a leadership question, not a policy one.
What This Means for Your Vulnerability Management Programme
The window between now and when AI-enabled autonomous exploitation reaches attacker hands is not a reason to delay. It's the only window available to prepare.
The problem most security teams face isn't awareness. It's capacity. Vulnerability backlogs are already unmanageable. The average enterprise is sitting on hundreds of known, unpatched vulnerabilities at any given time, triaged by teams that are already stretched, using processes that were designed for a threat environment that no longer exists. When the velocity of vulnerability discovery accelerates, and it will (whether through Mythos or its successors), the gap between what teams know about and what they can act on will widen catastrophically.
Moving at human speed won't be sufficient. The organisations that come through this transition well will be the ones that have already changed how they prioritise, not just what they prioritise.
Seven things deserve serious attention in your programme over the next six months.
Prioritise by exploitability. AI-driven discovery will flood the NVD and CISA KEV with new vulnerabilities. Being able to determine what's actually being exploited, or what's likely to be exploited, will need to drive your automated VM workflows. CVSS scores, KEV listings, and even individual scanner scores are lagging indicators on their own and won't get you there.
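As a minimal sketch of what exploitability-led triage looks like in practice, the snippet below ranks a small hypothetical backlog by an EPSS-style exploitation probability, a known-exploited flag, and asset criticality. The field names, weights, and CVE entries are illustrative assumptions, not a real feed schema.

```python
# Illustrative exploitability-led triage: rank a vulnerability backlog by
# likelihood of exploitation rather than by CVSS severity alone.
# All data and weights below are hypothetical, for illustration only.

def priority(vuln: dict) -> float:
    """Composite priority: exploitation probability dominates, with
    known-exploited status as a floor and asset criticality as a multiplier."""
    score = vuln["epss"]                      # EPSS-style probability, 0..1
    if vuln["known_exploited"]:               # e.g. present in a KEV-style list
        score = max(score, 0.9)               # treat as near-certain
    return score * vuln["asset_criticality"]  # criticality scaled 0..1

backlog = [
    {"cve": "CVE-2026-0001", "cvss": 9.8, "epss": 0.02,
     "known_exploited": False, "asset_criticality": 0.5},
    {"cve": "CVE-2026-0002", "cvss": 6.5, "epss": 0.74,
     "known_exploited": False, "asset_criticality": 1.0},
    {"cve": "CVE-2026-0003", "cvss": 7.2, "epss": 0.10,
     "known_exploited": True, "asset_criticality": 1.0},
]

ranked = sorted(backlog, key=priority, reverse=True)
for v in ranked:
    print(v["cve"], round(priority(v), 2))
```

Note that the CVSS 9.8 entry lands last: severity alone says patch it first, but nothing suggests anyone is exploiting it, while a mid-severity flaw with active exploitation jumps the queue.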
Compress your remediation timelines. Exploitation timelines are already collapsing. According to the latest Google M-Trends report, the industry is seeing exploitation occur seven days before a patch is released. Remediation timelines need to be measured in minutes, not months. The assumption that weeks are acceptable for high-severity vulnerabilities is already outdated and will become untenable.
Build vulnerability prediction into your prioritisation. AI-driven exploit development is largely based on reverse engineering patches. Automated assessment of patch complexity and vulnerability type should become factors in how you prioritise risk rather than just severity scores.
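To make that concrete, here is one hedged sketch of how patch characteristics could feed a prediction heuristic: smaller patches are easier to diff and reverse-engineer, and some vulnerability classes are easier to weaponise than others. The class weights and decay curve are invented for illustration; a real programme would calibrate them against observed exploit-development data.

```python
# Illustrative heuristic: fold patch characteristics into prioritisation.
# Weights and categories are assumptions, not calibrated values.

# Rough relative ease of building an exploit by reverse-engineering a patch:
VULN_TYPE_WEIGHT = {
    "memory_corruption": 0.6,   # often harder to weaponise reliably
    "auth_bypass": 0.9,         # usually quick once the patch diff is read
    "injection": 0.8,
}

def exploit_likelihood(vuln_type: str, patch_files_changed: int) -> float:
    """Scale the type weight down as the patch grows, on the assumption
    that a small, focused diff is easier to reverse (illustrative curve)."""
    size_factor = 1.0 / (1.0 + 0.1 * patch_files_changed)
    return VULN_TYPE_WEIGHT.get(vuln_type, 0.5) * size_factor

# A one-file auth-bypass fix ranks above a sprawling memory-corruption fix.
print(exploit_likelihood("auth_bypass", 1))
print(exploit_likelihood("memory_corruption", 40))
```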
Know your supply chains and open-source exposure. Supply chain risk, including Open Source Software, is growing rapidly and the first step to securing it is knowing what you actually use. Most security teams don't track this comprehensively. That leaves a significant exposure window which is exactly the kind of gap that turns a single dependency into an Axios-style incident.
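The first step described above, knowing what you actually use, can start very simply: turn a dependency manifest into a queryable inventory and check it against intelligence. The manifest text, package names, and watchlist below are hypothetical examples, not real advisories.

```python
# Minimal sketch of supply-chain visibility: parse a dependency manifest
# into an inventory and flag entries on a watchlist. All data is hypothetical.

MANIFEST = """\
requests==2.31.0
urllib3==1.26.5
left-pad-py==0.1.2
"""

# Hypothetical intel watchlist: package name -> affected version prefix
WATCHLIST = {"urllib3": "1.26"}

def parse_manifest(text: str) -> dict:
    """Parse 'name==version' lines into {name: version}."""
    deps = {}
    for line in text.splitlines():
        if "==" in line:
            name, version = line.strip().split("==", 1)
            deps[name] = version
    return deps

inventory = parse_manifest(MANIFEST)
flagged = {name: ver for name, ver in inventory.items()
           if name in WATCHLIST and ver.startswith(WATCHLIST[name])}
print(flagged)  # -> {'urllib3': '1.26.5'}
```

In practice this is what SBOM tooling automates across every repository and build, including transitive dependencies, but the underlying question is the same: which of the things we depend on just became dangerous?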
Augment your security testing. When attackers are automating reconnaissance and exploitation, manual penetration testing alone is no longer sufficient. External Attack Surface Management helps. AI-augmented testing tools aren't yet where they need to be, but they will be and the teams that build the muscle now will be ready when they mature.
Don't end the lifecycle at patch deployment. Historically, once a patch is deployed, the VM lifecycle often ends. That assumption no longer holds. With exploits appearing up to seven days before patches are publicly available, monitoring news and intelligence feeds for indicators of post-patch exploitation is now part of the job, not an afterthought.
Mind your team. Burnout in security teams is already a critical problem, and the velocity changes ahead will make it worse. How you support people through this transition will matter as much as the tooling you deploy. Teams that are exhausted before the wave arrives will not be the teams that handle it well.
You won't get to all seven of these in six months. The point isn't to do everything. The point is to know which of these gaps exists in your programme today and have a clear answer for each by the time the next wave arrives.
None of this is theoretical. These are the conversations we're having with security leaders across the UK right now. What was best practice six months ago is becoming baseline survival.
Why Your Existing Tools Won't Be Enough
Most security teams rely on scanner-based vulnerability management: periodic scans, CVE feeds, and manual triage processes built for a world where vulnerabilities emerged at a manageable pace and attackers needed time and skill to exploit them. That model is breaking.
Traditional scanners tell you what vulnerabilities exist on your systems at the point in time you ran the scan. They don't tell you which of those vulnerabilities are being actively discussed in online communities. They don't tell you which CVEs are being weaponised this week. They don't tell you that a vulnerability in a piece of software your third-party supplier uses was quietly added to an exploit kit three days ago. By the time a scanner flags a vulnerability, a team triages it, and a human reviews it, the window to act has often closed.
The shift that AI-enabled exploitation demands is from periodic, reactive vulnerability management to continuous, intelligence-driven exposure monitoring. The question can no longer be "what vulnerabilities do we have?" It has to be "which of our vulnerabilities are threat actors actively moving toward right now, which are they likely to target next, and how does that map against our specific infrastructure?"
That requires real-time threat intelligence that goes beyond CVE databases to tracking exploit development, threat actor tooling, emerging attack patterns, and translating that signal into prioritised, actionable guidance for your specific environment. It requires moving faster than the human-speed processes most teams are still running.
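The core of that capability is a continuous join between threat activity and your asset inventory, so the output answers "which of our systems sit in the path of current attacker activity" rather than "what CVEs exist". The sketch below shows the shape of that join with hypothetical feed entries and assets; a real pipeline would consume live intelligence and a CMDB.

```python
# Sketch of intelligence-led exposure monitoring: join a stream of
# threat-activity signals against an asset inventory. All records are
# hypothetical, for illustration only.

intel_feed = [
    {"cve": "CVE-2026-1111", "product": "acme-vpn", "status": "exploited_in_wild"},
    {"cve": "CVE-2026-2222", "product": "widget-cms", "status": "poc_published"},
]

assets = [
    {"host": "edge-gw-01", "product": "acme-vpn", "internet_facing": True},
    {"host": "intranet-05", "product": "widget-cms", "internet_facing": False},
]

def exposures(feed, inventory):
    """Yield (host, cve, urgency), where urgency reflects both current
    threat activity and whether the asset is reachable from the internet."""
    for signal in feed:
        for asset in inventory:
            if asset["product"] == signal["product"]:
                urgent = (signal["status"] == "exploited_in_wild"
                          and asset["internet_facing"])
                yield (asset["host"], signal["cve"],
                       "act-now" if urgent else "queue")

for host, cve, urgency in exposures(intel_feed, assets):
    print(host, cve, urgency)
```

An internet-facing gateway running actively exploited software comes out as "act-now"; an internal system matching only a proof-of-concept goes to the queue. The prioritisation logic is trivial here on purpose: the hard part is the freshness and coverage of the feed, not the join.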
The organisations that invest in that capability now will have a structural advantage when the wave arrives. The ones that don't will be triaging with the same slow tools against a threat that has fundamentally changed its pace.
Where to Start
If you've read this far, you're already ahead of most. The next step isn't a procurement decision. It's an honest conversation about where your current vulnerability management programme will struggle when attackers have access to the next generation of security-focused AI models such as Mythos.
That's the conversation we're having with security leaders across the UK right now. Not a product demo. A working session on where your current exposure sits, which threat actor activity is most relevant to your infrastructure today, and what a continuous, intelligence-led vulnerability management posture would look like for your specific environment.
Happy to chat if any of this resonates. No pitch. No slides. Just a conversation about where your programme sits and where it might struggle.
Matt, CEO & Founder