OPINION AI vendors: "You need to use AI to fight AI threats (and do everything else in your corporate IT environment)." Also AI vendors: "That's not a security flaw; it's working as intended."

This pattern has become increasingly common as the digital hypemeisters tell businesses to use AI to do all the things, especially when it comes to detecting and blocking security issues. That is – until a security flaw exists in the AI itself, and then it's "expected behavior" or a "by-design risk."

Maybe, if we're lucky, the AI company at fault will quietly publish new security considerations in its documentation. But the root problem doesn't get fixed. In some cases – like prompt injection – the vendors can't really fix the flaw, even if they wanted to.

A couple of recent examples show how this plays out.

Researchers recently showed how three popular AI agents that integrate with GitHub Actions can be hijacked to steal API keys and access tokens. The three agents are Anthropic's Claude Code Security Review, Google's Gemini CLI Action, and Microsoft's GitHub Copilot, and all three vendors paid out bug bounties for the discoveries.

Anthropic paid a $100 bounty, upgraded the critical severity from a 9.3 to 9.4, and updated a "security considerations" section in its documentation. Google paid a $1,337 reward for the finding. And GitHub, after first calling this a "known issue" that they "were unable to reproduce," ultimately paid a $500 prize to the researchers for their disclosure.

None of the vendors assigned CVEs or published public security advisories.

Another bug-hunting team disclosed a design flaw baked into Anthropic's Model Context Protocol (MCP) that, they said, puts as many as 200,000 servers at risk of complete takeover.

They "repeatedly" asked Anthropic to patch the root issue, and were repeatedly told the protocol works as intended – despite 10 (so far) high- and critical-severity CVEs issued for individual open source tools and AI agents that use MCP.

A root patch, the bug hunters claim, could have reduced risk across software packages totaling more than 150 million downloads and protected millions of downstream users.

Anthropic's reasoning for not fixing the flaw? Expected behavior. "This is an explicit part of how MCP stdio servers work and we believe this design does not represent a secure default," the AI company told the researchers.

This means that, yet again, the very messy issue of securing these complex, non-deterministic AI systems gets pushed down the line to IT shops or end users. In this case, that includes any developer using Anthropic's official MCP software development kit in their apps or open source projects, plus any company bringing this open source code and AI tools into their environment.

Zooming out even further for a minute: it's worth noting the total lack of US federal AI regulations over restricting AI companies. That's despite the fact that one of them (Anthropic) last week warned its latest model is so skilled at finding security flaws that it would be much too dangerous to release it to the public. It's hard to imagine any other sector in which a company could tell people "our product puts everyone at grave risk" and still be allowed to operate with impunity.

Something I try to impart to my kids is the idea that being mature and earning trust means taking responsibility for their choices and actions, and owning their mistakes. That includes admitting when they are wrong, fixing their mistakes when possible, and making a course correction so that they can be better next time.

All of this "wasn't-me" behavior from AI companies saying security is someone else's problem to solve shows a total lack of maturity – or even common decency. One only wonders how long it will be before customers reach the same conclusion. ®