AI-Proof - Weekly AI Pulse
A concise summary of the week’s most important AI developments
Executive Summary
This was the week AI capability met market reality. The US software index is down 24% year-to-date. IBM lost tens of billions in a single session. Nassim Taleb warned of software bankruptcies. Markets are no longer pricing AI as potential. They are pricing it as disruption.
The triggers were concrete: live COBOL modernisation, autonomous vulnerability scanning, a $60 billion chip deal, and export control allegations against China’s leading AI lab. Meanwhile, the Pentagon summoned Anthropic’s CEO for a direct confrontation over military AI restrictions, and the 11 March deadline for the US Commerce Department to evaluate state AI laws approaches.
For businesses, the signal is clear: the potential gap between “AI can assist your work” and “AI can perform your work” narrowed materially this week, and markets are repricing accordingly.
Our newsletter is non-hype and we don’t want to come across as alarmist, so the word “potential” matters: markets are reacting to frontier innovations (new releases), while in most businesses adoption is still at the co-pilot and minute-taker level of maturity. However, it is important not to ignore these events, as they will eventually shape our business environment.
Why this matters to businesses
The software repricing is structural, not sentiment
The $2 trillion wiped from software stocks reflects a market recognising that AI agents can now perform work entire SaaS categories were built to support: vulnerability scanning, legacy code modernisation, contract review, expense processing. If your business model relies on selling seats for tasks an agent can handle, it’s time to conduct a risk assessment: the commercial pressure will come (AI adoption still lags well behind innovation).
Vendor concentration risk is real and growing
Meta signed a $60 billion chip deal with AMD alongside its existing Nvidia commitment. Nvidia is investing $30 billion directly into OpenAI. The hyperscalers and model providers are locking into deep, exclusive partnerships. Businesses should audit their AI supply chain dependencies now, before vendor lock-in becomes vendor lock-out.
Model IP is harder to protect than you think
The distillation exposé revealed that frontier model outputs can be systematically harvested and replicated through fraudulent API access at scale. If you are building differentiated products or services on top of model APIs, the competitive moat may be narrower than assumed. Assess how defensible your AI-enabled advantages actually are.
Your action this week: Identify one incumbent software tool your team relies on where the core function can now be performed or enhanced by AI. Run a 48-hour parallel test, not to replace it immediately, but to understand the gap. The businesses that move first on this will compound savings. The ones that wait will be paying legacy pricing for commodity capability.
This Week’s Policy & Regulation Brief
DeepSeek allegedly trained next model on banned Nvidia Blackwell chips
A senior Trump administration official told Reuters that DeepSeek's forthcoming V4 model was trained on Nvidia's most advanced Blackwell chips, likely housed in a data centre in Inner Mongolia, in potential violation of US export controls. The US is investigating. DeepSeek may strip technical indicators that reveal chip usage before release. The model is expected within days. If confirmed, this would be the most significant export control breach in the AI era.
Anthropic-Pentagon standoff escalates to direct confrontation
The dispute we flagged last week reached a new level. Defence Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to a Tuesday meeting officials described as "not a get-to-know-you meeting." The Pentagon wants unrestricted military use of Claude; Anthropic maintains red lines on autonomous weapons and mass domestic surveillance. Officials threatened to designate the company a "supply chain risk," a label normally reserved for hostile foreign actors. Claude remains the only AI model approved for the Pentagon's classified networks.
Anthropic calls out China's AI copycats
Anthropic published detailed evidence that DeepSeek, Moonshot AI, and MiniMax conducted 16 million fraudulent API exchanges through 24,000 fake accounts to extract Claude's reasoning, coding, and agentic capabilities. MiniMax alone ran 13 million exchanges. Anthropic traced activity to specific researchers at DeepSeek and explicitly framed the distillation as undermining the competitive advantage US export controls were designed to preserve.
Amazon’s AI tool Kiro blamed for AWS outages, raising safety and reliability concerns
Amazon’s internal AI automation caused at least two significant service disruptions, including a 13-hour outage, spotlighting early risks of AI integration in critical cloud operations. Although Amazon attributed the incidents to user error, the episode underscores the importance of rigorous AI safety protocols.
Commerce Department AI preemption deadline looming at 11 March
The Commerce Department must publish its evaluation identifying "onerous" state AI laws by 11 March. States with non-compliant laws risk losing federal broadband funding. Colorado's AI Act is specifically named. This will be a pivotal moment in determining whether AI regulation in the US is driven federally or state-by-state.
Anthropic drops $20 million into pro-regulation super PAC
Anthropic committed $20 million to Public First Action, directly countering the $125 million Leading the Future PAC backed by OpenAI co-founder Greg Brockman and Andreessen Horowitz. Over $200 million is now committed to AI-related midterm election spending, making AI regulation a defining 2026 campaign issue.
Australia scraps AI Advisory Body
The Australian government abandoned plans for a permanent external AI Advisory Body after 15 months of work and approximately $200,000 spent, opting instead for an internal AI Safety Institute. Critics warn Australia risks repeating the regulatory failures seen with social media.
Model & Platform Updates
Claude Code Security and COBOL modernisation tools launch
Two Anthropic product launches dominated market headlines. Claude Code Security is an autonomous vulnerability scanner that reasons about codebases like a security researcher, tracing data flows and catching context-dependent vulnerabilities. Using Opus 4.6, the team discovered 500+ previously undetected bugs in open-source projects. Separately, Claude Code can now map dependencies across millions of lines of COBOL, document workflows, and migrate legacy code. The combined market impact: tens of billions wiped off IBM and cybersecurity stocks.
Enterprise agent programmes expand from both OpenAI and Anthropic
Both leading AI labs made enterprise moves this week. Anthropic unveiled pre-built agentic workflows for finance, legal, HR, and engineering through Cowork plugins, with enterprise admin controls and new connectors for Gmail, DocuSign, and Clay. OpenAI announced multi-year Frontier Alliances with Accenture, BCG, McKinsey, and Capgemini, embedding forward-deployed engineers alongside consulting teams. The battle for enterprise AI distribution is now fully joined.
Google ships Gemini 3.1 Pro with doubled reasoning performance
Gemini 3.1 Pro more than doubled reasoning performance over 3 Pro on ARC-AGI-2, scoring 77.1% at unchanged pricing. The rollout went live across the Gemini app, Vertex AI, and Google AI Studio on the same day, and Alphabet’s stock jumped 4% on the enterprise rollout.
Samsung Galaxy AI expands multi-agent ecosystem
Samsung announced Perplexity as its second integrated AI agent for Galaxy S26 alongside Gemini, with broader multi-agent support allowing users to choose task-specific AI assistants. Nearly 80% of Samsung users now rely on two or more AI agents daily.
Google brings AI music generation to the mainstream with Gemini integration
Google’s Lyria 3 music model embedded in Gemini enables users to create AI-generated songs with lyrics and cover art, broadening creative accessibility. This expansion into AI music democratisation pairs generation with provenance measures like SynthID watermarks.
WordPress integrates an AI assistant directly into its site editor
Automattic's native AI assistant in WordPress.com uses Google’s Nano Banana models to offer in-editor content editing, layout suggestions, and image generation. This seamless AI infusion into a leading CMS signals major steps toward widespread AI-powered content creation.
DBS pilots system letting AI agents make payments autonomously
DBS Bank and Visa’s pilot enables AI agents to handle real-time payment transactions securely for routine purchases, marking a shift of AI from advisory roles into automated financial execution. The trial addresses key security and liability concerns with tokenised credentials and issuer-managed approvals.
Quick Hits
xAI co-founder exodus continues - Half of xAI's original 12 co-founders have now departed, with at least three forming a new stealth venture. Musk frames the departures as reorganisation for execution speed ahead of a planned 2026 IPO.
Bridgewater warns AI capex entering "more dangerous phase" - Big Tech set to spend $650 billion on AI infrastructure in 2026, up 59% from $410 billion in 2025. Co-CIO Greg Jensen warned AI leaders can no longer meet investor expectations "without posing existential threats to other sectors."
Perplexity’s retreat from ads signals a strategic shift toward enterprise focus - The AI search startup pivots away from consumer advertising to concentrate on sustainable enterprise sales.
How Silicon Valley has long ignored China’s looming Taiwan invasion and its chip supply impact - US officials’ warnings to major chip customers underscore looming geopolitical risks for global AI semiconductor supply chains.