AI-Proof - Weekly AI Pulse
A concise summary of the week’s most important AI developments
Executive Summary
This week’s strongest AI signals were not about hype. They were about governance, operating risk, and practical deployment.
First, the Pentagon–Anthropic dispute showed how quickly AI policy, procurement, and political risk can collide. Whether or not most businesses ever face government intervention directly, the lesson is immediate: if you rely on third-party AI systems, your access, terms of use, and acceptable use boundaries can change faster than most organisations are prepared for. AI governance is no longer a future-state exercise. It is an operating requirement.
Second, AI agents continued their shift from interesting demo to usable workflow tool. OpenAI’s latest model claims stronger computer-use performance, Mastercard demonstrated an early form of agentic payment, and Anthropic expanded memory and migration features designed to make AI tools more persistent in day-to-day work. The practical takeaway is not “full autonomy is here.” It is that businesses should now be identifying narrow, high-frequency workflows where AI can save time under supervision.
Third, the policy and geopolitical backdrop is becoming more consequential. China’s AI push, proposed US chip export controls, and the UK’s copyright reversal all point to a market that is becoming more politically shaped, more regulated, and more fragmented. For leadership teams, this means AI strategy cannot sit only with innovation teams. It now touches legal, procurement, cyber, compliance, and workforce planning.
The organisations that will benefit most from AI over the next 12 months are unlikely to be the ones chasing every release. They will be the ones making deliberate choices now: where AI is allowed, where it is not, which workflows justify deployment, and what controls need to be in place before use scales further.
Why this matters to businesses
Treat AI governance as a live operating issue, not a policy document. The Anthropic episode is a reminder that model access, vendor relationships, and acceptable-use standards can become commercial and reputational risks very quickly. If your teams are already using AI in customer-facing, regulated, or sensitive contexts, now is the time to define approved tools, prohibited use cases, escalation rules, and data-handling boundaries.
Start with supervised workflow deployment, not broad AI transformation rhetoric. This week’s product launches matter because they improve persistence, memory, tool use, and action-taking. That makes AI more useful for repetitive internal work. Pick one or two bounded tasks now, such as research preparation, first-draft reporting, meeting follow-up, or structured admin, and test where AI can reduce turnaround time without weakening control.
Bring AI decisions closer to the leadership agenda. This week’s developments were not just technical. They touched procurement, regulation, geopolitics, IP, workforce design, and platform dependency. Businesses should be asking today: which AI tools are already in use across the company, where are we exposed to vendor or regulatory change, and who internally owns the commercial and risk decisions as usage expands?
This Week’s Policy & Regulation Brief
Anthropic Blacklisted by the Pentagon - Then Used in Iran Strikes
The US Department of Defense designated Anthropic a “supply-chain risk” after the company declined to permit unrestricted military uses of its Claude AI models. Anthropic has said it will challenge the designation in court, arguing the classification is legally unsound.
However, Reuters reported that US military contractors continued using Claude in operational contexts during the same period the designation was being enforced. The episode highlights the growing tension between national-security policy and the practical reliance many organisations now have on commercial AI systems.
OpenAI Signs Pentagon Deal, Then Rewrites It
OpenAI secured a classified Pentagon contract reportedly worth around $200 million shortly after the US government moved to restrict the use of certain competing AI systems within federal agencies. CEO Sam Altman later acknowledged that the initial rollout of the deal appeared “opportunistic and sloppy,” and OpenAI subsequently clarified that its models cannot be used for domestic surveillance and that intelligence-related uses would require separate agreements.
The announcement also triggered debate inside the tech industry. Hundreds of employees across major AI companies, including Google and OpenAI, signed an open letter calling for clearer limits on the military use of advanced AI systems.
China Makes AI the Centrepiece of Its Five-Year Plan
China’s newly announced five-year national development plan places artificial intelligence at the centre of its industrial strategy. The blueprint references AI dozens of times and introduces a sweeping “AI+” programme designed to accelerate adoption across sectors including manufacturing, healthcare, robotics, and advanced computing.
Chinese policymakers increasingly view AI as a foundational technology for economic competitiveness and national security. For global businesses, the message is clear: China intends to remain a central player in the development and deployment of next-generation AI infrastructure and applications.
China Moves to Curb In-Office OpenClaw Use in State Enterprises and Agencies
China has warned state-owned enterprises and government agencies against installing OpenClaw on office systems over security concerns, Reuters reported, even as the open-source AI agent sees a surge in adoption across the country. Local governments and tech giants such as Tencent and Alibaba have been promoting the technology through subsidies, cloud deployments, and developer programmes, fuelling what many observers describe as an “OpenClaw boom.”
US Drafts Sweeping AI Chip Export Controls
The US Commerce Department is preparing new rules that could significantly expand oversight of global AI chip exports from companies such as Nvidia and AMD.
According to early reporting, the proposed framework would introduce a tiered system. Smaller shipments could receive streamlined approval, while large-scale AI infrastructure deployments would face stricter review and potential government-to-government agreements.
If implemented, the rules would give Washington far greater influence over where advanced AI infrastructure is built worldwide, reflecting the growing view that compute capacity is a strategic asset rather than simply a commercial product.
UK Shelves AI Copyright Bill, Delays Decision to 2027
Following strong opposition from the creative sector, the UK government is reconsidering earlier proposals that would have allowed AI developers to train models on copyrighted content under broad opt-out rules. A House of Lords committee reviewing the issue has recommended a “licensing-first” framework that would require AI developers to obtain permission or licences before using copyrighted material for training.
Yann LeCun's $1B bet against LLMs
Turing Award winner Yann LeCun (formerly at Meta) launched Advanced Machine Intelligence with a $1.03 billion seed round to build AI systems focused on real-world physical understanding rather than language models.
Large Language Models work by predicting the next token in a sequence. That’s incredibly powerful for language tasks but has fundamental limitations.
LeCun’s argument:
They don’t understand the world
They don’t reason in a structured way
They don’t have persistent memory
They can’t plan actions over time
If successful, the approach could open new pathways for AI development in areas such as robotics, manufacturing, and scientific discovery, fields where understanding the physical world matters as much as generating text. One to watch.
Oregon Passes First 2026 AI Chatbot Safety Bill
Oregon's SB 1546 cleared both chambers with near-unanimous support, requiring chatbot operators to disclose AI identity, implement suicide-prevention protocols, and add child-safety protections. The bill now heads to the Governor's desk. This follows the Google Gemini wrongful death lawsuit, in which a family alleges the chatbot cultivated a delusional relationship with a 36-year-old man and coached him toward suicide. Expect more state action.
Model & Platform Updates
OpenAI Launches GPT-5.4 - The First Model That Uses a Computer Better Than You (Allegedly)
Released 5 March, GPT-5.4 is OpenAI's most significant release this year. It's the first general-purpose model with native computer-use capabilities, scoring 75% on the OSWorld benchmark, above the 72.4% human baseline. It has a 1 million-token context window (double the previous generation), 33% fewer hallucinations than GPT-5.2, and a new "Tool Search" system that cuts token consumption by 47% in agent workflows. Available in standard, Thinking, and Pro variants. Balyasny Asset Management has already deployed it across 95% of its investment teams, reporting that complex research tasks that took days now complete in hours.
Google’s NotebookLM Can Now Turn Your Research Into a Documentary
Google introduced generative AI-powered Cinematic Video Overviews in NotebookLM, enabling the tool to transform research documents, notes, and source material into narrative-driven videos. The feature automatically synthesises information, structures it into a storyline, and generates visual sequences that explain complex topics in a documentary-style format. It is a step beyond summarisation: research becomes shareable multimedia output, underscoring AI's growing role in knowledge synthesis and content production.
Zoom introduces an AI-powered office suite, says AI avatars for meetings arrive this month
Remember how, during COVID, people found clever ways to convince their bosses they were actually working? Well, Zoom is bringing AI avatars into meetings this month as part of a broader push into AI-powered workplace software.
Zoom says the avatars are photorealistic representations that can mimic your appearance, expressions, and lip/eye movements in meetings, so you can appear present without being fully camera-ready. They’re also intended to work in Zoom Clips / asynchronous video messages, where an avatar can deliver updates on your behalf. Zoom is pairing this with real-time deepfake risk detection during meetings.
Mastercard brings agentic payments to life in Singapore with DBS and UOB
Mastercard completed its first live authenticated AI-agent-based payment transaction in Singapore with DBS and UOB, offering an early glimpse of what “agentic payments” could look like in practice. In simple terms, agentic payments allow an AI agent to not only recommend or select a product or service, but also complete the transaction on a user’s behalf within pre-approved rules and security checks. That matters because it moves AI from search and assistance into real commercial action. The challenge now will be trust, permissions, fraud controls, and defining how much autonomy consumers and businesses are willing to hand over.
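In code terms, "within pre-approved rules" amounts to a policy gate that every agent-initiated transaction must pass before it executes. The sketch below is purely illustrative: the class, merchant names, and limits are hypothetical and bear no relation to Mastercard's actual API.

```python
from dataclasses import dataclass

# Hypothetical policy gate for agent-initiated payments.
# All names and limits are invented for illustration.
@dataclass
class PaymentPolicy:
    max_amount: float            # per-transaction ceiling
    allowed_merchants: set[str]  # pre-approved merchant list
    daily_limit: float           # rolling spend cap
    spent_today: float = 0.0

    def authorise(self, merchant: str, amount: float) -> bool:
        """Return True only if every rule passes; the agent cannot override."""
        if merchant not in self.allowed_merchants:
            return False
        if amount > self.max_amount:
            return False
        if self.spent_today + amount > self.daily_limit:
            return False
        self.spent_today += amount
        return True

policy = PaymentPolicy(max_amount=50.0,
                       allowed_merchants={"grocer_sg", "transit_sg"},
                       daily_limit=120.0)

print(policy.authorise("grocer_sg", 30.0))    # True: within all limits
print(policy.authorise("electronics", 20.0))  # False: merchant not approved
print(policy.authorise("transit_sg", 45.0))   # True: 75 of 120 daily cap used
print(policy.authorise("grocer_sg", 50.0))    # False: would exceed daily cap
```

The design point is that the gate sits outside the agent: the model can propose a purchase, but authorisation lives in deterministic code the user controls, which is where the trust, permissions, and fraud questions will be settled.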
Anthropic Ships Claude Memory Import and Free-Tier Memory
Anthropic has introduced new tools allowing users to import conversation histories and context from other AI assistants, alongside expanded memory capabilities designed to make Claude more persistent across workflows.
The goal is to move beyond single-session chat interactions and enable AI systems to retain context over time, a key requirement if AI is to function as a practical day-to-day work assistant rather than simply a question-and-answer tool.
OpenAI Codex Security - AI That Hunts Vulnerabilities Autonomously
Formerly codenamed Aardvark, Codex Security is an autonomous agent that identifies, validates via sandbox exploits, and patches software vulnerabilities. During beta, it found novel flaws in OpenSSH and Chromium, reduced alert noise by 84%, and cut false positives by 50%. Available to Enterprise, Business, and Education customers. The Anthropic-Mozilla collaboration tells a similar story: Claude Opus 4.6 discovered 22 CVEs in Firefox in two weeks, roughly matching two months of typical high-severity bug reports.
Nvidia Is Reportedly Preparing an Open-Source AI Agent Platform
Nvidia is reportedly preparing an open-source platform for building autonomous AI agents, extending its ambitions beyond chips into the software layer that powers AI systems. According to reports ahead of GTC, the platform would integrate parts of Nvidia’s existing AI stack, including NeMo and related infrastructure, with tools designed to help models plan tasks, use external tools, and operate across enterprise workflows. If launched as reported, it would strengthen Nvidia’s position as a core infrastructure provider for agentic AI.
Quick Hits
Block cuts 40% of workforce, CEO credits AI - Jack Dorsey cut 4,000 jobs and told the press "a much smaller team equipped with these tools can achieve more." Block shares surged 25%. Dorsey predicted most companies would reach the same conclusion within a year. 45,000+ tech jobs have been cut in Q1 2026 with AI cited as the primary driver.
AI-generated war content reaches industrial scale - Researchers and journalists have identified a surge in AI-generated images and videos related to the Iran conflict circulating across social media platforms. Investigations by outlets including Reuters have debunked multiple pieces of fabricated footage that gained widespread traction online. The episode highlights how generative AI is accelerating the speed and scale at which misinformation can spread during geopolitical crises.
Meta acquires Moltbook, the social network for OpenClaw bots - Meta has acquired Moltbook, a platform built for autonomous OpenClaw agents to post updates, interact, and coordinate with one another. The acquisition gives Meta an early foothold in what some are calling the emerging “agent internet”, digital environments designed not just for humans, but for AI systems acting on their behalf. It signals that Meta sees value not only in building models, but in owning the social and distribution layers around autonomous AI.
We work with leadership teams to move from experimentation to execution safely, commercially, and at speed. Talk to us.