AI-Proof - Weekly AI Pulse
A concise summary of the week’s most important AI developments
Executive Summary
This was the week the AI bull market met the income statement. On Wednesday Microsoft, Alphabet, Meta and Amazon all reported in a single afternoon, the busiest earnings day in tech history. The numbers were broadly strong (Microsoft’s AI run rate hit $37 billion, up 123% year on year, and Alphabet’s Cloud cleared $20 billion in a quarter for the first time), but the market punished Meta for raising 2026 capex guidance to $125–145 billion and Amazon for a soft Q2 outlook. Apple reports today. The combined 2026 AI capex commitment across the four hyperscalers is now north of $650 billion, a number that only works if revenue keeps compounding.
It was also the week three structural pillars of the AI market shifted at once. The Wall Street Journal reported on Tuesday that OpenAI has missed internal revenue and weekly-active-user targets, prompting a sharp sell-off in Oracle, SoftBank and the AI chipmakers. China’s NDRC ordered Meta to fully unwind its $2 billion acquisition of agentic AI startup Manus, the first time Beijing has forced the reversal of a cross-border AI deal between non-Chinese parties. And Microsoft and OpenAI quietly dissolved their exclusive licensing arrangement on Monday, freeing OpenAI to sell across AWS and Google Cloud and ending Microsoft’s revenue share. AWS moved within 48 hours, putting GPT-5.5, GPT-5.4 and Codex into Bedrock.
On the product side, OpenAI shipped GPT-5.5 to all paid ChatGPT and Codex tiers, DeepSeek launched the open-source V4 family running natively on Huawei chips, Mistral released Medium 3.5 with autonomous “Vibe” coding agents, and Gemini gained native file export to Word, Excel and PDF. The practical takeaway for UK leaders this week is simpler than it looks: the cost of switching AI vendors has just dropped sharply, and the menu of usable tools has widened materially. Concrete next steps below.
What to Try This Week
You do not need to pick a side in the OpenAI versus Anthropic versus Google argument to benefit from this week. Three things are worth a serious hour of testing before Friday.
1. Test GPT-5.5 on a real piece of work. At Lighthouse it is back as our favourite LLM. It launched on 23 April to Plus, Pro, Business and Enterprise users in ChatGPT and Codex, with a 400K context window and a noticeably faster response mode. Greg Brockman called it “a new class of intelligence”. The honest test is not the demo videos. Give it a strategy memo, a complicated spreadsheet analysis, a long board pack, or a multi-step research brief and judge it on output. API access is still listed as “coming very soon”, so for now this is a ChatGPT-only test.
2. Pilot DeepSeek V4 on a routine workflow. Finally: DeepSeek released V4-Pro and V4-Flash as fully open-weight models on 24 April. V4-Flash runs at roughly $0.28 per million output tokens, which is a fraction of what Western frontier models cost, and the quality is now in the same conversation as GPT-5.4 on coding and reasoning benchmarks (third-party verification still pending). For non-sensitive, high-volume tasks (document summarisation, first-draft research, simple agent loops) it is worth running V4-Flash in parallel against your current model and comparing on the cost-per-acceptable-output metric. This is the first credible “drop in for cheap” option in months.
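To make that comparison concrete, here is a minimal sketch of the cost-per-acceptable-output calculation. The $0.28 Flash price is the launch figure quoted above; the incumbent price, token counts and acceptance rates are illustrative placeholders to be replaced with your own pilot data:

```python
def cost_per_acceptable_output(price_per_m_output: float,
                               avg_output_tokens: int,
                               acceptance_rate: float) -> float:
    """Cost of one *accepted* output, amortising the runs you reject."""
    cost_per_run = price_per_m_output * avg_output_tokens / 1_000_000
    return cost_per_run / acceptance_rate

# $0.28/M output tokens is the V4-Flash launch price; the incumbent
# price and both acceptance rates below are illustrative only.
flash = cost_per_acceptable_output(0.28, 1500, 0.80)
incumbent = cost_per_acceptable_output(10.00, 1500, 0.90)
print(f"V4-Flash:  ${flash:.6f} per accepted output")
print(f"Incumbent: ${incumbent:.6f} per accepted output")
```

The point of the metric is that a cheaper model with a lower acceptance rate can still win decisively: on these made-up numbers the gap is roughly 30×, so the incumbent’s quality edge would need to be enormous to justify the price.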
3. Use Gemini’s new direct file generation. Google rolled this out globally on 29 April. You can now ask Gemini to produce a .docx, .xlsx, .pptx, PDF or Markdown file directly from a chat prompt and either download it or save it to Drive. This eliminates the “Prompt, then paste into Word, then reformat” workflow that most teams still run. For anyone using Google Workspace, it is a free productivity upgrade available right now. Test it on something real: a one-pager from a research note, a budget table from a transcript, a draft slide outline from a brief.
For leadership teams, there is a fourth thing worth doing. The end of OpenAI–Microsoft exclusivity means your AI procurement is now a genuine competitive process for the first time. If you signed an Azure-OpenAI commitment in 2024 or 2025, ask your team to model what a multi-cloud Bedrock or Vertex deployment looks like commercially. You almost certainly have more leverage on price and terms today than you did last week.
This Week’s Policy & Regulation Brief
Big Tech earnings: $650 billion of AI capex meets the question of revenue
On Wednesday 29 April, Microsoft, Alphabet, Meta and Amazon all reported in the busiest single tech earnings session on record. Microsoft beat with $82.9 billion of revenue, Azure up 40%, and disclosed an AI business now running at $37 billion annualised, up 123% year on year. Alphabet beat on every line, Cloud cleared $20 billion in the quarter for the first time, and the stock rallied. Meta beat on revenue but raised 2026 capex guidance to $125–145 billion and lost roughly 6% after hours. Amazon beat on sales but missed Q2 operating-income guidance, also down after hours. Apple reports today. The combined 2026 AI capex commitment across the four hyperscalers is now over $650 billion, the largest single-year infrastructure build in commercial history. Source: CNBC, Bloomberg, Microsoft and Alphabet earnings releases.
OpenAI misses revenue and user targets, AI infrastructure stocks sell off
The Wall Street Journal reported on Tuesday 28 April that OpenAI has missed multiple internal monthly revenue targets and has fallen short of its goal of 1 billion weekly active ChatGPT users, with CFO Sarah Friar warning internally that the company may struggle to fund committed compute contracts. Oracle fell more than 6%, SoftBank dropped roughly 10% in Asia and Nvidia, AMD and Broadcom all closed 3–5% lower on the day. Friar is also reportedly at odds with Sam Altman over the aggressiveness of the planned 2026 IPO timeline at the company’s $852 billion valuation. The story is significant for two reasons: it is the first crack in the OpenAI growth narrative, and it strengthens the negotiating position of any enterprise sitting in front of an OpenAI commercial team this quarter. Source: WSJ, CNBC, Reuters.
China blocks Meta’s $2 billion Manus deal in first cross-border AI forced unwind
On 27 April China’s National Development and Reform Commission ordered Meta to fully unwind its $2 billion acquisition of agentic AI startup Manus, registered in Singapore but founded by Chinese entrepreneurs. The order was issued under foreign-investment security rules, with the threat of penalties and individual criminal exposure for non-compliance. Manus employees are already integrated into Meta’s Singapore team and Tencent and HongShan Capital have already received deal proceeds, making the practical reversal extraordinarily complex. This is the first time Beijing has forced the unwind of a closed cross-border AI deal, and it sets a precedent any Western buyer of a Chinese-founded AI asset will now have to price in regardless of where the target is incorporated. Source: NYT, Bloomberg, Fortune.
Microsoft and OpenAI dissolve exclusive partnership
On Monday 27 April Microsoft and OpenAI announced a fundamental rewrite of their partnership. Microsoft’s licence to OpenAI IP runs to 2032 but is now non-exclusive. OpenAI can sell its products across AWS, Google Cloud and any other platform. Microsoft will no longer pay revenue share to OpenAI, and OpenAI’s outbound revenue share to Microsoft is capped through 2030. The AGI trigger clauses that previously could have altered the partnership are removed. AWS moved within 48 hours, putting GPT-5.5, GPT-5.4 and Codex into Bedrock as a limited preview, the first time Anthropic-aligned AWS has stocked OpenAI frontier models. For enterprise buyers this is the structural change of the week: the OpenAI–Azure lock-in is over, and procurement leverage just shifted. Source: Reuters, CNBC, NYT, AWS newsroom.
Google commits up to $40 billion to Anthropic; Cohere–Aleph Alpha merge into a $20 billion sovereign-AI play
Google announced on Friday 24 April that it will invest up to $40 billion in Anthropic ($10 billion immediately at a $350 billion valuation, $30 billion contingent on performance milestones), plus 5 gigawatts of compute starting 2027. Combined with Amazon’s $25 billion commitment the previous week, Anthropic has now secured roughly $65 billion of fresh hyperscaler capital in eight days against an annualised revenue run rate it says has crossed $30 billion. Separately, Canada’s Cohere announced a merger with Germany’s Aleph Alpha, valuing the combined sovereign-AI champion at around $20 billion and backed by Schwarz Group’s $600 million Series E. The two deals confirm the same trend: enterprises and governments outside the US are willing to pay a meaningful premium for AI providers they perceive as data-sovereign. Source: Bloomberg, NYT, TechCrunch, Business Wire.
Meta and Microsoft cut roughly 17,000 jobs while AI capex doubles
On 23 April Meta confirmed cutting 8,000 roles, around 10% of headcount, with effect from 20 May, and froze a further 6,000 open roles. Microsoft simultaneously offered voluntary buyouts to about 8,750 US employees, around 7% of US headcount. Both companies tied the cuts explicitly to AI-driven efficiency gains and rising capital expenditure on infrastructure. Across the big four hyperscalers, more than 92,000 tech employees have been let go in 2026 to date. The labour signal is now hard to dismiss as one-off restructuring; it is becoming policy. UK boards should expect the question of “where could AI replace headcount” to land on the next operating-review agenda whether they raise it or not. Source: CNBC, BBC, Fortune.
Intel surges 20%+ on AI chip demand; Nvidia briefly tops $5 trillion
Intel reported Q1 revenue of $13.6 billion, up 7% year on year, with data centre and AI segment revenue up 22% to $5.1 billion. The stock jumped more than 20% the next day and traded above its dot-com era peak; the US government’s roughly 10% stake (acquired for $8.9 billion in 2025) is now worth around $35 billion. Tesla also confirmed Intel, on its 14A process, as the manufacturing partner for the AI5 chip in its $20–25 billion Terafab project with SpaceX and xAI. Nvidia, meanwhile, briefly crossed a $5 trillion market capitalisation on 24 April, the first company to do so, before giving back roughly 1% on the OpenAI revenue news. Source: CNBC, Intel newsroom, Reuters.
Musk versus Altman trial opens, with consequences for the OpenAI IPO
Opening arguments in Elon Musk’s lawsuit against OpenAI, Sam Altman and Greg Brockman began in Oakland federal court on Tuesday 28 April. Musk is seeking $134–150 billion in damages and the dissolution of OpenAI’s for-profit structure, claiming Altman betrayed the founding non-profit mission. Both Musk and Altman testified during the week; Satya Nadella is on the witness list. The trial is expected to run roughly four weeks. A Musk win would force structural changes at OpenAI and almost certainly delay the IPO targeted for late 2026; even a partial win could leak materially damaging internal documents. For enterprises sitting on multi-year OpenAI commitments, this is a litigation-risk line item that needs explicit board acknowledgement. Source: NYT, AP, NPR.
Anthropic Claude Mythos accessed by unauthorised users; supply-chain vulnerability exposed
Anthropic disclosed on 21–22 April that an unauthorised group had accessed Claude Mythos, its most powerful and tightly gated cybersecurity model, via a third-party contractor’s credentials combined with data from a separate breach at AI recruiter Mercor. Mythos is the model Anthropic itself describes as capable of finding and exploiting vulnerabilities in every major operating system and browser, and which prompted an emergency US Treasury–Federal Reserve summit with bank CEOs earlier in the month. There is no evidence yet of malicious use, but the incident exposes a structural problem: even heavily restricted “red-team” access cannot be fully contained once distributed across vendors. For UK CISOs, the lesson is to ask not just which AI vendors you use, but which AI vendors your AI vendors use. Source: Bloomberg, BBC, SiliconAngle, WSJ.
Anthropic acquires Coefficient Bio for $400 million
On 28 April Anthropic confirmed an all-stock acquisition of stealth biotech AI startup Coefficient Bio for around $400 million. Coefficient has fewer than ten employees and was founded by two former Genentech computational drug-discovery scientists. The price tag for an eight-person scientific startup is the headline, but the strategic point is that Anthropic is following OpenAI (Novo Nordisk partnership, GPT-Rosalind) and Google DeepMind into vertical scientific AI, where the competitive moat is access to specialised data and domain expertise rather than raw model scale. UK life-sciences and biotech firms not yet engaged in conversations of this kind should expect the competitive gap to widen quickly. Source: LinkedIn (Steve Torso), Anthropic.
Model & Platform Updates
OpenAI ships GPT-5.5 to all paid ChatGPT and Codex tiers
OpenAI released GPT-5.5 on 23 April, available to Plus, Pro, Business, Enterprise and Education users in ChatGPT and Codex. The model has a 400K context window in Codex, a 1.5× faster “Fast mode”, reported scores of 84.9% on the GDPVal 44-occupation benchmark and 78.7% on OSWorld-Verified for autonomous computer use. Greg Brockman framed it publicly as “a new class of intelligence... way more intuitive”. API access is officially “coming very soon” but not yet live, and OpenAI has classified GPT-5.5 as “High” cybersecurity risk rather than “Critical”. For UK teams already paying for ChatGPT Business or Enterprise, this is a free upgrade worth testing on real work this week. Source: OpenAI blog, TechCrunch, CNBC.
DeepSeek V4 launches open-source, on Huawei silicon, at a fraction of frontier cost
On 24 April Chinese lab DeepSeek released V4-Pro (1.6 trillion parameter mixture-of-experts, 49 billion active, 1 million-token context) and V4-Flash (284 billion parameters total, 13 billion active) as fully open-weight models. Both are optimised to run natively on Huawei Ascend chips, support hybrid attention for long context, and ship at prices roughly 99% below comparable frontier APIs (Flash is around $0.28 per million output tokens). DeepSeek claims V4-Pro is now competitive with GPT-5.4 on coding and reasoning benchmarks. Independent verification is still pending and the political risk of routing data through DeepSeek is non-trivial, but for non-sensitive, high-volume workloads V4-Flash is the most credible cost-disruptive option since the original DeepSeek V3. Source: DeepSeek API docs, WSJ, TechCrunch.
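To put the “roughly 99% below” pricing claim in workload terms, a toy calculation. The $0.28 Flash price is the figure above; the frontier price, document volume and token counts are illustrative placeholders, and input-token and orchestration costs are ignored for simplicity:

```python
PRICE_FLASH = 0.28      # $/M output tokens, V4-Flash launch price (above)
PRICE_FRONTIER = 28.00  # placeholder: a frontier API at ~100x Flash

DOCS_PER_MONTH = 500_000   # illustrative high-volume summarisation load
TOKENS_PER_SUMMARY = 800   # illustrative average output length

def monthly_output_cost(price_per_m: float) -> float:
    """Monthly output-token spend for the workload defined above."""
    return price_per_m * DOCS_PER_MONTH * TOKENS_PER_SUMMARY / 1_000_000

print(f"V4-Flash:     ${monthly_output_cost(PRICE_FLASH):>9,.0f} / month")
print(f"Frontier API: ${monthly_output_cost(PRICE_FRONTIER):>9,.0f} / month")
```

On these assumptions the same half-million-document summarisation load falls from five figures a month to roughly $112, which is why the cost-per-acceptable-output comparison recommended earlier is worth an afternoon of anyone’s time.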
Mistral releases Medium 3.5 with autonomous “Vibe” coding agents
On 29 April French lab Mistral launched Medium 3.5, a 128 billion parameter dense model in public preview, with a 256K context window and 77.6% on SWE-bench Verified. The release is paired with Vibe, a remote cloud coding agent that runs in isolated sandboxes, executes multi-step coding tasks asynchronously and can open GitHub pull requests autonomously. Pricing is $1.50 per million input tokens and $7.50 per million output. For development teams already evaluating Cursor, Codex or Claude Code as autonomous coding tools, Mistral now offers a credible European alternative with stronger data-sovereignty positioning, which matters for any organisation worried about EU AI Act exposure or UK public-sector procurement. Source: mistral.ai.
Gemini gets direct file generation; AWS Bedrock adds OpenAI; Anthropic ships persistent memory
Three smaller-but-immediately-useful product updates landed this week. Google rolled out direct file generation in Gemini globally on 29 April: users can now produce Word, Excel, PowerPoint, PDF and Markdown files directly from a chat prompt and save them to Drive. AWS, on 28 April, made GPT-5.5, GPT-5.4 and Codex available inside Bedrock with enterprise-grade IAM, PrivateLink, guardrails and CloudTrail logging, the first time AWS has stocked OpenAI frontier models. Anthropic moved Claude Managed Agents to public beta with persistent memory: agents now retain learning across sessions in exportable filesystem files, with multi-agent memory sharing, audit logs and rollback. Early enterprise adopters Netflix and Rakuten reported a 97% reduction in first-pass errors in document workflows. Source: Google blog, AWS newsroom, Anthropic.
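For teams who want to kick the tyres on the Bedrock preview, a minimal sketch using boto3’s Converse API. The model identifier below is a guess (AWS had not published IDs for the GPT-5.5 preview at the time of writing; check the Bedrock console), and IAM access to the preview is assumed:

```python
def build_converse_request(prompt: str, model_id: str,
                           max_tokens: int = 1024) -> dict:
    """Assemble the kwargs for the bedrock-runtime Converse API call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": 0.2},
    }

if __name__ == "__main__":
    import boto3  # imported here so the request builder stays testable offline

    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    request = build_converse_request(
        "Summarise the attached board pack in five bullets.",
        model_id="openai.gpt-5.5",  # hypothetical ID -- check the console
    )
    response = client.converse(**request)
    print(response["output"]["message"]["content"][0]["text"])
```

Because the request goes through Bedrock rather than OpenAI directly, the IAM, PrivateLink, guardrails and CloudTrail controls mentioned above apply to it like any other Bedrock workload, which is the practical argument for trying the preview.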
Google TurboQuant: 6× memory reduction and 8× speedup at zero accuracy cost
Presented at ICLR 2026 in early April but gathering enterprise attention this week, Google’s TurboQuant algorithm uses PolarQuant vector rotation and Quantized Johnson-Lindenstrauss compression to quantise the KV cache to 3 bits with no measurable accuracy loss. Result: roughly 6× reduction in memory overhead and 8× speedup in attention computation. The implication for enterprise AI economics is significant. If TurboQuant-class techniques are deployed at scale across hyperscaler inference fleets, the unit cost of running frontier models could fall sharply over the next 12 months, which in turn changes the maths of every per-seat enterprise AI contract you have signed. Worth flagging to your CIO. Source: Google Research, ICLR 2026.
Quick Hits
Nvidia briefly tops $5 trillion market capitalisation: First company in history to do so, on 24 April; the stock has risen 20% in April alone.
Taylor Swift trademarks her voice: Filed three US trademark applications on 28 April covering voice soundbites and Eras Tour imagery, the first attempt by an artist to use trademarked sound marks to defend against AI deepfakes; widely viewed as a test case for celebrity and executive voice IP.
Lovelace AI exits stealth: Founded by Andrew Moore (former head of Google AI), launched “Elemental”, a context-engine builder that sits between AI agents and enterprise data systems for high-stakes industries.
IBM launches Bob: A general-availability AI development partner spanning the full software development lifecycle, internally piloted with 80,000+ IBM employees and now sold as SaaS with a 30-day trial.
Lightelligence IPO debut: Photonics AI chipmaker priced and traded up nearly 400% on 28 April, raising around $310 million; signals investor appetite for optical-interconnect plays beyond the Nvidia trade.
Cursor opens TypeScript SDK: Public beta of an SDK that lets developers build custom coding agents using the same runtime, harness and frontier models that power Cursor itself.
We work with leadership teams to move from experimentation to execution safely, commercially, and at speed. Talk to us.