🌟 Editor's Note: Recapping the AI landscape from 04/21/26 - 04/27/26.

🎇 Welcoming Thoughts

  • Welcome to the 40th edition of KPAI Weekly.

  • What’s included: company moves, a weekly winner, AI industry impacts, practical use cases, and more.

  • I switched to using Claude Code full time with regular Claude on the side.

  • This strategy lets me keep full context on everything I do at all times.

  • Meta partnered with a space startup to bring solar power to its data centers.

  • ChatGPT is being investigated in a Florida shooting case.

  • Anthropic wants to train on song lyrics and will likely be spending more time in the courtroom.

  • We have a cross race investment this week.

  • ^ I am incredibly bullish on both Anthropic and Google.

  • For the most part, a strong week across the board.

  • I need to determine if I want to change anything up once the initial race is over.

  • We’re getting close in the graphic down below.

Let’s get started—plenty to cover this week.

👑 This Week’s Winner: OpenAI // ChatGPT


OpenAI wins a product-heavy week. Between a new frontier model six weeks after the last one, autonomous workspace agents, and a rebuilt image engine, OpenAI covered consumer, enterprise, and developer in one sprint. Here's the recap:

  • GPT-5.5 Launch: OpenAI launched GPT-5.5 just six weeks after 5.4, outscoring Gemini 3.1 Pro and Claude Opus 4.7 on published benchmarks across agentic coding, computer use, and scientific research. Fast pace and positive reviews. Good combo.

  • Workspace Agents: Autonomous agents for Business and Enterprise users that automate tasks across Slack, Gmail, and SharePoint with scheduled runs. Free until May 6, then credit-based. This is interesting; I'm going to try it out soon and am curious how it compares with Claude's agent builder.

  • ChatGPT Images 2.0: Major image gen upgrade across all plans - thinking capabilities, 2K resolution, multilingual support. First OpenAI image model that reasons about what it's creating. DALL-E retiring May 12. Great reviews. One of the best image models out.

OpenAI also launched Codex Labs with Accenture, PwC, Cognizant, Infosys, and Capgemini to scale Codex (OpenAI’s Claude Code) in enterprises. ChatGPT for Clinicians went live as a free tool for verified U.S. physicians. And their Microsoft partnership got restructured: license goes non-exclusive through 2032, revenue share ends, OpenAI can now serve on any cloud.

From Top to Bottom: OpenAI, Google Gemini, xAI, Meta AI, Anthropic, NVIDIA.

⬇️ The Rest of the Field

Who’s moving, who’s stalling, and who’s climbing: Ordered by production this week.

🔴 xAI // Grok

  • SpaceX $60B Cursor Option: SpaceX secured the option to acquire Anysphere (Cursor) for $60B later in 2026, or $10B for the collaboration. Cursor gets Colossus supercomputer access. This would be a fantastic move for xAI. Cursor is just behind Claude Code and Codex IMO.

  • Custom Timelines: Grok-curated 75+ topic feeds on X, letting Premium users pin topic timelines to their home tab. Grok reads every post and assigns labels. Grok is weaving deeper into all facets of X.

  • Free Users Paywalled: Auto and Expert models grayed out for non-paying users, restricted to the Fast model only. All of the NVIDIA5 seem to be cutting down on usage allowances. That will only continue over time.

🟠 Anthropic // Claude

  • Claude Code Postmortem: Anthropic admitted three engineering mistakes caused a monthlong quality decline: dropped reasoning effort, a caching bug, and a response cap. All fixed, and usage limits were reset for all subscribers. Usage seems fine for Claude Code now, but the Claude app still has poor limits, especially in long conversations with lots of context. That's part of the reason I've made a full switch to Claude Code.

  • Secondary Valuation ~$1T: Implied valuation on secondary markets nearly tripled from the $380B primary round, surpassing OpenAI ($880B) on Forge Global. IPO reportedly October 2026. Not a surprise.

  • Pentagon Thaw: Trump told CNBC Anthropic is "shaping up" and a DoD deal is "possible" after CEO Dario Amodei met White House officials. This is a positive.

🟣 Google // Gemini

  • $40B Anthropic Investment: Google committed $10B at a $350B valuation plus up to $30B on milestones, with 5 gigawatts of dedicated compute over five years. CROSS RACE INVESTMENT. I like it for both companies. More capital to Anthropic please.

  • Cloud Next '26: Google released its Gemini Enterprise Agent Platform and new eighth-gen TPUs, and announced a $750M partner fund. Sundar Pichai disclosed that 75% of new code at Google is now AI-generated. Lots of agent-builder platforms are launching across the NVIDIA5. Clear direction signal.

  • Merck $1B Deal: Merck committed up to $1B to make Google Cloud its primary AI anchor, deploying Gemini Enterprise across R&D, manufacturing, and commercial for 75,000 employees. Enterprise win for Gemini.

⚪️ NVIDIA

  • $5 Trillion Market Cap: First record close since October 2025. Shares +4.3% to $208.27, roughly $1T ahead of Alphabet. Triggered by Intel's blowout Q1. No surprises here.

  • Thinking Machines GB300: Former OpenAI CTO Mira Murati's startup became one of the first customers on NVIDIA's next-gen GB300 chip, reporting 2x speed improvements. Cool.

  • China H200 Blocked: Commerce Secretary Lutnick confirmed NVIDIA hasn't sold H200 into China. Beijing blocking imports despite Washington conditionally opening the door. Back and forth we go.

🔵 Meta // Meta AI

  • China Blocks Manus: China's NDRC ordered Meta to unwind its $2B acquisition of AI agent startup Manus on national security grounds, despite Manus relocating to Singapore. Manus is originally out of China, so the acquisition falls under Beijing's purview in some capacity.

  • Employee Keystroke Tracking: Meta is installing tracking software on U.S. employee computers - mouse movements, clicks, keystrokes, screenshots - to train AI agents. No opt-out. This could be called overstepping or it could be called efficiency, depending on who you ask. Good data for sure.

  • Huge AWS Deal: Meta signed a multi-year, multi-billion dollar deal with AWS for tens of millions of Graviton CPU cores. While GPUs are necessary for AI training, lots of agentic task work only requires the older CPUs.

🤖 Impact Industries 🎨

Robotics // Autonomous Patient Transport

BayCare Health System and Gainesville startup Rovex launched a working pilot at Morton Plant Hospital in Clearwater, FL, where autonomous robots are already moving patients between departments. First-of-its-kind U.S. clinical deployment aimed at offsetting the healthcare worker shortage. Not a demo or a press release — these are live in the hospital right now, navigating hallways and elevators autonomously.

Read the Story

Creative // Apple Music AI

Apple Music's Oliver Schusser publicly disclosed that AI-generated tracks now exceed 33% of all new uploads to the platform. That's a striking data point from a streamer that rarely speaks on the issue, and it mirrors Deezer's recently reported 44%. Schusser called for industry-wide consensus on what even counts as AI music before any platform action. The definition problem alone could reshape how streaming royalties work.

Read the Story

💻 Interview Highlight: Jensen Huang with Dwarkesh Patel

Interview Outline: Jensen Huang discusses the "Electrons to Tokens" manufacturing model, where NVIDIA acts as the production plant for digital reasoning. He explains the "Software Skyrocket" theory—the idea that AI agents will exponentially increase the value of existing software—and details the trillion-dollar supply chain moat required to sustain an agentic economy.

About the Interviewee: Jensen Huang is the founder and CEO of NVIDIA and the primary architect of the hardware-accelerated intelligence economy.

Interesting Quote: "The input is electrons, the output is tokens. In the middle is Nvidia. Our job is to do as much as necessary and as little as possible to enable that transformation."

Condensed Interview Highlight — Jensen Huang (The Dwarkesh Podcast)

1. Dwarkesh Patel: Is Nvidia fundamentally making software that other people are manufacturing, and if software gets commoditized, does Nvidia get commoditized?

Jensen Huang: In the end, something has to transform electrons to tokens. The way that you framed the question is my mental model of our company: The input is electrons, the output is tokens. In the middle is Nvidia. Our job is to do as much as necessary and as little as possible to enable that transformation to be done at incredible capabilities.

2. Dwarkesh Patel: Is Nvidia’s big moat really that you’ve locked up many years of these scarce components?

Jensen Huang: It’s one of the things that we can do that is hard for someone else to do. The fact is that Nvidia’s downstream demand is so large, they’re willing to make the investment upstream. Just as there's cash flow, there's supply chain flow. Nobody is going to build a supply chain for an architecture if the business churns are low.

3. Dwarkesh Patel: How will AI change the future for software tool makers and engineers?

Jensen Huang: I think the number of agents is going to grow exponentially, and the number of tool users is going to grow exponentially. Today we’re limited by the number of engineers. Tomorrow, those engineers are going to be supported by a bunch of agents. I think tool use is going to cause the software companies to skyrocket.

4. Dwarkesh Patel: If tokens are just a utility, won't they eventually be commoditized like any other manufacturing output?

Jensen Huang: Making one token more valuable than another is incredibly hard to completely commoditize. Making that token is like making one molecule more valuable than another molecule. The amount of artistry, engineering, science, and invention that goes into making that token valuable—obviously we’re watching it happen in real time.

5. Dwarkesh Patel: What are the next major bottlenecks or "frontiers" you are focused on?

Jensen Huang: We want to build new things like EVs and robots. We want to build AI factories. But you can’t build any of these things without energy. More chip capacity is a 2-3 year problem, but energy infrastructure takes a long time. That is the stuff that is downstream from us that worries me.

👨‍💻 Practical Use Case: Claude Hooks

Difficulty: Advanced

Hooks are if/then rules you set up inside Claude Code. Every time Claude Code takes an action - editing a file, running something, finishing a task - your hooks check in automatically. You set them once and forget about them.

Before actions: Block Claude from doing something you don't want. For example, prevent it from ever touching your production files or running a delete command. The hook catches it before it happens.
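As a sketch of how a "before action" guard could work: Claude Code pipes the pending tool call to your hook command as JSON on stdin, and exiting with status 2 blocks the action and feeds your stderr message back to Claude. The `prod/` check, the `tool_input`/`command` field names, and the script itself are illustrative; verify the payload shape against Anthropic's hooks documentation.

```python
import json
import sys

def should_block(payload: dict) -> bool:
    """Return True if the proposed shell command touches prod/."""
    command = payload.get("tool_input", {}).get("command", "")
    return "prod/" in command

def main() -> int:
    # Claude Code sends the pending tool call as JSON on stdin.
    payload = json.load(sys.stdin)
    if should_block(payload):
        # stderr goes back to Claude; exit code 2 blocks the action.
        print("Blocked: commands touching prod/ are not allowed", file=sys.stderr)
        return 2
    return 0
```

Save it as a script, add `sys.exit(main())` at the bottom, and point a PreToolUse hook at it in your settings file.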

After actions: Automatically clean up after Claude. I use this to auto-format my code every time Claude edits a file, which means I never have to think about it.

On finish: Get notified when Claude is done with a long task. Send yourself a Slack message, a desktop alert, whatever you want. Set it and walk away.

You configure hooks in a settings file and they apply to every session going forward. They're flexible - you can scope them to everything Claude does or only to specific types of actions.

If you remember Skills from Issue 35, hooks are the other side of the coin. Skills tell Claude how to do something - hooks tell Claude what it can't do, and what should happen automatically when it acts. Skills are instructions, hooks are guardrails.
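For concreteness, here's a minimal sketch of what that settings file could look like, with a post-edit formatter and a finish notification. The matcher values and shell commands are illustrative; check Anthropic's hooks documentation for the exact event names and payload fields your version supports.

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "npx prettier --write ." }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          { "type": "command", "command": "osascript -e 'display notification \"Claude is done\"'" }
        ]
      }
    ]
  }
}
```

The PostToolUse entry runs the formatter after every file edit, and the Stop entry fires a macOS notification when Claude finishes a task.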

The main catch is that hooks are local to your machine, so each person on a team has to set up their own. And you'll want to test them before relying on them - a bad hook can interrupt Claude mid-task.

Worth 10 minutes to set up. Start with an auto-formatter and build from there.

Learn More Below ⬇️

💻 Startup Spotlight

Mastra

Mastra AI — TypeScript framework for building production-ready AI agents.

The Problem: Most AI development happens in Python, which is great for research but creates a "wall" when web developers try to ship agents into real-world applications. Prototypes built in notebooks often break in production because they lack professional memory management, deterministic workflows, and the observability needed to scale.

The Solution: Mastra is a "batteries-included" TypeScript framework that gives developers the primitives to build, iterate, and deploy agents at scale. It includes Mastra Studio for observing and evaluating agent performance and Mastra Server for cloud deployment. Because it’s TypeScript-native (not a port of a Python library), it integrates seamlessly into the modern web stack used by the world’s largest engineering teams.

The Backstory: Founded in 2024 by the "Gatsby.js" founding team (Sam Bhagwat, Abhi Aiyer, and Shane Thomas), Mastra is their "second act." They originally set out to build an AI-powered CRM, but they were so frustrated by the existing tools that they built the framework instead. They recently closed a $22M Series A led by Spark Capital in April 2026, bringing their total funding to $35M.

My Thoughts: This is cool. I'm always looking for ways to build better, faster, and cleaner. From what I hear, Mastra is used by Y Combinator companies at the forefront of tech as well as legacy players like PayPal. I'll be using it in the near future.

“It’s not likely you’ll lose a job to AI. You’re going to lose the job to somebody who uses AI.”

- Jensen Huang | NVIDIA CEO

Till Next Time,

Noah from KPAI

Keep Reading