🌟 Editor's Note: Recapping the AI landscape from 04/28/26 - 05/04/26.

🎇 Welcoming Thoughts

  • Welcome to the 41st edition of KPAI Weekly.

  • What’s included: company moves, a weekly winner, AI industry impacts, practical use cases, and more.

  • Elon ranked the AI leaders as Anthropic > OpenAI > Google > Chinese open-source models.

  • He called xAI a much smaller company than the rest.

  • Lots of interesting bites from the OpenAI/xAI trial.

  • Slow news week across the board.

  • Considering updating the newsletter.

  • Will likely make some sections bi-weekly or monthly in order to make updates more meaningful.

Let’s get started—plenty to cover this week.

👑 This Week’s Winner: Google // Gemini


Google is back on top. Between putting Gemini in millions of cars, shipping native file generation, and signing onto classified Pentagon networks, Google had a strong week. Here’s the recap:

  • Gemini Replaces Assistant in Cars: Google is rolling out Gemini to Chevy, Cadillac, Buick, GMC, Volvo, Polestar, and Renault vehicles, with ~4M GM cars eligible. Free-form conversation, real-time route updates, message dictation.

  • Gemini File Generation: Users can now generate downloadable Google Docs, Sheets, Slides, PDF, Word, Excel, CSV, Markdown, and LaTeX files directly from a chat prompt. This has been around for a while in Claude and is making its way around.

  • Pentagon Classified AI Deal: Google is one of seven vendors signed for AI deployment on DoD Impact Level 6/7 classified networks, alongside NVIDIA, OpenAI, Microsoft, AWS, SpaceX, and Reflection AI. Anthropic is a notable exclusion here.

Google also redesigned the Gemini iOS app with Apple Liquid Glass effects and announced Google I/O 2026 for May 19-20 at Shoreline Amphitheatre with a "Code the Countdown" developer campaign.

From Top to Bottom: Google Gemini, OpenAI, xAI, Meta AI, Anthropic, NVIDIA.

⬇️ The Rest of the Field

Who’s moving, who’s stalling, and who’s climbing, ordered by output this week.

🟠 Anthropic // Claude

  • $1.5B Enterprise AI Joint Venture: Anthropic teamed up with Blackstone, Goldman Sachs, and Hellman & Friedman to create a new company that will deploy Claude inside private equity portfolio companies. Should be huge for Claude Enterprise.

  • Claude for Creative Work: Claude can now plug directly into Adobe, Blender, Ableton, and other creative software. Use natural language to control design, 3D modeling, and music production tools. Cool but still needs work; it’s advertised as a Claude feature but is available to many AI tools via MCP.

  • Mythos Regulatory Scrutiny: The White House is weighing whether to lift its "supply-chain risk" designation on Anthropic, while the EU Commission opened its own dialogue about Mythos. Just release it.

🟢 OpenAI // ChatGPT

  • AWS / Bedrock Expansion: OpenAI's models and Codex are coming to Amazon's cloud platform for the first time. Biggest cross-cloud deal since the original Microsoft exclusive. Back to bullish on OpenAI; this should help with stability as well.

  • Advanced Account Security: New security features let users require physical security keys to log in and shorten active sessions. Mandatory for high-risk customers from June 1. Interesting.

  • Missed Internal Goals: Reuters/WSJ reported OpenAI missed internal targets for new users and revenue, prompting a selloff in AI-adjacent names. I’m not concerned; they’re doing fine.

🔵 Meta // Meta AI

  • Q1 2026 Earnings: Revenue $56.31B (+33% YoY), beat estimates. But raised capex to $125-145B. Stock fell ~10% on Apr 30, erasing ~$175B in market cap.

  • Acquired Assured Robot Intelligence: Meta bought a humanoid robotics startup and folded it into its Superintelligence Labs, signaling a push into physical AI. That’s cool, excited to see what Meta can do with physical AI.

  • Muse Spark Rollout: Meta's new AI model is now live across Facebook, Instagram, WhatsApp, Messenger, and the standalone Meta AI app. Usage reportedly up after the update.

⚪️ NVIDIA

  • Nemotron 3 Nano Omni: NVIDIA released an open-source AI model that can process text, images, audio, and video all in one system. Already adopted by Palantir, Foxconn, and Dell. Nice; the more powerful open-source models, the better.

  • Pentagon Classified Networks Deal: NVIDIA is one of seven companies now cleared to run AI on the Pentagon's most sensitive classified networks. Bloomberg: deal "gives far greater license" than prior agreements. Kinda surprised the gov’t seems to be moving quickly with AI adoption. A positive.

  • B300 Server Prices Spike in China: NVIDIA's latest AI servers are selling for ~$1M each on the Chinese black market as supply tightens under US export curbs. Probably no clarity on NVIDIA x China any time soon. Demand is still there.

🔴 xAI // Grok

  • Custom Voices + Voice Library: Developers can now clone a voice from about a minute of audio through Grok's API. 80+ pre-built voices across 28 languages also available. Pretty cool.

  • Musk v. OpenAI Trial: Musk conceded under cross-examination that xAI "partly" used OpenAI to train Grok. A filing also revealed Musk tried to settle days before trial with threatening texts. Not surprising, OpenAI was the runaway leader to start.

  • SpaceX IPO Could Create $75B: SpaceX's planned IPO could generate up to $75 billion, with the Grok/xAI integration central to the valuation pitch.

🤖 Impact Industries 🎓

Robotics // AI Takes Control of Your Body

MIT Media Lab researchers built Human Operator, an open-source tool that lets AI briefly take control of your fingers and wrist through electrical muscle stimulation. You speak a command, Claude's vision model interprets the task, and the system stimulates the right muscles to guide your hand through motions you couldn't do on your own. Think physical skill transfer: the AI does it through you, not for you.

Read the Story

Education // Most Students Use AI as a Shortcut

USC's Center for Generative AI and Society published a global, multi-survey study finding most students use ChatGPT-style tools as shortcuts rather than learning aids, unless professors actively guide thoughtful usage. The study flagged widening equity gaps between students with and without access to paid AI tools, and called for institutional frameworks before the gap becomes structural.

Read the Story

💻 Interview Highlight: Sam Altman (Nothing but Tech Pod)

Interview Outline: Sam Altman discusses the fundamental shifts in intelligence and the global economy. He explores why "prediction is very close to intelligence" and the massive impact of setting a model's personality. He also outlines OpenAI’s strategic pivot toward Personal AGI and robotics to avoid a "nightmare scenario" where humans act as physical actuators for digital systems.

About the Interviewee: Sam Altman is the CEO of OpenAI and a former president of Y Combinator. He has spent over 20 years focused on building artificial intelligence and startup ecosystems, positioning AI as the ultimate general-purpose technology to enable radical human agency.

Interesting Quote: "Probably the thing we do that has had the most impact on the world is how we set the ChatGPT personality."

Condensed Interview Highlight — Sam Altman (Nothing But Tech)

1. Interviewer: How do you define intelligence in the context of these predictive models?

Sam Altman: Ilya Sutskever once said a simple sentence that really stuck in my mind: "prediction is very close to intelligence." If you can compress all the information about the world into its smallest representation and then predict the thing that's going to happen next, you understand it in a deep way. Through next-token prediction, these models are learning to reason—to make sense of the data they have seen and complete what comes next even if they haven't seen it before.

2. Interviewer: How do you navigate the responsibility of shaping ChatGPT's "personality"?

Sam Altman: Probably the thing we do that has had the most impact on the world is how we set the ChatGPT personality. Historically, the field hasn't treated this with the same rigor and scientific focus we have on other risks. I’ve asked people from great spiritual traditions and clinical psychologists to write "instruction manuals" for how to behave to maximize people's fulfillment, personal growth, and accomplishment.

3. Interviewer: What do you think about the narrative that AI will wipe out 50% of jobs?

Sam Altman: I think that narrative is tone-deaf. Jobs will change, but a thing someone said to me recently really stuck: they can use the new model to accomplish in an hour what would have taken weeks two years ago, yet they have never been busier in their life. I don't think we're all going to sit around in a life without meaning; it’s just going to be different.

4. Interviewer: What are the three breakthroughs that will define AI’s future?

Sam Altman: First is accelerating research—scientific understanding across physics and biology. Second is accelerating the economy—automated startups and making companies more productive. Third is "Personal AGI"—a model working for me with my whole context and life, spending compute to make my life better.

5. Interviewer: Why has robotics become such a high priority for OpenAI recently?

Sam Altman: We live in the physical world and need a factory of robots that can reconfigure itself. A very sad future—a nightmare scenario—would be where computers can do incredible things, but because we didn't figure out robots, humans have to run around as the "physical actuators" for the AGI.

👨‍💻 Practical Use Case (Issue 24 Revisited): RAG - Retrieval Augmented Generation

Difficulty: Mid-level

RAG stands for Retrieval-Augmented Generation. It’s something we’ve touched on here and there but it’s never had its own Practical Use Case display. At a high level, RAG is a way to let an AI model answer questions using your own data instead of relying only on what it learned from the outside world. Before generating a response, the model first retrieves relevant information from a trusted source, then uses that context to produce a grounded answer.

In practice, RAG systems can scan through hundreds or even thousands of your files, from text documents to PDFs to images and more.

RAG shows up most often in situations where accuracy matters and hallucinations are costly, such as:

  • Internal knowledge bases and SOPs

  • Customer support tools that need up-to-date answers

  • AI tools that need to reference policies, docs, or contracts

Think of RAG as a middle ground between a raw chatbot and a fully custom AI system. You’re not retraining the model, and you’re not pasting context manually every time. You’re giving the AI a way to look things up before it speaks.

This approach is becoming the default for enterprise AI applications because it keeps responses tied to real sources and reduces guesswork. If you’ve ever thought, “This would be useful if the AI actually knew our documents,” RAG is usually what’s missing behind the scenes.
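The retrieve-then-generate loop described above can be sketched in a few lines. This is a toy illustration, not a production pattern: the keyword-overlap scoring, the sample documents, and the prompt template are all stand-ins for what a real system would do with embeddings, a vector database, and an actual model call.

```python
# Minimal RAG sketch: retrieve the most relevant documents, then build a
# grounded prompt for the model. Word overlap stands in for embedding search.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, docs, k=2):
    """Return the k documents sharing the most words with the query."""
    q = tokenize(query)
    return sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)[:k]

def build_prompt(query, docs):
    """Ground the model by prepending the retrieved context to the question."""
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days of approval.",
    "Our office is closed on federal holidays.",
    "Support tickets are answered within 24 hours.",
]

top = retrieve("How long do refunds take to process?", docs)
print(build_prompt("How long do refunds take to process?", top))
```

The prompt that comes out is what you would hand to the model: the answer stays tied to your documents rather than the model's memory, which is the whole point of RAG.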

Issue 41 Update: While still useful, the need for custom RAG systems has gone down significantly. Claude Projects, GPT Projects, and other tools let you upload documents and get context-aware answers without building anything. That covers a lot of RAG use cases from a year ago. Where RAG still wins: scale (thousands of documents, not dozens), real-time data that changes frequently, and multi-user systems. MCP also allows agents to pull from live sources on demand instead of pre-indexing into a vector database. Start with project files. If you outgrow them, that's when RAG earns its place.

Learn more below ⬇️

🐶 Startup Spotlight

Familiar Machines & Magic

Familiar Machines & Magic — The AI Pet of the future.

The Problem: Most home robots are either boring "chore-bots" like vacuums or short-lived toys that lose their charm in a week. They lack the physical presence and intelligence required to solve the "loneliness epidemic" or build a real emotional bond.

The Solution: A plush, four-legged robot designed for companionship. It uses a behavior engine to "read the room," responding to your facial expressions and gestures without needing to talk. It learns your habits and acts as a supportive, non-judgmental presence in the home.

The Backstory: Founded by iRobot co-founder Colin Angle (the man behind the Roomba). He assembled a team of Disney and MIT veterans to shift robotics from "functional tools" to "artificial life." The startup emerged from stealth on May 4, 2026, aiming to own the companion robot market.

My Thoughts: This is crazy, just get a dog!

“It’s not likely you’ll lose a job to AI. You’re going to lose the job to somebody who uses AI.”

- Jensen Huang | NVIDIA CEO

Till Next Time,

Noah from KPAI

Keep Reading