🌟 Editor's Note: Recapping the AI landscape from 04/14/26 - 04/20/26.
🎇✅ Welcoming Thoughts
Welcome to the 39th edition of KPAI Weekly.
What’s included: company moves, a weekly winner, AI industry impacts, practical use cases, and more.
Interviews are back this week!
A good one today with a successful Cleveland founder.
I’ve been making templates and writing SOPs all week.
Getting ready to bring in my first agentic employee.
AI Val Kilmer appeared in a movie trailer posthumously.
Tim Cook steps down from Apple; I wonder what the AI implications will be.
Meta is training and hiring people for data center jobs.
Lots of talk about agentic AI / API systems across the board.
Let’s get started—plenty to cover this week.
👑 This Week’s Winner: Anthropic // Claude
Anthropic doesn't let up. Between a $25B Amazon expansion, a new flagship model, and the launch of Claude Design, Anthropic closed the week looking less like a startup chasing OpenAI and more like the company setting the pace. Here's the recap:
Amazon $25B Investment: Amazon will invest another $25B in Anthropic: $5B now at the $380B valuation, and $20B tied to milestones. This is on top of the $8B previously invested. Anthropic committed to $100B+ in AWS spend over 10 years and up to 5 gigawatts of capacity. Thinking this will give Anthropic the capital it needs to clean up product and usage!
Claude Opus 4.7: The new flagship model hit 87.6% on SWE-bench Verified and a leading 64.3% on SWE-bench Pro, beating GPT-5.4 and Gemini 3.1 Pro. Adds a new high-effort tier. Mixed results on this so far; by no means as impressive a launch as anticipated. I’ve only used it a little because of usage limits.
Claude Design: Anthropic Labs launched Claude Design, a tool for generating prototypes, slides, and one-pagers from prompts. Powered by Opus 4.7 with Canva export. Figma stock dropped ~7% the same day. This is cool. Still think AI isn’t there yet on design, but this is a noticeable step forward.
Anthropic also appointed Novartis CEO Vas Narasimhan to the board, tipping Long-Term Benefit Trust directors to a majority for the first time. Claude Code got cloud-hosted Routines plus a full desktop redesign with integrated terminal, file editor, and multi-session sidebar. Bloomberg also reported investor offers valuing the company at ~$800B, more than doubling February's $380B mark.

From Top to Bottom: OpenAI, Google Gemini, xAI, Meta AI, Anthropic, NVIDIA.
⬇️ The Rest of the Field
Who’s moving, who’s stalling, and who’s climbing, ordered by production this week.
🟢 OpenAI // ChatGPT
GPT-Rosalind Launch: OpenAI's first vertical frontier model, built for biology and drug discovery. Launched with Amgen, Moderna, Thermo Fisher, and the Allen Institute. Awesome! Top use case.
Codex "For Almost Everything": Codex now does background computer use on Mac, in-app browsing, 90+ plugins, and scheduled automations. 3 million weekly active developers. Codex is trying to gain even more market share on Claude Code. I think it’s succeeding.
Cerebras $20B+ Deal: OpenAI committed $20B+ to Cerebras over three years, with warrants convertible to a 10% stake if spending hits $30B. Also fronting ~$1B for data center buildout.
🟣 Google // Gemini
Gemini Mac App: First native Gemini desktop app for macOS, built 100% in Swift. Option+Space shortcut, screen-sharing, local file analysis. Last of the big three to ship a Mac app. Good move. I probably won’t use it.
Industrial Robotics: DeepMind's new embodied reasoning model with Boston Dynamics adds multi-view perception and the first-ever ability to read industrial gauges. Good for industrial robotics. Google will be a leader here.
Chrome Skills Library: Gemini in Chrome added Skills — save custom prompts as one-click reusable workflows across tabs. Nice, this is one of the most used features in Claude / Claude Code.
⚪️ NVIDIA
Jensen’s Roadmap: Huang's 103-minute Dwarkesh Patel interview laid out the roadmap: Vera Rubin in 2026, Ultra in 2027, Feynman in 2028, with ~10x token-cost reduction each generation. That’s encouraging. May spotlight this next week.
Quantum AI Models: First open-source quantum AI models, launched on World Quantum Day. Decoders run 2.5x faster and are 3x more accurate than alternatives. Harvard, Fermilab, and the UK's NPL are early adopters. Sweet, I still need to learn more about quantum.
Cadence Physics Partnership: Cadence and NVIDIA integrating physics simulation with NVIDIA AI for robotics training. Cadence shares rose over 4%. Cool, mass simulations are proving to be a powerful AI learning use case.
🔴 xAI // Grok
App Store Near-Removal: NBC obtained a letter revealing Apple privately threatened to pull Grok in January over sexualized deepfakes. Apple rejected xAI's first fix before approving a later submission. Good, seems the issue has been mostly resolved.
Grok 4.3 Soft-Launch: Grok 4.3 quietly dropped to SuperGrok users with native PDF, slide, and spreadsheet generation plus video input. Most of this is available elsewhere; video input is unique. xAI may win on cost, which shouldn’t be overlooked.
Grok Speech APIs: Standalone Speech-to-Text and Text-to-Speech APIs launched at $4.20 per million characters, roughly 86–92% below OpenAI and ElevenLabs. See above comment.
🔵 Meta // Meta AI
Broadcom MTIA Partnership: Meta expanded its custom-silicon deal with Broadcom through 2029, starting with 1GW and scaling to multi-gigawatts by 2027.
AI VP Departure: VP of AI Infrastructure Engineering left after nearly a decade, following Yann LeCun's November exit. A notable departure in the new era of Meta. Interesting; it could be an old-guard thing, or the vision may just not be there at Meta. I’d lean toward the latter.
EU WhatsApp Ruling: EU regulators said Meta's WhatsApp fee structure for rival AI assistants appears anticompetitive, signaling interim measures to restore third-party access. Makes sense.
🤖 Impact Industries 📢
Robotics // 100 Million Plates Served
Chef Robotics hit 100 million food servings in production — an order of magnitude more than every other food robotics company combined. The milestone matters because food is one of the hardest domains for physical AI: organic, deformable, and impossible to simulate. Each real-world deployment feeds Chef's training data flywheel, making the models better with every serving. Customers include Amy's Kitchen, a top airline caterer, and one of the largest school lunch providers in the U.S.
Read the Story
Marketing // Answer Engine Optimization
HubSpot launched AEO (the same concept as GEO) on April 14, the first major CRM product that tracks how your brand appears inside ChatGPT, Gemini, and Perplexity. The backstory: organic traffic for HubSpot customers is down 27% year-over-year as buyers skip search entirely and ask AI instead. The tool uses first-party CRM data to predict what prompts your actual customers are typing, scores sentiment and competitor share of voice, and delivers recommendations. Available standalone for $50/month.
Read the Story
🎙 Weekly Interview: 10 Minutes With John Knific

John Knific
🏠 Background: John Knific is a Founding Partner at K2 Venture Partners and a serial entrepreneur who co-founded and exited two venture-backed SaaS companies, Wisr and DecisionDesk. Based in Cleveland, he holds a degree from Case Western Reserve University and has spent nearly 20 years at the intersection of product innovation and engineering execution.
💼 Work: At K2 Venture Partners, John serves as a "technical co-founder" for early-stage startups, specializing in zero-to-one product strategy and engineering execution. He is currently building out an "AI-First Office".
🚀 Quote: “We are a prototype of a full AI-first office where every employee, even if they're not a coder, is working out of repositories to create a shared office brain.”
Condensed Interview — John Knific (K2 Venture Partners)
1. John, tell me where you are using AI the most professionally? What’s one of your top use cases?
It’s hard to think of a use case that AI isn't touching for us right now. I use it to distill strategy meetings into market research and functional requirements. We are at a point where 90% of our code is generated with AI, but I treat it like a manufacturing line. I use AI as the machines along that assembly line, but I am checking the outputs and inputs between each one to ensure the context is correct.
2. What does your AI tool stack look like? Is it primarily Claude Code, or have you used Codex as well?
It is a combination of Claude Code and Codex. I personally use Claude Code in the terminal much more for production, but we also wrote our own product management suite called Meridian that gives clients a web interface to hook into that assembly line. We often do "dueling wizard," where we’ll do a few turns of planning through Claude Code and then have Codex audit it. Codex does a nice job as the "thinking adult in the room" for those audits.
3. What advice do you have for somebody entering the job market right now?
You have to find where you have an intersection of passion and a deep skill you are willing to be in the top 10% of. Then, figure out a way to amplify your impact by learning that skill with AI. K2 is actually hiring engineers right out of school because they are great systems thinkers and problem solvers who aren't "set in their ways". You can position your "eyes wide open" perspective as an advantage if you are hungry enough about it.
4. Talk to me about what excites you about the future of AI? What’s one thing you’re looking forward to seeing grow in the next five years?
I’m excited about the "AI-first office". We are prototyping a world where every employee, even non-coders, works out of repositories and shared agents to create an "office brain". We’re in this messy phase right now, but as organizations learn how to "adult" with this technology, it creates efficiencies that unlock the ability for more people to do strategic work—thinking *on* the business rather than *in* it.
5. Is there anything else you wanted to touch on today that we didn’t get to?
Just the importance of childlike curiosity. My 7-year-old used a prototyping app to build a video game and got it to connect to his Switch controller just by asking the AI questions. He had no prior context or skills; he was just curious. The biggest step is getting over the fear hurdle and simply being willing to try things. Everything else is downhill from there.
👨‍💻 Practical Use Case: Session Memory
Difficulty: Mid-Level
If you've used AI agent assistants (not just LLMs) for anything longer than a session, you've hit the wall: the model forgets everything between sessions. Your carefully built context vanishes the moment you close the chat. This is the session memory problem, and it's becoming a bigger deal as people move from casual use to running AI as a daily workflow tool.
Session memory is the practice of giving your AI assistant a persistent knowledge layer so it remembers who you are, what you're working on, and what decisions you've already made — across every conversation. Here's how it works:
Daily logs: A simple markdown file for each day that captures what happened — meetings, decisions, tasks, context from conversations. This is your raw record.
Long-term memory file: A curated synthesis of the daily logs covering key people, active projects, lessons learned, and standing decisions. Your assistant reads this at the start of every session to orient itself.
Cold start fix: Every API call and every new session starts from zero unless you build this layer. Point your assistant at the memory files and it carries context without you re-explaining everything.
This is particularly useful in Claude Code, where you can reference your memory files and carry context across coding sessions. But the same principle applies anywhere you're using APIs or firing up new sessions in an assistant-style role — automation pipelines, research workflows, personal agents.
I've been thinking about this more because usage in long chats has skyrocketed. One message in a long Opus conversation recently consumed 100% of my usage. That's pushing me toward running Claude via API on a more full-time basis, where session memory becomes essential. The beauty is that it's all flat markdown files. No database, no special tooling. Just structured text that any model can read.
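The whole layer really is just flat markdown files, so assembling it at session start takes only a few lines. Here's a minimal sketch of that pattern; it isn't tied to any particular SDK, and the file layout (`MEMORY.md`, a `logs/` folder of daily files) and the `build_context` helper are my own illustrative choices:

```python
from pathlib import Path

def build_context(memory_dir: str, recent_days: int = 3) -> str:
    """Assemble a session-memory preamble from flat markdown files.

    Reads the curated long-term memory file plus the most recent
    daily logs, and returns one string to hand to a new session.
    """
    root = Path(memory_dir)
    parts = []

    # Long-term memory: key people, active projects, standing decisions.
    longterm = root / "MEMORY.md"
    if longterm.exists():
        parts.append("## Long-term memory\n" + longterm.read_text())

    # Daily logs: raw record of recent meetings, decisions, and tasks.
    logs = sorted((root / "logs").glob("*.md"))[-recent_days:]
    for log in logs:
        parts.append(f"## Daily log: {log.stem}\n" + log.read_text())

    return "\n\n".join(parts)
```

At the start of a session you'd pass the returned string in as the system prompt or first message, and the assistant orients itself without you re-explaining anything.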
Learn More Below ⬇️
🧠 Startup Spotlight

Hermes Agent
Hermes Agent — The AI Agent That Gets Smarter the Longer It Runs.
The Problem: Most AI agents start every session from scratch. They execute tasks but don't learn from what they've done. Yesterday's context is gone today, and customizing behavior means writing config files by hand.
The Solution: Hermes Agent is an open-source, self-hosted AI agent with a built-in learning loop. When it completes a task, it automatically writes a reusable skill, stores the outcome in memory, and adjusts its approach next time. Runs on your own server, works across 15+ platforms (Telegram, Slack, Discord, WhatsApp, email), and supports 200+ models with no lock-in.
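To make the learning loop concrete, here's a toy sketch of the pattern — complete a task, write the approach back as a reusable skill, store the outcome in memory. This is my own conceptual illustration, not Hermes Agent's actual implementation; the `Skill` and `Agent` names are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    steps: list  # reusable recipe distilled from a completed task

@dataclass
class Agent:
    skills: dict = field(default_factory=dict)
    memory: list = field(default_factory=list)

    def run(self, task: str) -> str:
        # Reuse a stored skill when one matches; otherwise work from scratch.
        if task in self.skills:
            outcome = f"{task}: replayed {len(self.skills[task].steps)} known steps"
        else:
            steps = [f"plan {task}", f"execute {task}"]
            # Learning loop, part 1: write the approach back as a skill...
            self.skills[task] = Skill(task, steps)
            outcome = f"{task}: solved from scratch"
        # ...part 2: store the outcome so future runs start with context.
        self.memory.append(outcome)
        return outcome
```

The second time the agent sees the same task it replays the saved skill instead of re-deriving it, which is the "gets smarter the longer it runs" claim in miniature.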
The Backstory: Built by Nous Research, the lab behind the Hermes model family (33M+ downloads). Co-founded by Jeffrey Quesnelle, the team started as a Discord community in 2022 and spent years fine-tuning open-source models before building the agent on top. Launched February 2026, hit 50,000+ GitHub stars in 46 days — faster than OpenClaw. MIT licensed, fully open source.
My Thoughts: Going to try this out in the near future. It’s similar to OpenClaw, and it’s exactly the direction AI is moving. Context and memory are incredibly important and at the forefront of the AI race. Especially as usage limits climb, finding ways to preserve context without overextending general LLM chats is key, similar to what I described in the Practical Use Case. It’s open source, so it’s free aside from likely high API costs.
“It’s not likely you’ll lose a job to AI. You’re going to lose the job to somebody who uses AI.”
- Jensen Huang | NVIDIA CEO
Till Next Time,
Noah from KPAI

