EP 21 Kimi K2 Thinking, The AI Bubble, Nvidia’s Future, and LangChain Experiments

Topics: AI Market and Economy, Advanced AI Engineering, AI Tooling and Experimentation, Nvidia CUDA, LangChain, LangSmith, Microsoft Copilot, The Build
Tags: artificial-intelligence, ai-development, machine-learning, startup-strategy, investment-analysis, nlp, ai-tools, software-engineering

Key Takeaways

Business

  • Michael Burry's short positions against Nvidia and Palantir signal skepticism about current AI market valuations.
  • Discussion on whether the AI hype is a speculative bubble or an evolution into a new economic paradigm.
  • The shift towards modular AI challenges the entrenched CUDA lock-in, potentially reshaping competitive advantages.

Technical

  • Google’s Nested Learning and Anthropic’s interleaved thinking demonstrate advances in AI cognitive architectures.
  • Building AI copilots and Model Context Protocol (MCP) servers showcases practical applications of complex AI systems.
  • LangSmith experiments with evaluators and continuous optimization illustrate ongoing improvements in AI model feedback loops.
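The evaluator feedback loop mentioned above can be sketched as a plain function in the shape LangSmith accepts for custom evaluators: a callable that compares a run's outputs against reference outputs and returns a score dict. The `exact_match` name and the sample data here are illustrative, not details from the episode:

```python
# A minimal custom-evaluator sketch (assumed names, no LangSmith client
# or network calls -- just the scoring function and a local loop).

def exact_match(outputs: dict, reference_outputs: dict) -> dict:
    """Score 1.0 when the model's answer matches the reference exactly."""
    score = float(outputs.get("answer") == reference_outputs.get("answer"))
    return {"key": "exact_match", "score": score}

# Local stand-in for a dataset: (model outputs, reference outputs) pairs.
examples = [
    ({"answer": "Paris"}, {"answer": "Paris"}),
    ({"answer": "Lyon"}, {"answer": "Paris"}),
]

results = [exact_match(out, ref) for out, ref in examples]
mean_score = sum(r["score"] for r in results) / len(results)
```

In a real setup the same function would be passed to LangSmith's evaluation runner over a hosted dataset; the point of the loop is that aggregate scores like `mean_score` feed continuous optimization of prompts and models.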

Personal

  • Understanding complex AI concepts like nested learning broadens perspectives on AI capabilities and future directions.
  • Engaging with AI experimentation tools encourages continuous learning and adaptation in a rapidly evolving field.
  • Insight into market and engineering dynamics helps align personal development with industry trends.

In this episode of The Build, Cameron Rohn and Tom Spencer dig into Kimi K2 Thinking, the AI bubble, Nvidia’s future, and hands-on LangChain experiments as they evaluate practical paths for builders in AI. They begin by unpacking AI agent development and architecture, contrasting agent orchestration with classic microservice patterns and highlighting LangChain and LangSmith as toolchains for prototyping complex agent workflows. The conversation then shifts to developer tools and workflows, where Vercel deployment patterns, Supabase for realtime databases and authentication, and MCP tools for observability and developer velocity are examined through concrete examples. They explore memory systems and authentication strategies, debating vector store placement, session-based memory, and the trade-offs between local caches and managed services. They then turn to building-in-public strategies, assessing how open source, community feedback, and transparent roadmaps accelerate iteration and monetization for early-stage startups. The discussion closes with entrepreneurship insights: product velocity, go-to-market choices, and how to align technical architecture with sustainable business models. The episode leaves developers and founders with a forward-looking mandate to iterate rapidly on architecture and tooling while public-facing builds guide product-market fit and long-term scale.
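The session-based memory trade-off discussed in the episode, a bounded local cache versus a managed vector store, can be sketched in a few lines. The class and method names below are hypothetical illustrations, not an API from LangChain or the episode:

```python
from collections import defaultdict, deque

class SessionMemory:
    """Per-session conversation buffer: a bounded local cache,
    in contrast to a managed vector store. Hypothetical sketch."""

    def __init__(self, max_turns: int = 20):
        # Each session gets its own deque; maxlen evicts the oldest
        # turn automatically once the cap is reached.
        self._sessions = defaultdict(lambda: deque(maxlen=max_turns))

    def append(self, session_id: str, role: str, content: str) -> None:
        self._sessions[session_id].append({"role": role, "content": content})

    def history(self, session_id: str) -> list:
        return list(self._sessions[session_id])

mem = SessionMemory(max_turns=2)
mem.append("s1", "user", "hi")
mem.append("s1", "assistant", "hello")
mem.append("s1", "user", "bye")  # the oldest turn ("hi") is evicted
```

The design choice this illustrates: a local cache like this is fast and free but loses everything on restart and cannot do semantic recall, which is where a managed vector store earns its operational cost.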