EP 21 Kimi k2 Thinking, The AI Bubble, Nvidia’s Future, and LangChain Experiments

AI Market Dynamics, Advanced AI Engineering, AI Tooling and Experimentation, Nvidia CUDA, LangChain, LangSmith, Microsoft Copilot, The Build Podcast, artificial-intelligence, ai-development, machine-learning, software-engineering, ai-bubble, startup-strategy, langchain, nvidia

Key Takeaways

Business

  • Michael Burry's short positions on Nvidia and Palantir highlight market skepticism despite AI hype.
  • Discussion on whether the current AI frenzy represents a speculative bubble or a fundamental shift toward a new economic model.
  • Breaking Nvidia's CUDA lock-in could democratize AI development and reduce vendor dependency.

Technical

  • Exploration of Google's Nested Learning and Anthropic's interleaved thinking as novel AI training methodologies.
  • Building AI copilots and modular MCP servers demonstrates scalable AI assistant architectures.
  • LangSmith's continuous optimization and evaluators enable more efficient AI model iteration and experimentation.
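The evaluator pattern behind that last point can be sketched in plain Python. This is a hypothetical illustration of the general idea, not the LangSmith SDK itself (which wraps this loop with tracing, hosted datasets, and dashboards); all function and variable names here are made up for the example.

```python
# Minimal sketch of an evaluation loop: score model outputs against
# references, then use the aggregate score to drive iteration.
# (Illustrative only; not the real LangSmith API.)

def exact_match_evaluator(output: str, reference: str) -> float:
    """Score 1.0 when the model output matches the reference (case-insensitive)."""
    return 1.0 if output.strip().lower() == reference.strip().lower() else 0.0

def run_eval(model, dataset, evaluator) -> float:
    """Run a model over (input, reference) pairs and average the scores."""
    scores = [evaluator(model(inp), ref) for inp, ref in dataset]
    return sum(scores) / len(scores)

# Stand-in "model" and a tiny dataset, purely for illustration.
dataset = [("2+2", "4"), ("capital of France", "Paris")]
model = lambda q: {"2+2": "4", "capital of France": "paris"}.get(q, "")
score = run_eval(model, dataset, exact_match_evaluator)
```

In a real setup the evaluator might be another LLM judging quality rather than an exact-match check, but the shape of the loop, score, compare, iterate, is the same.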

Personal

  • Understanding complex AI market trends enhances strategic thinking for AI practitioners.
  • Experimentation with cutting-edge tools like LangChain encourages ongoing learning and adaptation.
  • Balancing hype with practical engineering fosters a grounded approach to AI development.

In this episode of The Build, Cameron Rohn and Tom Spencer dig into AI development, agent architectures, and building-in-public strategies. They frame the conversation around Kimi k2 Thinking, the current AI bubble, and Nvidia's future, noting intense product velocity and the pressure it creates for reliable authentication and memory systems.

The conversation then shifts to developer tooling and practical workflows, with concrete references to LangChain experiments, LangSmith for orchestration and observability, and MCP tools for iterative agent testing. Next they explore technical architecture decisions: in-memory vs. persistent memory strategies, trade-offs in authentication models, and deployment patterns using Vercel and Supabase for serverless frontends and realtime backends. They also discuss AI agent development and orchestration, and how different development approaches shape maintainability and scaling.

Alongside the technical depth, they address entrepreneurship: monetization strategies, building in public to gather feedback, open-source community dynamics, and the startup KPIs that matter. They close with developer-focused takeaways and a forward-looking call for builders to iterate publicly, instrument memory and auth carefully, and leverage tooling like LangSmith, Vercel, Supabase, and MCP tools to responsibly accelerate AI productization.
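The in-memory vs. persistent memory trade-off the hosts compare can be made concrete with a small sketch. The class and method names below are illustrative, not from LangChain, Supabase, or any specific framework; SQLite stands in here for whatever durable backend (e.g. a hosted Postgres) an agent might actually use.

```python
# Sketch of the two agent-memory strategies: ephemeral process memory
# vs. a persistent store that survives restarts. Names are hypothetical.
import sqlite3

class InMemoryStore:
    """Fast, zero-setup memory that vanishes when the process exits."""
    def __init__(self):
        self._data = {}

    def save(self, key: str, value: str) -> None:
        self._data[key] = value

    def load(self, key: str):
        return self._data.get(key)

class SQLiteStore:
    """Persistent memory that survives restarts, at the cost of I/O and schema upkeep."""
    def __init__(self, path: str = ":memory:"):
        self._conn = sqlite3.connect(path)
        self._conn.execute(
            "CREATE TABLE IF NOT EXISTS memory (k TEXT PRIMARY KEY, v TEXT)"
        )

    def save(self, key: str, value: str) -> None:
        self._conn.execute(
            "INSERT OR REPLACE INTO memory VALUES (?, ?)", (key, value)
        )
        self._conn.commit()

    def load(self, key: str):
        row = self._conn.execute(
            "SELECT v FROM memory WHERE k = ?", (key,)
        ).fetchone()
        return row[0] if row else None

# Both stores expose the same save/load interface, so an agent can swap
# backends without changing its memory-access code.
store = SQLiteStore()
store.save("user:42:pref", "dark-mode")
```

Keeping the two behind one interface is what makes the trade-off an architecture decision rather than a rewrite: start in-memory for iteration speed, swap in the persistent store once sessions need to survive deploys.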