Eye On A.I.

Craig S. Smith
Latest episode

327 episodes

  • Eye On A.I.

    #327 Baris Gultekin: The Next Phase of AI - Agents That Understand Your Company's Data

    19.03.2026 | 42 min.
    This episode is sponsored by Modulate.
    Meet Velma, voice AI that detects tone, intent, and stress: http://preview.modulate.ai
     
    Baris Gultekin, Head of AI at Snowflake, breaks down how enterprise AI is actually being built, deployed, and scaled today. From running AI directly inside governed data environments to enabling natural language access across entire organizations, this conversation explores the shift from experimentation to real-world impact.
     
    You'll learn why Snowflake's core philosophy centers around bringing AI to the data, how data agents are transforming decision-making across teams, and what it takes to build trustworthy AI systems with governance, guardrails, and high-quality retrieval at the core.
     
    Baris also shares how leading companies are already saving thousands of hours through AI-driven automation, why culture and leadership determine AI success, and what the future looks like as agents move from pilots to full-scale production.
     
    If you want to understand where enterprise AI is actually headed and what separates hype from real execution, this episode breaks it down.
     
    (00:00) The Evolution of Snowflake AI
    (01:40) Baris Gultekin: Background & AI Mission
    (02:59) Why AI Must Run Next to Data
    (04:29) Inside Snowflake's AI Infrastructure
    (09:08) Model Choice vs Product Layer Strategy
    (12:16) Building Trust: Governance, Guardrails & Quality
    (16:01) How Enterprise Agents Are Built & Orchestrated
    (20:10) AI Adoption Across the Entire Organization
    (24:39) Reasoning vs Retrieval: What Matters More
    (27:43) Real Use Case: Faster Decision-Making with AI
    (31:44) AI as a Co-Pilot for Leaders
    (36:52) Preparing Data for AI at Scale
    (38:46) What the AI Data Cloud Really Means
  • Eye On A.I.

    #326 Zuzanna Stamirowska: Inside Pathway's Post-Transformer Architecture Designed for Memory and On-the-Fly Learning

    11.03.2026 | 1 hr. 7 min.
    This episode is sponsored by tastytrade.
    Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature.
     Learn more at https://tastytrade.com/



    This episode dives into why Pathway's Baby Dragon Hatchling (BDH) might mark the beginning of the post-transformer era in AI.

    Zuzanna Stamirowska, Pathway's CEO and co‑author of BDH, explains why today's transformer-based LLMs hit a wall on long-horizon reasoning, how memory and synaptic plasticity are built directly into BDH's architecture, and what that means for continual learning, hallucinations, and "generalization over time."

    The conversation ranges from complexity science and brain-inspired computation to practical implications for real-world, small-data, and safety‑critical applications.
     
    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI

    (00:00) The Core Problem: Why Today's AI Lacks Memory
    (03:16) Pathway's Mission to Bring Memory Into AI
    (04:53) Zuzanna's Background in Complexity Science
    (10:30) Why Transformers Reset Like "Groundhog Day"
    (14:34) The Brain-Inspired Dragon Hatchling Architecture
    (23:59) How the Network Learns and Builds Connections
    (37:38) Performance vs Transformers on Language Tasks
    (49:37) Productizing the Technology With NVIDIA and AWS
    (54:23) Can Memory Solve AI Hallucinations?
  • Eye On A.I.

    #325 Phelim Bradley: Why AI's Future Depends on Human Judgement

    09.03.2026 | 47 min.
    AI often looks fully automated. But behind the scenes, a huge amount of human judgment is shaping how these systems actually work.
     
    In this episode, Craig Smith speaks with Phelim Bradley, co-founder and CEO of Prolific, a platform that connects millions of real people with researchers and AI labs to evaluate and improve AI systems.
     
    They explore the hidden human layer behind modern AI, why traditional benchmarks are becoming less reliable, and why AI companies increasingly rely on real human feedback to measure model performance in the real world.
     
    Phelim also explains how demographic differences influence how models are evaluated, why human judgment remains critical even as AI improves, and how the collaboration between humans and AI will shape the next phase of development.
     
    This conversation reveals the human backbone behind today's AI systems.



    Stay Updated:
    Craig Smith on X: https://x.com/craigss
    Eye on A.I. on X: https://x.com/EyeOn_AI


     
    (00:00) Preview and Intro
    (02:45) Founding Prolific And Early Pain Points
    (06:30) From Mechanical Turk To Representativeness
    (09:55) Academic Research And AI Use Cases Split
    (13:40) Vetting Real Participants And Fighting Fraud
    (17:45) Scale, Community Growth, And Talent Mix
    (22:00) High-Complexity Projects Over Commoditised Labeling
    (26:40) Measuring Model Persuasion With Live Conversations
    (30:20) Demographic-Aware Model Preference Benchmarks
    (34:10) The Rise Of Human Evaluation Over Benchmarks
    (38:00) Enterprise Model Choice And Continuous Evaluation
    (42:00) Why Humans Won't Disappear From The Loop
  • Eye On A.I.

    #324 Sharon Zhou: Inside AMD's Plan to Build Self-Improving AI

    27.02.2026 | 46 min.
    AI is not just getting smarter. It is getting faster by learning how to optimize the hardware it runs on.

    In this episode, Sharon Zhou, VP of AI at AMD and former Stanford AI researcher, explains how language models are beginning to write and optimize their own GPU kernel code. We explore what self-improving AI actually means, how reinforcement learning is used in post-training, and why kernel optimization could be one of the most overlooked scaling levers in modern AI.

    Sharon breaks down how GPU efficiency impacts the cost of training and inference, why catastrophic forgetting remains a challenge in continual learning, and how verifiable rewards from hardware profiling can help models improve themselves. The conversation also dives into compute economics, synthetic data, RLHF, and why infrastructure may define the next phase of AI progress.

    If you want to understand where AI scaling is really happening beyond bigger models and more data, this episode goes under the hood.


    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) Preview and Intro
    (00:25) Sharon Zhou's Background and Transition to AMD
    (02:00) What Is Self-Improving AI?
    (04:16) What Is a GPU Kernel and Why It Matters
    (07:01) Using AI Agents and Evolutionary Strategies to Write Kernels
    (11:31) Just-In-Time Optimization and Continual Learning
    (13:59) Self-Improving AI at the Infrastructure Layer
    (16:15) Synthetic Data and Models Generating Their Own Training Data
    (20:48) AMD's AI Strategy: Research Meets Product
    (23:22) Inside the NeurIPS Tutorial on AI-Generated Kernels
    (30:59) Reinforcement Learning Beyond RLHF
    (39:09) 10x Faster Kernels vs 10x More Compute
    (41:50) Will Efficiency Reduce Chip Demand?
    (42:18) Beyond Language Models: Diffusion, JEPA, and Robotics
    (45:34) Educating the Next Generation of AI Builders
  • Eye On A.I.

    #323 David Ha: Why Model Merging Could Be the Next AI Breakthrough

    24.02.2026 | 57 min.
    This episode is sponsored by tastytrade.
    Trade stocks, options, futures, and crypto in one platform with low commissions and zero commission on stocks and crypto. Built for traders who think in probabilities, tastytrade offers advanced analytics, risk tools, and an AI-powered Search feature.

    Learn more at https://tastytrade.com/



    Artificial intelligence is reaching a turning point. Instead of building bigger and bigger models, what if the real breakthrough comes from letting AI evolve?

    In this episode of Eye on AI, David Ha, Co-Founder and CEO of Sakana AI, explains why evolutionary strategies and collective intelligence could reshape the future of machine learning. We explore model merging, multi-agent systems, Monte Carlo tree search, and the AI Scientist framework designed to generate and evaluate new research ideas. The conversation dives into open-ended discovery, quality and diversity in AI systems, world models, and whether artificial intelligence can push beyond the boundaries of human knowledge.

    If you're interested in AGI, evolutionary AI, frontier models, AI research automation, or how AI could start discovering science on its own, this episode offers a clear look at where the field may be heading next.

    Stay Updated:

    Craig Smith on X: https://x.com/craigss

    Eye on A.I. on X: https://x.com/EyeOn_AI


    (00:00) AI Should Evolve, Not Just Scale
    (03:54) David's Journey From Finance to Evolutionary AI
    (10:18) Why Gradient Descent Gets Stuck
    (18:12) Model Merging and Collective Intelligence
    (28:18) Combining Closed Frontier Models
    (32:56) Inside the AI Scientist Experiment
    (38:11) Parent Selection, Diversity and Innovation
    (49:25) Can AI Discover Truly New Knowledge?
    (53:05) Why Continual Learning Matters


About Eye On A.I.

Eye on A.I. is a biweekly podcast, hosted by longtime New York Times correspondent Craig S. Smith. In each episode, Craig will talk to people making a difference in artificial intelligence. The podcast aims to put incremental advances into a broader context and consider the global implications of the developing technology. AI is about to change your world, so pay attention.