David Duvenaud - What are Humans Even Good For in Five Years? [Early Experience of AGI - Episode 1]
This is an interview with David Duvenaud, Assistant Professor at the University of Toronto, co-author of the Gradual Disempowerment paper, and former researcher at Anthropic.

This is the first episode in our new "Early Experience of AGI" series - where we explore the early impacts of AGI on our work and personal lives.

This episode referred to the following other essays and resources:
-- Closing the Human Reward Circuit: https://danfaggella.com/reward
-- Gradual Disempowerment: http://www.gradual-disempowerment.ai

Watch the full episode on YouTube: https://youtu.be/XPpg89K3ULM
See the full article from this episode: https://danfaggella.com/duvenaud1

...

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
--------
1:55:59
Kristian Rönn - A Blissful Successor Beyond Darwinian Life [Worthy Successor, Episode 9]
This is an interview with Kristian Rönn, author, successful startup founder, and now CEO of Lucid, an AI hardware governance startup based in SF.

This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

This episode referred to the following other essays and resources:
-- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
-- Kristian's "Darwinian Trap" book: https://www.amazon.com/Darwinian-Trap-Evolutionary-Explain-Threaten/dp/0593594053

Watch this episode on The Trajectory YouTube channel: https://www.youtube.com/watch?v=fSnYeCc_C6I
See the full article from this episode: https://danfaggella.com/ronn1

...

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
--------
1:47:40
Jack Shanahan - Avoiding an AI Race While Keeping America Strong [US-China AGI Relations, Episode 1]
This is an interview with Jack Shanahan, a three-star General and former Director of the Joint AI Center (JAIC) within the US Department of Defense.

This is the first installment of our "US-China AGI Relations" series - where we explore pathways to achieving international AGI cooperation while avoiding conflicts and arms races.

This episode referred to the following other essays and resources:
-- The International Governance of AI – We Unite or We Fight: https://emerj.com/international-governance-ai/
-- Potentia and Potestas: Achieving The Goldilocks Zone of AGI Governance: https://danfaggella.com/potestas

Watch this episode on The Trajectory YouTube Channel: https://youtu.be/kwFu_hrAM4k
See the full article from this episode: https://danfaggella.com/shanahan1

...

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube Channel: https://www.youtube.com/@trajectoryai
--------
1:41:56
Richard Ngo - A State-Space of Positive Posthuman Futures [Worthy Successor, Episode 8]
This is an interview with Richard Ngo, AGI researcher and thinker - with extensive stints at both OpenAI and DeepMind.

This is an additional installment of our "Worthy Successor" series - where we explore the kinds of posthuman intelligences that deserve to steer the future beyond humanity.

This episode referred to the following other essays and resources:
-- A Worthy Successor - The Purpose of AGI: https://danfaggella.com/worthy
-- Richard's exploratory fiction writing - http://narrativeark.xyz/

Watch this episode on The Trajectory YouTube channel: https://youtu.be/UQpds4PXMjQ
See the full article from this episode: https://danfaggella.com/ngo1

...

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- Podcast: https://podcasts.apple.com/us/podcast/the-trajectory/id1739255954
--------
1:46:15
Yi Zeng - Exploring 'Virtue' and Goodness Through Posthuman Minds [AI Safety Connect, Episode 2]
This is an interview with Yi Zeng, Professor at the Chinese Academy of Sciences, a member of the United Nations High-Level Advisory Body on AI, and leader of the Beijing Institute for AI Safety and Governance (among many other accolades).

Over a year ago, when I asked Jaan Tallinn "who within the UN advisory group on AI has good ideas about AGI and governance?", he mentioned Yi immediately. Jaan was right.

See the full article from this episode: https://danfaggella.com/zeng1
Watch the full episode on YouTube: https://youtu.be/jNfnYUcBlmM

This episode referred to the following other essays and resources:
-- AI Safety Connect - https://aisafetyconnect.com
-- Yi's profile on the Chinese Academy of Sciences - https://braincog.ai/~yizeng/

...

There are three main questions we cover here on the Trajectory:
1. Who are the power players in AGI and what are their incentives?
2. What kind of posthuman future are we moving towards, or should we be moving towards?
3. What should we do about it?

If this sounds like it's up your alley, then be sure to stick around and connect:
-- Blog: danfaggella.com/trajectory
-- X: x.com/danfaggella
-- LinkedIn: linkedin.com/in/danfaggella
-- Newsletter: bit.ly/TrajectoryTw
-- YouTube: https://www.youtube.com/@trajectoryai
--------
1:14:19
About The Trajectory
What should be the trajectory of intelligence beyond humanity?

The Trajectory covers realpolitik on artificial general intelligence and the posthuman transition - by asking tech, policy, and AI research leaders the hard questions about what's after man, and how we should define and create a worthy successor (danfaggella.com/worthy). Hosted by Daniel Faggella.