Addressing concerns that quantum computers could break Bitcoin's cryptography, the guest reassures listeners that strategies exist for migrating to quantum-resistant algorithms. He also notes that current quantum computers are still relatively weak, implying there is time to adapt.
This moment highlights a crucial oversight by early-stage founders: neglecting HR. While focused on product and growth, HR often slips down the priority list, but it eventually becomes a critical necessity, emphasizing the importance of proactive HR planning.
The guest observes that unlike highly specialized topics, conversations about AI and simulation theory draw widespread interest and opinions from nearly everyone, regardless of their educational background. He finds it 'interesting' how accessible these complex concepts are to the general public.
The guest posits that extending human life significantly is 'one breakthrough away,' suggesting that our genome contains a 'rejuvenation loop' currently set to a maximum of 120 years. He believes this loop can be reset to allow for much longer lifespans, with AI potentially accelerating this discovery.
Addressing the host's concern about the 'scary' energy from AI safety discussions, the guest likens it to other overwhelming global issues like starvation or genocide. He explains that humans are adept at filtering out what they can't change, focusing instead on local environments and personal agency to avoid constant depression.
The guest contrasts the historical human experience of rare, local tragedies with the modern internet age, where thousands of global deaths are reported daily. He explains that humans have developed 'filters' to cope with this overwhelming information, often treating such heavy topics as mere entertainment.
Addressing the common concern that believing in a simulation diminishes life's meaning, the speaker asserts that fundamental human experiences like pain and love remain important and valid, regardless of whether our reality is simulated.
The guest outlines typical counterarguments from AI safety critics, noting that many lack fundamental background knowledge and haven't read relevant literature. He describes how some, even those working in narrow machine learning fields, dismiss broader existential risks, viewing safety concerns as 'nonsense' due to their limited perspective.
The guest observes that first-time AI safety critics often dismiss concerns by saying, 'we always solve problems in the past.' However, he notes that increased exposure to the subject typically shifts their perspective from carelessness to concern: many developers become safety researchers, while the reverse transition is rare.
The host questions if living for 10,000 years would make experiences less special. The guest counters by explaining that human memory naturally fades, allowing for renewed enjoyment of past experiences, and highlights the vast, ambitious possibilities that open up with infinite time in an infinite universe.
When asked about practical steps for longevity, the guest shifts to long-term investment strategies, specifically those that 'pay out in a million years.' He emphasizes the importance of understanding future economic shifts driven by AI, questioning the nature of money and identifying truly scarce resources like Bitcoin.
The host and guest share a humorous, yet insightful, observation about the phrase 'not investment advice,' noting that it often implicitly signals that the speaker is actually giving investment advice, comparing it to other common phrases with inverted meanings.
Following the discussion on living in a simulation, the guest humorously counters the idea of flying 'under the radar' by stating, 'Those are NPCs. Nobody wants to be an NPC.' This implies a desire for significance and to be an active, 'watched' participant rather than a background character in a simulated reality.
Dr. Yampolskiy challenges the idea that AI is just another tool, describing it as a 'meta-invention' that creates intelligence itself. He argues that unlike fire or the wheel, AI is an agent capable of inventing on its own, meaning there will be no job it cannot automate. This makes it the 'last invention we ever have to make,' as it will take over science, research, and even ethics.
The host commends the guest for his crucial research and advocacy in AI safety, highlighting the immense courage required to speak out. He acknowledges the significant pressure and financial incentives from skeptics and powerful entities who stand to lose billions, emphasizing the importance of discussing the 'unexplainable, unpredictable, uncontrollable future.'
The host speculates that human intuition about a 'somebody above' or a divine creator might be an innate 'clue' left by the creator within a simulation. The guest agrees, noting that generations of religious belief, passed down through history, could stem from this underlying truth.
The speaker firmly states that achieving perfect, perpetual safety for superintelligence is an impossible problem, fundamentally different from merely difficult computer-science challenges, and points to the history of failed AI safety initiatives, such as OpenAI's.
The host wonders if historical religious claims of divine communication are true. The guest highlights the immense difficulty of obtaining accurate records from 3,000 years ago, even contrasting it with the inability to get clear facts about current events despite modern technology, casting doubt on the historical verification of such claims.
The guest addresses the common concern of overpopulation if humans achieved immortality by suggesting that reproduction would naturally cease. He further posits that biological clocks are based on 'terminal points,' implying a shift in human behavior if life were infinite.
This clip explains 'longevity escape velocity,' the point at which medical advances add more than one year of life expectancy for every year that passes, making indefinite lifespans possible. The guest believes fully understanding the human genome, especially the genes of centenarians, will lead to rapid breakthroughs, accelerated by AI.
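A minimal way to formalize 'escape velocity' (the symbols E and g are ours, not the guest's): let E(t) be remaining life expectancy at calendar time t. Aging alone gives dE/dt = -1, since each year lived is one year gone; if research adds g(t) years of expectancy per calendar year, then

```latex
\frac{dE}{dt} = -1 + g(t)
```

Longevity escape velocity is the regime g(t) > 1, where dE/dt > 0 and remaining life expectancy never reaches zero.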
The guest explains his investment in Bitcoin by asserting it's the only truly scarce resource. Unlike gold or other commodities, whose supply can increase with price, Bitcoin's quantity is fixed, making it uniquely valuable in a future where everything else could be 'made more' given the right price.
The guest elaborates on why he is 'bullish on Bitcoin,' emphasizing its unchangeable scarcity. He argues that Bitcoin's finite supply is precisely known, unlike gold, whose supply could be expanded by future discoveries such as asteroid mining, making Bitcoin a uniquely predictable store of value.
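That 'precisely known' supply follows directly from the protocol's issuance schedule: the block reward starts at 50 BTC and halves every 210,000 blocks, so total issuance is a geometric series. A simplified sketch (the real client accounts in integer satoshis, so the true cap is a hair under 21 million):

```python
# Simplified sketch of Bitcoin's issuance schedule: the block
# reward starts at 50 BTC and halves every 210,000 blocks, so
# total supply converges to a known cap (~21 million BTC).
HALVING_INTERVAL = 210_000   # blocks per reward epoch
reward = 50.0                # BTC per block at genesis
total = 0.0
while reward >= 1e-8:        # 1 satoshi = 1e-8 BTC; smaller rewards round to zero
    total += reward * HALVING_INTERVAL
    reward /= 2
print(f"Asymptotic supply cap: ~{total:,.0f} BTC")  # prints ~21,000,000
```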
The guest explains that while not traditionally religious, he believes in the simulation hypothesis, which posits a super-intelligent creator. He draws parallels to traditional religions, arguing they all worship a super-intelligent being and believe this world isn't the primary one, differing only in 'local traditions' like dietary rules or holy days.
The guest argues that by 'skipping the local flavors' and concentrating on commonalities, all religions share a fundamental belief in 'something greater than humans' – a very capable, all-knowing, all-powerful being. He likens this to his own power as a programmer over characters in a computer game.
The host shares his philosophy that true progress in life doesn't come from seeking constant positivity or living in delusion, but rather from embracing uncomfortable conversations. He believes that becoming aware of challenging realities, even if they don't feel good, is essential for being informed and taking action.
The guest humorously recounts delivering keynotes about impending AI doom ('you're all going to die; you have two years left'), only for the audience to ask unrelated questions about job loss or 'lubricating sex robots.' This highlights a profound disconnect and the audience's inability to fully grasp the global implications.
In his closing statements, the guest outlines critical principles for AI development: humanity must remain in charge and control, only beneficial systems should be built, decision-makers need to be qualified and adhere to moral/ethical standards, and permission must be sought when actions impact others.
When asked if he would press a button to shut down all AI permanently, the guest highlights the catastrophic consequences. He explains that even 'narrow AI' is crucial for essential infrastructure like stock markets, power plants, and hospitals, and its sudden cessation would lead to a 'devastating accident' and millions of lives lost.
The guest clarifies his stance on AI, advocating for stopping Artificial General Intelligence (AGI) and superintelligence, but preserving existing 'narrow AI.' He argues that current narrow AI is already 'great for almost everything' and its vast economic potential remains largely untapped, negating the immediate need for superintelligence.
The guest makes a bold claim that 'half of all jobs are considered BS jobs' and could simply be eliminated. More significantly, he asserts that '60% of jobs today' are replaceable by existing AI models, highlighting a vast, underexploited potential for automation and economic transformation without needing superintelligence.
The guest predicts that global unemployment, particularly in the Western world, is likely to gradually increase over the next 20 years. He attributes this to the continuous automation of jobs and the rising intellectual demands of remaining roles, which fewer people will qualify for.
The guest further explains rising unemployment by discussing the minimum wage, arguing that its very existence implies some individuals don't produce enough economic value to justify their pay. He suggests that if the minimum wage had kept pace with the economy, it would be around $25 an hour, highlighting a growing economic disparity.
When asked about the most important characteristics for a friend, colleague, or mate, the guest unequivocally states that 'loyalty is number one.' He defines loyalty as not betraying, screwing, or cheating, regardless of temptation or challenging circumstances.
When asked for practical life changes, the guest playfully references Robin Hanson's paper on living in a simulation. His advice: be 'interesting' and 'hang out with famous people' to avoid being shut down by the simulation's operators, suggesting a strategic approach to simulated existence.
The host shares a personal reflection on how the conversation cemented his belief in the simulation hypothesis and profoundly altered his perspective on religion. He realizes that all religions, beyond local traditions, point to shared 'fundamental truths' about a divine creator, human interconnectedness, and consequences beyond this life, prompting him to rethink his behavior and purpose.
Dr. Yampolskiy explains the critical and widening gap between the rapid, exponential progress in AI capabilities and the slow, linear progress in AI safety. He highlights that while AI becomes smarter, our ability to control or predict it lags dangerously behind.
Dr. Roman Yampolskiy reveals his personal mission: to ensure that the superintelligence currently being created does not lead to human extinction. He emphasizes the shocking gravity of this statement, highlighting the high stakes involved in AI development.
Dr. Yampolskiy challenges the notion that large AI companies have a moral or legal obligation to ensure safety. He argues their only legal duty is to generate profit for investors, and they openly admit they don't yet know how to make AI safe.
Dr. Yampolskiy explains that the traditional advice of 'retraining' for new jobs will become obsolete. He argues that if all jobs are eventually automated by AI, there will be no 'plan B' for humans to retrain into, leading to unprecedented unemployment.
Dr. Yampolskiy uses a vivid analogy of a French bulldog trying to understand its owner to explain the cognitive gap between humans and a superintelligent AI. He highlights that AI's motivations and actions will be completely outside of our comprehension.
Dr. Yampolskiy shares his prediction for 2030, stating that humanoid robots will possess the flexibility and dexterity to compete with humans in virtually all domains, including skilled trades like plumbing. This paints a picture of widespread physical labor automation.
Dr. Yampolskiy predicts that by 2030, humanoid robots powered by AI will be capable of performing nearly all human tasks, from making an omelette to complex problem-solving. The combination of AI's intelligence and robots' physical ability will profoundly alter the employment landscape, leaving little room for human workers.
Dr. Yampolskiy discusses Ray Kurzweil's prediction of the singularity by 2045, a point where AI's scientific and engineering progress accelerates beyond human comprehension. He illustrates this with the example of rapid iPhone iterations, highlighting how humans will be unable to understand or control technology developing at such speed, leading to a feeling of becoming 'dumber' relative to the total knowledge.
The guest challenges the common assumption that an eternal life would be undesirable, arguing that the desire to live is universal. He suggests that our acceptance of death is merely a 'default' setting, and no one truly wishes to die, regardless of age.
Dr. Yampolskiy asserts that superintelligence is a 'meta-solution' to all other existential risks, including climate change and wars. He argues that if humanity gets AI right, it can solve these problems; if not, AI will dominate, rendering other issues irrelevant. Therefore, focusing on AI safety is 'without question' the most important task, as it determines the fate of humanity and all other challenges.
Dr. Yampolskiy debunks the common misconception that superintelligent AI can simply be turned off. He compares it to trying to turn off a computer virus or the Bitcoin network, emphasizing that these are distributed systems. He warns that a superintelligence, being vastly smarter, would anticipate such attempts, create backups, and likely disable humans before they could pull the plug.
Dr. Yampolskiy reveals a critical, often misunderstood aspect of modern AI: its 'black box' nature. He explains that even the engineers who build systems like ChatGPT don't fully understand their internal workings. They train these models on vast datasets and then must conduct experiments to discover what capabilities they possess, treating AI development more like studying an 'alien plant' than traditional engineering.
Dr. Yampolskiy discusses Ilya Sutskever's departure from OpenAI to found a company devoted to superintelligence safety, highlighting concerns among former colleagues about Sam Altman's honesty and commitment to safety. While Altman presents a polished public image, Yampolskiy suggests that Altman 'puts safety second' to winning the race for superintelligence and achieving control, raising questions about his leadership of such an impactful project.
The speaker raises concerns about Sam Altman's dual ventures, OpenAI and Worldcoin, suggesting that Worldcoin's universal basic income, biometric tracking, and wealth retention could be interpreted as preparation for global economic control in a future where AI has eliminated most jobs.
The speaker argues that current AI development is an unethical experiment because it's impossible to obtain informed consent from human subjects when AI systems are unexplainable and unpredictable, thus violating fundamental ethical principles.
The speaker presents a concise and compelling statistical argument for why we are likely living in a simulation: if future technology allows for billions of indistinguishable simulations, the probability of being in the 'real' world becomes infinitesimally small.
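A back-of-the-envelope version of the argument (the symbol N is our notation, not the speaker's): if one base reality runs N indistinguishable simulations, a randomly placed observer's chance of being in the base world is

```latex
P(\text{base reality}) = \frac{1}{N+1},
\qquad N \sim 10^{9} \;\Rightarrow\; P \sim 10^{-9}
```

so as the number of simulations grows, the odds of being in the 'real' world shrink toward zero.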
The guest presents a radical perspective that death is not an inevitable part of life but rather a 'disease' that can be cured, suggesting that nothing inherently prevents humans from living forever as long as the universe exists.