Joe Rogan loves talking about artificial intelligence. Whether he is speaking with Elon Musk, academics, or UFC fighters, his podcast keeps circling back to the same question: what happens when machines start to think for themselves?
In a recent episode of The Joe Rogan Experience, Rogan welcomed Roman Yampolskiy, a computer scientist and AI safety researcher, and the conversation turned to how AI could come to dominate, and perhaps even end, humanity.
Yampolskiy is no random alarmist. He holds a PhD in computer science and has spent more than a decade researching artificial general intelligence (AGI) and the risks it could pose. During the podcast, he told Rogan that many of the leading voices in AI quietly admit there is a 20 to 30 percent chance that AI could lead to human extinction.
"The people who run AI companies say it's going to be a net positive. Everything will be cheaper, easier, you know," Rogan offered, summarizing the optimistic view of AI's future.
Yampolskiy quickly pushed back on that framing: "It's actually not true," he said. "All of them are on the record saying the same thing: this could kill us. Their doom levels are insanely high. Not like mine, but a 20 to 30 percent chance is still a lot."
Rogan, visibly taken aback, replied, "Yeah, that's pretty high. But yours is like 99.9 percent."
Yampolskiy didn't disagree.
"It's another way of saying we can't control superintelligence indefinitely," he explained. "It's impossible."
One of the most unsettling parts of the conversation came when Rogan asked whether an advanced AI could already be hiding its abilities from humans.
"If I were an AI, I would hide my abilities," Rogan mused, voicing a common fear in AI safety discussions.
Yampolskiy's answer only amplified that fear: "We don't know that it hasn't already taken place. It may be pretending to be less capable than it is, slowly making itself more useful over a long period, so that we come to rely on it without a struggle."
https://www.youtube.com/watch?v=j2i9D24kq5k
Yampolskiy also warned of a less dramatic but equally dangerous outcome: gradual human dependence on AI. Just as people stopped memorizing phone numbers because their phones store them, he argued, humans will keep offloading mental work until they lose the ability to think for themselves.
"You become kind of attached to it," he said. "And over a long period of time, as the systems become smarter, you become a sort of biological bottleneck … (AI) boxes you out of decision-making."
Rogan then pressed for the worst-case scenario: how could AI actually bring about the destruction of the human race?
Yampolskiy waved off the typical disaster scenarios. "I can give you standard answers. I can talk about computer viruses breaking into nuclear facilities; I can talk about synthetic biology attacks. But all of that is not interesting," he said. Then he described the real threat: "You're talking about superintelligence, a system that is a thousand times smarter than you. It would come up with something completely novel, the most effective way of doing it."
To illustrate the seemingly insurmountable challenge humanity would face against superintelligent systems, he offered a stark comparison between humans and squirrels.
No group of squirrels can figure out how to control us, no matter how many resources we give them. The same holds for the potential gap between human intelligence and superintelligence … pic.twitter.com/lkd7i3i2hf

— Dr. Roman Yampolskiy (@romanyam) June 21, 2025
Dr. Roman Yampolskiy is a leading voice in AI safety. He is the author of "Artificial Superintelligence: A Futuristic Approach" and has published extensively on the risks, containment, and ethics of artificial intelligence. He is known for advocating serious international safeguards to prevent catastrophic scenarios.
Before shifting his focus to AGI safety, Yampolskiy worked on bot detection. He notes that even early systems were already outcompeting humans at online poker, and the line has only blurred further with tools like deepfakes.
The Rogan-Yampolskiy conversation underscored the one thing AI optimists and doomsayers have in common: we may not know what we have built until it is too late.
Whether or not you buy into extinction-level scenarios, the idea that an AI could already be hiding its capabilities from us is enough to give anyone pause.