The AI apocalypse
Ezra Klein talks to a pioneer of A.I. safety research, Eliezer Yudkowsky, about P(doom) ...
It's hard to be a fan of science fiction and not have some worries about AI going bad, taking over society, and destroying us.
From Terminator ...
To The Matrix ...
I actually do think AIs have the potential to be very dangerous. They are human intelligence raised to an inhuman level, untethered from whatever it is in human beings that exerts control, and unpredictable in ways we, well, can't predict.