Lenny Rachitsky (00:00:00):
You wrote somewhere that creating powerful AI might be the last invention humanity ever needs to make. How much time do we have, Ben?
Benjamin Mann (00:00:06):
I think 50th percentile chance of hitting some kind of superintelligence is now like 2028.
Lenny Rachitsky (00:00:12):
What is it that you saw at OpenAI? What'd you experience there that made you feel like, okay, we got to go do our own thing?
Benjamin Mann (00:00:17):
We felt like safety wasn't the top priority there. The case for safety has gotten a lot more concrete, so superintelligence is a lot about how do we keep God in a box and not let it out?
Lenny Rachitsky (00:00:26):
What are the odds that we align AI correctly?
Benjamin Mann (00:00:29):
Once we get to superintelligence, it will be too late to align the models. My best-guess forecast for whether we could have an X-risk, or an extremely bad outcome, is somewhere between 0 and 10%.
Lenny Rachitsky (00:00:40):
Something that's in the news right now is this whole thing with Zuck coming after all the top AI researchers.
Benjamin Mann (00:00:45):
We've been much less affected because people here, they get these offers and then they say, well, of course I'm not going to leave because my best case scenario at Meta is that we make money and my best case scenario at Anthropic is we affect the future of humanity.
Lenny Rachitsky (00:00:59):
Dario, your CEO, recently talked about how unemployment might go up to something like 20%.
Benjamin Mann (00:01:04):
If you just think about 20 years in the future where we're way past the singularity, it's hard for me to imagine that even capitalism will look at all like it looks today.
Lenny Rachitsky (00:01:13):
Do you have any advice for folks that want to try to get ahead of this?
Benjamin Mann (00:01:15):
I'm not immune to job replacement either. At some point it's coming for all of us.
Lenny Rachitsky (00:01:20):
Today, my guest is Benjamin Mann. Holy moly. What a conversation. Ben is the...