InvestSMART

Portfolio Doom: Investing in the AI Era

Steve Sammartino looks at the existential risks posed by artificial intelligence.
By Steve Sammartino · 13 Jun 2023 · 5 min read

Imagine for a minute you were given an investment opportunity which could be life-changing, enough to make you better off than you are now, maybe even seriously rich. But the price of entry was that you had to put every single asset you owned into it, and there was a small chance, say a 5-10 per cent probability, that it would go to absolute zero.

Would you do it?

I wouldn’t consider it, even for a second. Yet, these are the risks we are being asked to take with our live Generative AI experiment, at least according to some revered experts in the space.

If you think P(Doom) sounds worrying, trust your instincts. P(Doom) is the term AI researchers are using to describe the probability that Artificial Super Intelligence will emerge and become an existential risk for humanity. (What a wonderful little moniker.)

Here is the crazy thing. Many learned AI researchers now put this probability as high as 20-50 per cent. Kinda scary. Even a 5 per cent chance of wiping out our species is a worry... and almost no one in the field puts the probability lower than that. Sam Altman, the CEO of OpenAI, the company behind ChatGPT, has publicly said the risk is real and puts his own P(Doom) at around 5 per cent.

It’s at this point that we must remember that we are not talking about something merely bad happening here, like, say, a recession, a war, a hurricane, or even a pandemic. All of these we’ve faced before and, with a lot of pain, collectively overcome. We are talking about the end of humanity – it doesn’t get any heavier than that.

Says Who?

Some of those with a P(Doom) at worryingly high levels are not fear-mongering crackpots, but those deeply involved in giving birth to AI. Here are some of the worried AI researchers and their P(Doom) percentages.

• Michael Tontchev, a former Microsoft software developer and current senior software lead at Meta, has his at 20 per cent.

• Paul Christiano, a former OpenAI researcher who also holds a Ph.D. in theoretical computer science from UC Berkeley, has his at 50 per cent.

• Eliezer Yudkowsky, a renowned AI researcher and decision theorist, has his at 50 per cent.

• Geoffrey Hinton, known as the godfather of AI, who recently resigned from Google, has his at 50 per cent.

It’s a bit like Warren Buffett warning us of potential economic doom.

Cold War 2.0 & AI Bunkers

The emergent threat of AI seems to be bubbling up in everyday discussion. As a keynote speaker on the future and AI, it’s now the most common question I get asked at an event. It’s not even close. And just like at the height of the Cold War, those with their fingers on the button seem to be the only ones with a bunker they can run to if things go wrong.

In 2016, Altman said in an interview he was prepping for survival in the event of a catastrophe such as a rogue AI, claiming to have guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israel Defense Forces, and a big patch of land in Big Sur he could fly to.

Altman’s doomsday vision of AI gone wrong is not uncommon in Silicon Valley. No tech billionaire worth their salt is without a post-apocalyptic contingency plan and a remote bunker. Both Peter Thiel and Google co-founder Larry Page have snapped up land in New Zealand and built bunkers. I imagine their private jets are at the ready.

The AI Saviour?

Readers will know that I’m typically positive when it comes to the emancipating power of technology. And the last thing I want to be accused of is fear-mongering – especially given this is really a report of opinions offered by experts in the field. But there is a counter-argument to the worries about the AI threat: we may not be able to survive without it.

It seems to me that the probability of our species succumbing to other existential risks is greater than most experts' AI P(Doom). The nuclear threat is still very real, and possibly greater than it ever was during the Cold War. While we theoretically control it, we can only count ourselves lucky that a crazed suicide bomber or rogue terrorist group hasn’t secured and deployed nuclear weapons.

Likewise, despite our progress with renewable energy, I can’t see any large nation-state making the kind of progress that gives me confidence we can cut global emissions before we reach a point of no return. We are still largely addicted to fossil fuels, GDP, economic growth, and consumption.

Maybe the thing we actually need is an omniscient, benevolent AI to save us from ourselves: an AI which can uncover new forms of yet-to-be-discovered, highly available energy, or ways to circumvent nuclear disaster via ‘Artificial Diplomacy’; an AI which can help us navigate species-level dangers which are already clear and present.

Risk and Reality

Investing and risk are bedfellows we must tolerate to increase our capital. Upside doesn’t happen without potential downside. But on average, over the long term, with a balanced portfolio, the real risk of losing everything is almost non-existent. This is why we keep high-risk investments, the kind that go 10x or to zero, to a tiny percentage of our super funds and portfolios. I always try to remind myself that a total loss of capital on one investment requires 10 successful investments with an average return of 10 per cent just to get back to parity.
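That parity arithmetic can be sanity-checked with a quick back-of-the-envelope sketch. The numbers below are purely illustrative (eleven equal stakes of $1,000 each is my assumption, not anything from the article):

```python
# Back-of-the-envelope check of the parity claim.
# Assume eleven equal stakes of $1,000: one goes to zero,
# the other ten each return 10 per cent.
stake = 1_000
portfolio_start = 11 * stake            # $11,000 invested in total

loss = stake                            # one investment wiped out entirely
gains = 10 * stake * 0.10               # ten winners at 10 per cent each

portfolio_end = portfolio_start - loss + gains
print(portfolio_end == portfolio_start)  # ten 10% winners exactly offset one total loss
```

In other words, the gains from ten successful 10 per cent investments of equal size sum to exactly one stake, which is what the single total loss took away.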

It turns out that the live AI experiment we are running around the globe comes with risks as well. Maybe it is time governments and AI researchers looked to investors, like us, to understand and navigate risks beyond our portfolios.
