Revealed preference, as economists like to say. Actions speak louder than words as everyone else puts it.
It is interesting that very early Christianity expected an imminent apocalypse, starting with Jesus himself, and certainly with Paul 30 years later, and probably with the Church in Jerusalem led by James, the brother of Jesus. It affected their motives and actions to a great extent. But you can see the belief being watered down in the gospels written later (nobody knows the time; it will come like a thief in the night) and in Paul's later epistles. The weasel words were creeping in.
Most of life is based on the known-incorrect assumption that you'll live forever.
When covid came round, people said "it only kills people who are going to die anyway" (hence not us, and no problem!). So we all think we're immortal.
Even the most religious people sin.
There's actually no evidence for the temporal continuity of awareness; it's just convenient to believe the illusion without questioning it.
There are things that you believe intellectually and gut instincts that you live by.
An interesting follow-up article would be on why doomers (and others) express beliefs that they don't act on. It would seem that the expression of such beliefs provides social rewards. What are those rewards? How does it work?
People say stuff they don't **really, really** believe all the time. Revealed preferences tell us so. It's a not-insignificant problem: partly because of the deliberate deception on its own, but also because lies allow people to climb our social hierarchies, joining hands with other liars in the process. Then they get to command huge common resources, and usually direct them toward ruin, harming us all in the process.
Bet-on-it is one easy way to deal with that where possible. Forcing someone to put even a symbolic $10 of their own money where their mouth is concentrates the mind. If they can put something more dear to them at risk, that's even better. The part of trading I particularly like is that people bet on their opinions and put their money where their mouths are.
But most of the time assignment of credit/blame is not that straightforward. So yeah, it's hard keeping people honest.
Reminds me of the solipsist philosopher who thought it was strange that there weren't more solipsists among other philosophers.
I agree there's something really off about a 99% estimate of doom combined with planning for the future like any other middle-class person.
But I think there's a 1-10% chance of AIs destroying everything in the next 50 years. At that level, it makes sense to still put money in the retirement account and maybe even my kids' college fund (I do, but I question it about every week). But I would still agree with the 99%er that it would be better to regulate AI training to reduce risk, as a precaution.
Really interesting stories, presented really well. Thank you for sharing.
Consider the Millerites, a Christian sect who predicted the end of the world one year, so they didn't plant crops. They ended up very hungry and morphed into the Seventh-day Adventists. Read about them in Being Wrong by Kathryn Schulz.
I don't believe there are many true 99% doomers. I also don't believe many doomers even present themselves with that much confidence.
I imagine most doomers are more in the 20-80% p(doom) range. In this range, they are certainly predicting massive potential future outcomes. However, maintaining the status quo in their own personal behavior can still be consistent with genuinely believing in that level of doom, given that they're still assigning a decent chance to the no-doom scenario.
If I believed in an 80% p(doom), I would likely still contribute to my retirement account in case doom doesn't happen. It would be most rational to decrease my long-term investments somewhat compared to a lower p(doom), but if mindlessly burning money doesn't make me much happier, then it could be rational for me to only slightly reduce my long-term investments.
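For what it's worth, a toy expected-value comparison shows why this can be rational; the numbers below are made-up placeholders for illustration, not a real financial model:

```python
# Toy expected-value comparison of "keep saving" vs. "spend it now",
# assuming an 80% chance the long-term future never arrives.
# All numbers are illustrative placeholders, not financial advice.

p_doom = 0.8
value_of_savings_if_no_doom = 100   # assumed payoff of having savings in a normal future
value_of_spending_it_now = 10       # assumed extra enjoyment from spending the money today

ev_keep_saving = (1 - p_doom) * value_of_savings_if_no_doom   # 20.0
ev_spend_now = value_of_spending_it_now                        # 10.0, enjoyed whether or not doom arrives

print(ev_keep_saving, ev_spend_now)  # saving still comes out ahead if spending adds little happiness
```

Under these assumptions, saving still wins even at 80% p(doom), because the extra spending buys so little happiness by comparison.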
While this is a very interesting post, I'm not sure Jewish messianism is applicable to "doomerism": there's a big discussion in the Talmud (tractate Shabbat 63a), and one of the options (later also endorsed by Maimonides) is that the only difference between the time of the messiah and today is the "yoke" of foreigners on Jews. So they do not necessarily believe that their world is about to end. It is about to get better (presumably, they consider it a closer connection to God), but books and school and everything else should continue.
Thank you -- that's very interesting.
I'm not sure, Dr. Bloom. People who claim to believe in "heaven" sure cling to life tenaciously. It seems easy to have cognitive dissonance. Or we're just really good at lying to ourselves.
I think humans, in general, need some type of "religion" to give their lives meaning. W/o "standard" religions (and no "civilization-level" struggle - Nazis, Commies), believing in a certain type of doom gives them community and a sense of importance.
https://www.mattball.org/2023/05/doom-force-that-gives-life-meaning.html
For me it's what I imagine living with cancer in remission would be like. You don't know whether the cancer will ever come back, but there's always the possibility of tragedy striking at some point in the future.
I'm a semi-doomer ("arguments seem very strong but what do I know" kind of thing): my worry ebbs and flows and I never feel genuine visceral panic, but when I think about it a lot I can feel the worry in my stomach.
To be fair, you have absolutely no idea what it's like to be a doomer. Also, it depends on what is doomed.
It's a bona fide meaning crisis that renders long-term planning surreally moot. It is also usually accompanied by ostracism.
The most telling sign is whether they see any potential: are they trying to build something? Is there a future to plan and live for?
Even then, they might prefer doing it anyway, for the simple reason that life is a journey, not the destination. Even if they're 99% sure the world is ending.
Again, doomerism is a state of mind one cannot know without trying it. It is positively mind-melting.
To be fair, everyone is a doomer on a subjective level: your heart is not going to beat forever.
Most 'AI doomers' have pretty high confidence that
(1) _IF_ the AI industry keeps trying to build Artificial Superintelligence, without any effective public backlash, they'll likely do so, sooner or later, and
(2) _IF_ ASI gets built, it may impose a moderate to high (e.g. 30-70%) chance of human extinction within a decade or so. Suppose the net p(doom) is about 10-20% (which is roughly my view).
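As a rough sanity check on how those two conditionals combine into a net estimate, here's a minimal sketch; the specific probabilities are illustrative assumptions, not anyone's precise numbers:

```python
# Minimal sketch of how a "net" p(doom) can be composed from two conditionals.
# The numbers below are illustrative assumptions, not actual estimates.

p_asi_built = 0.4           # assumed P(ASI gets built despite any backlash)
p_doom_given_asi = 0.4      # assumed P(extinction within ~a decade | ASI built)

p_doom_net = p_asi_built * p_doom_given_asi
print(f"net p(doom) = {p_doom_net:.0%}")   # 16%, in the 10-20% range mentioned above
```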
How exactly should doomers like me change their behavior in ways that demonstrate that we take our stated concerns seriously?
Many of us ('public outreach safety people') put a huge amount of thought, energy, and time into communicating our concerns to others, so we can reduce probability (1) above, by nudging public opinion towards pausing dangerous AI development. That's what I've spent a lot of the last 10 years doing, what I devoted my whole 2024 sabbatical to, and what I give talks about (e.g. at the recent HBES meeting).
Others ('technical AI safety people') put a huge amount of thought, energy, and time into trying to solve ASI alignment, control, corrigibility, and interpretability problems, to reduce probability (2) above.
Many of us are (allegedly, so I hear) active preppers who are doing everything we can to protect our families and communities in case of an AGI/ASI-imposed catastrophe that falls short of outright extinction. But the first rule of prepping is, you don't talk about prepping. Serious preppers respect the principles of OpSec (operational security), so they're not going to tell you about their guns and ammo, their food and water stocks, their rural retreats, their alternative power sources, etc. I guess you could do anonymous polls to figure out what proportion of 'doomers' are also 'preppers'.
Beyond that, what exactly do you expect from people whose p(doom) is somewhere in the 10-50% range, and who think the real ASI dangers are roughly 5-30 years away?
Should we quit our jobs, so we lose our houses, can't pay for daycare or food, and ruin our families?
Should we sink into profound despair, so we're terrible spouses who get divorced, and terrible parents who neglect our kids?
Should we spend more money on short-term conspicuous consumption? Spend more energy seeking short-term mates? Manage our careers just for short-term payoffs? All of these -- consumption, casual sex, careerist status-seeking -- feel vacuous and hollow given the looming ASI risks.
Please, Paul, let us know what specific behaviors would convince you that we REALLY MEAN IT when we say that ASI extinction risk is a very serious likelihood if we don't stop ASI development.
The point of my post was to provide a test for whether doomers believe what they say, and to argue that some of them don’t pass the test. You point out that YOU pass the test— your life has been substantially changed by your belief that there is a reasonable chance that AI will kill us all in the next 5 to 30 years. Makes sense. So what are we disagreeing about?
I was pointing out a lot more than just 'Well I personally have been affected by my concerns about AI extinction risk'.
I was also pointing out that (1) many people concerned about AI safety have re-organized their entire lives around this issue, e.g. going into AI safety advocacy or technical AI safety careers, instead of doing other more lucrative & more socially validated things, and (2) many of the ways that economists have argued that 'AI doomers' should have changed their lives, based on their p(doom) being above 10%, simply don't make sense -- e.g. consuming a lot more, saving a lot less; giving up on mating, parenting, and social life; publicly becoming a prepper, etc.
The general tone of your piece was quite dismissive of AI risks, and implied that most 'AI doomers' are hypocritical, and don't really believe what they say. I was making the counter-argument that, in fact, we do believe what we say -- but the implications of that belief for how we run our lives are not at all obvious to people who haven't lived with this concern for years.
Fun counter-example: I recently saw someone ask a prominent AI safety person something like "Are you considering reducing your savings for retirement?" And the AI safety person got to say that they'd mostly stopped contributing to their 401k around 2020, when they started to believe in this stuff.
I do think you're right, though, that there's some flinch response away from truly internalizing the worldview. It feels very sad in some sense to live as if the world is ending, even if that may indeed be the right approach.