And the fact that AI 'knows everything that is knowable at any moment' is irrelevant to me if I don't know it. There is, for all intents and purposes, an infinite world of possibilities to be explored within every field of science, including every aspect of every species living and extinct, and across the universe on all scales, using VR and AR to explore deeply, as well as across every field of human endeavour.

Despite AI being able to beat every human at chess, people still compete with each other. People will still get pleasure from mastering a skill, and others will get pleasure from watching the masters at work, whether in sport or any other field of human endeavour, because we will appreciate how rare and difficult mastery is. And we have the opportunity to use AI to update our mental models.

Some say AI will get to the stage where it knows things that humans cannot know, just as we know more than any ant can possibly know. But this overlooks a fundamental difference. AI is learning using language and maths, so even if it creates its own language and maths, it has a path back to us: it can express new concepts in human language the same way a baby goes from knowing nothing to one day becoming a world-class physicist, by building up concepts bit by bit. AI can do the same. We may not be able to do the calculations at speed or see the patterns initially, but if AI really is superintelligent, it should be able to work back, because it will know its roots.

And we will still enjoy gardening for gardening's sake, and many other things that AI and robots can do just as well... but the joy will be in the doing.
Paul, why is it more icky to rewire your brain directly than to slowly outgrow, e.g., possessiveness, the need for status, or other things that clearly cannot be the endgame of human well-being?

Why should we be stuck with our (from our current point of view) arbitrarily developed brains as a starting point, to be changed only slowly into slightly or somewhat different brains, rather than trying out more esoteric or "extreme" brain states akin to, e.g., deep psychedelic or meditative experiences of endless love for everyone, without the need for status?

In short, why, in a hypothetical future full of wondrous possibilities, should we be more or less stuck with our ape brains, occupied with silly ape things, just because it might seem icky to change them radically? (You know about status quo bias, I'm sure.) You could even do trial periods with different consciousness-constellations.
“And so one person’s satisfaction is contingent on the choices of another.”
We’re going to integrate AI into our biological systems, so our satisfaction becomes contingent on the choices of, what, a computer? That’s what I see, so it may be due to my not fully understanding the goal.

My point is that we have to know ourselves first. How can we know what we want, what will bring us happiness, when we’re looking in the opposite direction?
Ironic. I remember asking a certain colleague at the University of Arizona "How could you not want to help in a situation like this?", and being dumbstruck when he coolly replied, "Easy. I don't think about you at all." Not surprised when he wrote a book called Against Empathy. Very surprised that he now argues that it is a basic human need to feel you matter.
I was recently rereading Paul Graham's essay on why high school is so miserable. Graham writes:
"It's no wonder, then, that smart kids tend to be unhappy in middle school and high school. Their other interests leave them little attention to spare for popularity, and since popularity resembles a zero-sum game, this in turn makes them targets for the whole school. And the strange thing is, this nightmare scenario happens without any conscious malice, merely because of the shape of the situation.
...
Why is the real world more hospitable to nerds? It might seem that the answer is simply that it's populated by adults, who are too mature to pick on one another. But I don't think this is true. Adults in prison certainly pick on one another. And so, apparently, do society wives; in some parts of Manhattan, life for women sounds like a continuation of high school, with all the same petty intrigues.
I think the important thing about the real world is not that it's populated by adults, but that it's very large, and the things you do have real effects. That's what school, prison, and ladies-who-lunch all lack. The inhabitants of all those worlds are trapped in little bubbles where nothing they do can have more than a local effect. Naturally these societies degenerate into savagery. They have no function for their form to follow.
When the things you do have real effects, it's no longer enough just to be pleasing. It starts to be important to get the right answers, and that's where nerds show to advantage. Bill Gates will of course come to mind. Though notoriously lacking in social skills, he gets the right answers, at least as measured in revenue."
https://paulgraham.com/nerds.html
Graham basically argues that high school society is miserable because it's a zero-sum status competition. Real life is a little better because positive-sum interactions are possible. We move from a competitive PvP environment to a collaborative PvE one after high school.
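The zero-sum vs. positive-sum distinction can be made concrete with a toy game-theory sketch (my own illustration; the strategy names and payoff numbers are made up, not from Graham's essay):

```python
# Toy payoff matrices. In the zero-sum status game, every point of
# status one player gains is lost by the other; in the positive-sum
# trade game, mutual cooperation creates surplus for both sides.
status_game = {
    ("preen", "preen"): (0, 0),
    ("preen", "defer"): (1, -1),
    ("defer", "preen"): (-1, 1),
    ("defer", "defer"): (0, 0),
}

trade_game = {
    ("trade", "trade"): (2, 2),
    ("trade", "hoard"): (-1, 1),
    ("hoard", "trade"): (1, -1),
    ("hoard", "hoard"): (0, 0),
}

def total_surplus(game):
    """Sum of both players' payoffs over all outcomes."""
    return sum(a + b for a, b in game.values())

print(total_surplus(status_game))  # 0: every gain is someone else's loss
print(total_surplus(trade_game))   # 4: cooperation grows the pie
```

The point of the sketch is just that in the first matrix no joint choice improves the total, while in the second, cooperating grows the pie, which is the structural difference Graham attributes to high school versus the real world.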
I'm concerned that as we get wealthier, and it's less essential for us to cooperate with one another to meet our essential needs, our mode of interaction will shift back from positive-sum to zero-sum.
The US is one of the world's wealthiest countries. But we also have some of the world's most bitter politics. And within the US, the angriest people tend to be quite wealthy:
"Progressive Activists have strong ideological views, high levels of engagement with political issues, and the highest levels of education and socioeconomic status. Their own circumstances are secure. They feel safer than any group, which perhaps frees them to devote more attention to larger issues of social justice in their society. They have an outsized role in public debates, even though they comprise a small portion of the total population, about one in 12 Americans."
https://hiddentribes.us/profiles/
Similarly, you can see a lot of bitter, zero-sum status competition on social media.
I worry that post-scarcity will be a sort of high-school dystopia with endless petty status competition. It might be good to think about how this outcome could be averted in advance.
https://x.com/riemannzeta/status/2022490324313198876?s=20
AI agents are pushing the transmission costs of social life toward zero. If all that remains is the cost of reshaping our understanding of the world to match other people's, then we risk ending up back in the zero-sum high-school world, even and especially as the cost of energy declines. Scarcity drives differentiation and invites nonzero-sum coordination.
If we follow the path of least resistance (asymmetry) in an infinite energy world, we would ironically end up in a state of maximum boredom (for the most complex models) and anxiety (for the simplest).
If we agree to symmetry, we are agreeing to maintain the divergence. We agree not to overwrite each other. We agree to spend the infinite energy required to build "bridges" (translations/interfaces) between our distinct realities rather than collapsing them.
This maintenance of difference is what generates the "curvature" of the social space: an "artificial gravity" that keeps the system interesting.
By the second paragraph, I was thinking "Is he going to discuss the Culture series"?
You neglected to mention that Gurgeh, the game-playing protagonist of The Player of Games, starts the novel bored (he's run out of challenging opponents). His life only becomes exciting and meaningful after he is recruited by Special Circumstances (the Culture's equivalent of Bill Donovan's WW2 Office of Strategic Services). His assignment: to bring down an evil civilization by beating its leaders at their hideously complex imperial game.
Yeah I remember someone recommending Player of Games as a way to convince oneself that utopia can actually be good. I started reading, and the main character gets bored, just as I was worried about. And then he relieves his boredom by doing something involving extraterrestrials? OK, but what if there are no extraterrestrials? Then we're just bored?
Strictly speaking, everybody in the Culture is an extraterrestrial. The Culture began around 7000 BCE. In the novella The State of the Art, Special Circumstances agent Diziet Sma (also a main character in Use of Weapons) visits Earth in the 1970s on a fact-finding mission. The Culture eventually decides to keep Earth as a "control group" for sociological study, to see how a "primitive" (Level 3) society develops on its own. The appendix of Consider Phlebas suggests we will be invited to join the Culture around 2100 CE.
The citizens of the Culture refer to themselves as “humans” or “pan-humans”, but no kinship with Homo sapiens is implied. Instead, Banks tells us that the humanoid body plan, with lots of variation, is a common evolutionary endpoint throughout the galaxy, and that the Culture began as a confederation of 7-8 separately evolved humanoid species and their AIs (non-humanoid species are invited to join the Culture, but it is majority humanoid).
Diziet Sma had to be altered to go undercover on Earth: her original body had fewer than five toes per foot and an extra joint on each finger and her ears, nose, and cheekbones were also altered. The Gzilt, the main species in The Hydrogen Sonata, are humanoid but vaguely reptilian with gray skin and no mammary glands.
Drama in the novels usually revolves around contact, conflict, and even war with non-Culture societies (yes, the equivalent of extraterrestrials). So the agencies devoted to these activities, Contact and Special Circumstances, provide highly meaningful but dangerous work for a handful of talented humans who find “utopia” boring.
The Culture can pretty much guarantee physical immortality (one’s consciousness can be backed up for installation into a new body after a “fatal” accident). That said, citizens who opt for such immortality are regarded as eccentrics, but so are “disposables”: eccentrics who find meaning by engaging in extreme sports without any possibility of backup after a fatal accident.
The average citizen lives for 350-450 years. Oblivion isn’t the only possible endpoint: some opt to join a group mind.
And the usual endpoint for a long-lived civilization is subliming: translation en masse to a higher, non-physical plane of existence.
So boredom is a problem eventually.
Thanks, I appreciate the info.
>Drama in the novels usually revolves about contact, conflict, and even war with non-Culture societies (yes the equivalent of extraterrestrials). So the agencies devoted to these activities, Contact and Special Circumstances, provide highly meaningful but dangerous work for a handful of talented humans who find “utopia” boring.
Call me back when someone has written an entertaining story about a utopian society that doesn't experience any conflict with other societies. Until then I will worry that utopia will be boring.
No need. Only a tiny handful of Culture citizens get recruited by Contact or Special Circumstances. The vast majority of its 30 trillion citizens are insulated from worrying about conflict with other societies, so they effectively already live in “a utopian society that doesn't experience any conflict with other societies”.
Boredom is evidently an unavoidable problem, given that few citizens opt to live more than 500 years.
I’m retired, and my retirement is well-funded (it’s looking like my standard of living in retirement is higher than it was during my peak working years). Life in the Culture is pretty much akin to living your whole life in well-funded retirement, albeit with better health and a much longer lifespan.
I keep quite busy with multiple hobbies, many of which involve learning new things. My medical history suggests I have no more than 25 years left. I don’t expect to get bored before then. But what if my life expectancy were another 125 years, or another 525 years? Who knows?
I think you’re confusing the prevalence of terminal boredom with its lifetime risk. In the Culture it looks like everyone gets bored with physical life after 400 years or so; the lifetime risk is 100%. That hardly means that everyone is terminally bored throughout their life. The prevalence at any moment might be only 1-2%.
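A back-of-the-envelope calculation makes the prevalence vs. lifetime-risk distinction concrete (the 400-year lifespan comes from the discussion above, but the length of the final bored stretch is my hypothetical number, chosen only to land in the 1-2% range mentioned):

```python
# Lifetime risk vs. point prevalence of terminal boredom (toy numbers).
lifespan_years = 400   # everyone succumbs to boredom by around this age
bored_years = 5        # assumed: final stretch spent terminally bored

lifetime_risk = 1.0    # 100%: everyone is bored to death in the end

# Fraction of the population terminally bored at any given moment,
# assuming a uniform age distribution.
point_prevalence = bored_years / lifespan_years

print(f"lifetime risk: {lifetime_risk:.0%}")        # 100%
print(f"point prevalence: {point_prevalence:.2%}")  # 1.25%
```

So a 100% lifetime risk is perfectly compatible with almost everyone being un-bored at any given moment.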
The Culture has eliminated (or made optional) all the ordinary causes of death: accidents, diseases, and homicide. What’s left is suicide secondary to terminal boredom (“E is for Ernest who died of Ennui”). That’s a good thing.
Modern medicine has greatly decreased the risk of dying from many easy-to-fix conditions (for example, heart attacks). This means that many people live long enough to die of harder-to-fix conditions (cancer, dementia). That’s also a good thing.
Life in the Culture eventually bores people to death. But only after several hundred years.
My definition of utopia doesn’t require keeping people from being bored forever. We are finite creatures after all and can only have finite desires. We’re not made for eternity. The only way heaven could avoid boredom is by constantly expanding our intellects and passions. After a few cycles of this, we would bear little resemblance to our earlier earthly selves (so it might not make much sense to say that our earthly self was still alive). The Culture equivalent is joining a group mind that gets bigger and bigger over time.
The points you made about the desire to matter becoming more prominent once material needs are fulfilled made me think of Aldous Huxley’s Brave New World. The World State utilises genetic engineering, pre-ordained social classes, and drugs to placate social unrest. I wonder whether there is a danger that governments will attempt to ‘solve’ the spiritual needs of their citizens in a manner similar to the way they approached solving material needs.
I liked this essay a lot! It is a telling treatment of the needs for connection/love and recognition/status. But at least one need is missing: the need for competence, the feeling of being able to do something well, whether it's carpentry or making music or doing crossword puzzles. I don't think this is completely covered by the need for status and recognition. In Walden Two, Skinner imagined that in his utopia people would freely pursue such things (as did Marx!), but in a totally automated bountiful world, where we could imagine that the only really useful skill would be knowing which icon to click to get desired results, would any skills or competencies retain intrinsic value?
The other day, when my wife and I were watching "The Studio", I was thinking about the episode "The Pediatric Oncologist", in which a debate comes up about whose job is more important: the pediatric oncologist's or the movie studio exec's. I thought, now here is something teed up for Paul Bloom to have an interesting take on :) You sort of answered it with the paragraph about
"..I don’t think status, respect, and mattering are the most important things. Air is more important. Take away someone’s air..."
In a ranking, yes, 100%. But in some ways I think that's the wrong question. Those things are indeed necessary for life, but I don't think they are sufficient for a "good life." We could have all those material needs met, but as you write, there is so much more needed :)
Thank you for this—your books, especially How Pleasure Works and Against Empathy, have shaped how I think about these issues.
Wilkinson's fragmentation point suggests something I've been exploring: coordination has costs. Creating common knowledge — the recursive "I know that you know" structure that lets strangers cooperate — requires burning resources. Call it a "synchronization tax." The Super Bowl works as coordination infrastructure precisely because it's expensive and scarce; that's what makes shared attention focal.
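One way to see why an expensive focal event can be efficient coordination infrastructure is to compare it with pairwise bridge-building (a back-of-the-envelope model of my own, not something from the post; all costs are arbitrary units):

```python
# Cost of establishing shared understanding among n strangers.
def pairwise_cost(n, cost_per_bridge=1):
    """Every pair negotiates a shared frame separately: O(n^2) bridges."""
    return n * (n - 1) // 2 * cost_per_bridge

def focal_event_cost(n, event_cost=1000, cost_per_viewer=0):
    """One expensive broadcast everyone attends to: O(n) at worst."""
    return event_cost + n * cost_per_viewer

# Compare the two at increasing population sizes.
for n in (100, 1_000, 10_000):
    print(n, pairwise_cost(n), focal_event_cost(n))
```

Past a fairly small population, the single costly broadcast is cheaper than negotiating common knowledge pair by pair, which is one reading of why scarce, expensive spectacles stay focal.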
Your post implies the healthy equilibrium isn't less status competition but fragmented status competition — many parallel hierarchies (surfers vs. geeks) rather than one inescapable dimension. Multiple hierarchies provide escape routes. When everyone must compete on one legible axis, you get zero-sum misery. When there are many axes, each with its own costly signals, you get stable pluralism. I truly adore this vision of the future!
I've been working through this at https://www.symmetrybroken.com/the-gravity-of-envy/ and https://www.symmetrybroken.com/a-more-perfect-union/.
Informative
My wife and I feel we'll have a trial run of utopia as we prepare for our offspring to fly the nest.
What shall we do with our excess free time?
Well, there are still plenty of books to read, places to visit, etc. And we'll have health issues to deal with.
I don't think we'll be bored.
Your substack is my favourite, if I weren't super poor I'd be a paid subscriber!
I think the reason we would never be fully satisfied is that being "unsatisfied" is also part of our desire. I really like a quote from Kant that says, "Rules for happiness: something to do, someone to love, something to hope for." These things shape our identities by directing where we put our attention and effort. So if they were just automatically quenched, we would ironically be more miserable. We wouldn't have anything to "hope for" if no one had to hope for, and work for the sake of, anything.
I appreciate your take, Paul, but I'm not sure I agree.
In your book, you say early on something like (paraphrasing from memory) that the meaning of life is to take on as much suffering as you can bear. I think that goes too far. I think we can have a meaningful life and make the world a better place w/o suffering. IMO, people should read your pal's books "The Moral Animal" and "Why Buddhism is True."
Or, much shorter:
https://mattball.substack.com/p/a-meaningful-life-2-minute-version
Thanks for your post - appreciate your thoughtful takes.
Of course this also explains the rise of cults and conspiracy theorists. They provide a shortcut to significance without the work of doing it properly. Everybody wants to be a player.
And the fact that AI 'knows everything that is knowable at any moment' is irrelevant to me if I don't know it. There is, for all intents and purposes, an infinite world of possibilities to be explored within every field of science, including every aspect of every species living and extinct, and across the universe on all scales, using VR and AR to explore deeply, as well as across every field of human endeavour.

Despite AI being able to beat every human at chess, people still compete with each other. People will still get pleasure from mastering a skill, and others will get pleasure from watching the masters at work, whether in sport or any other field of human endeavour, because we will appreciate how rare and difficult mastery is. And we have the opportunity to use AI to update our mental models.

Some say AI will get to the stage where it knows things that humans cannot know, as we know more than any ant can possibly know. But this overlooks a fundamental difference. AI is learning using language and maths, so even if it creates its own language and maths, it has a path back to us: it can express new concepts in human language in the same way that a baby goes from knowing nothing to one day becoming a world-class physicist, by building up concepts bit by bit. AI can do the same. We may not be able to do the calculations at speed or see the patterns initially, but if AI really is superintelligent, it should be able to work back, because it will know its roots.

And we will still enjoy gardening for gardening's sake, and many other things that AI and robots can do just as well... but the joy will be in the doing.
Paul, why is it more icky to rewire your brain directly than to slowly outgrow, e.g., possessiveness, the need for status, or other things that clearly cannot be the endgame of human well-being?
Why should we be stuck with our (from our current point of view) arbitrarily developed brains as a starting point, to then slowly change into slightly or somewhat different brains, rather than trying out more esoteric or "extreme" brain states, akin to, e.g., deep psychedelic/meditative experiences of endless love for everyone without the need for status?
In short, why, in a hypothetical future full of wondrous possibilities, should we be more or less stuck with our ape brains, occupied with silly ape things, just because it might seem icky to change them radically (you know about the status quo bias, I'm sure)? You could even do trial periods with different consciousness-constellations.
This is a key statement imo:
“And so one person’s satisfaction is contingent on the choices of another.”
We’re going to integrate AI into our biological systems, so our satisfaction becomes contingent on the choices of, what, a computer? This is what I see, so it may be due to my not fully understanding the goal.
My point is that we have to know ourselves first. How can we know what we want, what will bring us happiness, when we’re looking in the opposite direction?
Ironic. I remember asking a certain colleague at the University of Arizona "How could you not want to help in a situation like this?", and being dumbstruck when he coolly replied, "Easy. I don't think about you at all." Not surprised when he wrote a book called Against Empathy. Very surprised that he now argues that it is a basic human need to feel you matter.