Here is a short talk I gave early this summer in Wales, at the HowTheLightGetsIn Festival. I had a wonderful time, and I particularly enjoyed the engaged and enthusiastic audience.
Discussion about this post
Thanks, Dr. Bloom. The people who make a difference are those who ask "How *can* things be?" not "How *should* things be?"
E.g.
https://www.mattball.org/2025/08/socialism-and-capitalism-and-history.html
This is a great talk with very intuitive points on the unattainability of true equality (I am a little bit disappointed that the communist states are not mentioned as examples of failed equal societies). It is fascinating to consider how that will evolve with the potential for personal AI, where an AI agent could act as an effective extension of you in the world of other agents. This is not a "doomer's" scenario, but it does raise big questions. Our evolutionary need for relative advantage in intelligence and on many other fronts would become tied to the power of the AI we use to represent us. I mean, I want mine to be smarter than yours and be able to manipulate yours for my interests.
Are we evolutionarily prepared for this? I wonder whether experience with guns or other means of physical subjugation prepares us for it. We can relate to physical threats, but how adaptable are we to an intelligence inequality we may not even comprehend? And how much of our innate competitive drive will we offload to these AI agents? What would that mean for the future of both human and AI evolution?
Of course, this all assumes we are free to choose our agents, rather than a well-meaning state or technology company deciding for us how "equal" we shall be.