Monday, February 6, 2006
- The Singularity will come thirty years from now, for any given value of "now"
- Mastery is a function of intimacy with failure
- "We are all in Hell and there is no way out" is not repulsive.
- "Foolish, well-meaning people may go to Hell simply by inattention to their relationship with God" is not repulsive.
- Only "foolish, well-meaning people may go to Hell simply by inattention to their relationship with God, but so what? We're okay" is repulsive.
- Safety third! 1
1: Burning Man saying, via Glynda Cotton
Posted by benrosen at February 6, 2006 11:06 AM
I enjoy your aphorism interludes. I do my bit to spread around "the Singularity happened in 1494" and was just thinking the other day about what extrapolations you can get from "progress requires scarcity; scarcity can be manufactured."
Thanks for another batch.
So what extrapolations did you come up with?
Both Vernor Vinge and Ray Kurzweil -- who use "Singularity" to refer to an occurrence that is actually singular, and not one of many -- give specific timeframes in which it will take place; Vinge says by 2030, Kurzweil says by 2045.
And Dan, I don't think you need to manufacture scarcity. There will always be a scarcity of status.
Is your journal supposed to be 9 months ahead of us all? Are you trying to tell us something, Future Man?
They do indeed, Ted, and I'd like some of what they're smoking. :-)
Put more politely, the aphorism's prediction is that in 2030 you will name two extropian luminaries, and their specific timeframes for the Singularity will be 2055 and 2070.
I think it's likely that there will always be a scarcity of status, but that's also in some sense manufactured. There are contexts in which status is abundant; egalitarian, non-hierarchical, small groups, for instance, in which everyone's diverse accomplishments are prized. I'm sure you've been in contexts like that; I know I have (Wiscon comes to mind).
There's a big difference also in the degree of status scarcity between Switzerland and America. America is about winning big, and it often seems everyone who isn't a tycoon/bestselling writer/rock star/politician/etc is simply waiting for their opportunity to become one -- or resigned to their low status, but experiencing it as a lack. In Switzerland there's much more the sense that each profession is its own, separate status world. Another aphorism would be "there are no waiters in America" -- only rock stars, actors, etc waiting tables. But there are waiters in Switzerland, with status as waiters. Partly this is due to the education system (about which I could talk at length, but I'll spare us), and partly to the fact that the top in Switzerland is simply so much lower; it's such a small country that the biggest rock star in Switzerland probably eats at the same restaurant you do (and only the top three rock stars in Switzerland can make a living at it anyway, and it's about a doctor's living).
I'm not claiming there is *no* status differential in Switzerland (or at Wiscon); nor even that there is no status scarcity (there can be a differential without a scarcity); merely that the fact that different environments have different degrees of status scarcity implies that that scarcity is not inevitable (though biology may weight us towards it).
Just to be clear, it's not that I think none of Vinge's and Kurzweil's predictions will ever come to pass, though I think their timelines are very optimistic; rather, it's that I think things will tend to always seem smooth and continuous to participants. Only from a historian's perspective is there a "knee of the curve". An exponential curve does not actually *have* a singularity -- there is no x for which y is undefined or infinite, in y = n^x. Things change, then they change some more, then they change some more. The body is malleable but ineluctable, because "the body" is really "that which is us, but beyond our will's control".
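The mathematical point is easy to check directly. Here's a minimal Python sketch (my illustration, not anything from Vinge or Kurzweil) contrasting an exponential, which is finite for every x, with hyperbolic growth, which really does have a singularity at a specific point:

```python
import math

def exponential(x, n=2.0):
    """y = n**x: finite for every real x; no singularity anywhere."""
    return n ** x

def hyperbolic(x, T=10.0):
    """y = 1/(T - x): genuinely singular, undefined at x = T."""
    return 1.0 / (T - x)

# The exponential gets big, but there is no x where it is undefined.
for x in [0, 10, 100, 1000]:
    assert math.isfinite(exponential(x))

# The hyperbola, by contrast, blows up as x approaches T and is
# undefined at x = T itself -- which is what "singularity" means.
try:
    hyperbolic(10.0)
except ZeroDivisionError:
    print("undefined at x = T: a true singularity")
```

From inside an exponential curve, each moment just looks like more of the same, faster; only the hyperbola gives you a date you could circle on a calendar.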
See, this is what I like about the aphorism interludes. Conversation! (But let's not revisit the 'define conversation' thread just yet.)
Promoting "'the body' is really 'that which is us, but beyond our will's control'" to aphorism status, can we then tie in some formulations from that last big thread of contention (you know, the one that prompted you to add the postmodern materialist option to the comments submission form) and say that the existence of other wills puts an upper bound on extropian transcendence of the body? Or, shorter, if the Singularity is the Rapture of the Nerds, then Nerd Hell is other people?
Getting back to your question, I'm afraid that, much like with the Fermi/Wittgenstein aphorism, I've managed to get myself stuck on definitions. Is there just one Scarcity, or are there individual scarcities? If so, does the aphorism hold true for any particular scarcity? The idea of currency is to average local and specific surplus and scarcity over time and space, leaving only systemic scarcities. Not that it works, but.
My impression of the scarcity aphorism is that the scarcity necessary for progress must exist at the scale that you want the progress. Scarcity of status may be sufficient spur for innovation for individual people or sub-groups of a population, but there's no lack of status at the level of a whole society.
Oh, fudge. I just trapped myself into having to prove that status is at least zero-sum, and I haven't even wandered into the definitional morass that is Progress.
While I take your point about the knee of the curve, I think there's something to be said about the absolute scale imposed on the time axis by the length of the human generation, which has a pretty solid lower bound (pre-Singularity, natch) and is currently trending longer in developed countries. With the time axis more-or-less fixed, the relative scale of the change axis becomes meaningful.
Is the unmodified human brain infinitely adaptable? I would guess not, which means that at some point we'll hit one or more of three options: human resistance to change will put a brake on the exponentiality of the subjective progress (there's that word again) curve, brain augmentation will become necessary to participate in the most-changed cultures, or the Singularity will not be (or was not, in 1494) as notable to human beings as to some human-created or human-constituted process.
Just to be clear, it's not that I think none of Vinge's and Kurzweil's predictions will ever come to pass, though I think their timelines are very optimistic; rather, it's that I think things will tend to always seem smooth and continuous to participants.
You're conflating two different things. Your aphorism asserts that the Singularity is like the horizon, something relative to the observer and having no intrinsic distinguishing traits. Vinge and Kurzweil are not using the term that way; they are referring to the creation of a superhuman, self-improving machine intelligence, an event whose date can be fixed in absolute terms. Saying that Vinge and Kurzweil have got their dates wrong is different than saying that the Singularity will not be perceptible to those living at the time.
(There's also the idea that there are multiple Singularities, like the invention of the limited-liability corporation and the steam engine and the internet. This is different from the aphorism's assertion, which -- if taken literally -- suggests that any given moment can be considered a Singularity, and not just the dates of certain influential inventions.)
the fact that different environments have different degrees of status scarcity implies that that scarcity is not inevitable (though biology may weight us towards it).
Well, how much really is inevitable? I don't think that walking upright is inevitable -- feral children don't do it readily -- but I wouldn't say that the desire to walk upright is manufactured.
Pretty much all social mammals have differences in status between individuals, and competition for positions of higher status within the group. So I don't think that the human desire for status needs to be manufactured, although it can certainly be increased or decreased by societal forces.
Vinge's certainly talking about the emergence of such an entity, in his original paper on the Singularity. He's talking, at least in the initial parts, about a "hard-takeoff" singularity, something that happens in a few years, months, or "perhaps in the blink of an eye": a machine (or cybernetic, or biological-posthuman) intelligence is created, and quickly creates a better machine/etc intelligence, which creates an even better one, an intelligence which "would not be humankind's 'tool' -- any more than humans are the tools of rabbits or robins or chimpanzees", so that "the human era is ended".
It's worth noting that his prediction was not "2030" -- it was "I'll be surprised if this event occurs before 2005 or after 2030". So we are already in fair-game territory for the Vingean Singularity.
I think this is a seminal, critical, beautiful essay. It originates one of the notions which has been most inspiring to me as a science fiction writer. I also think it's largely bunk. (It's brilliant the way a brilliant, well-intentioned, unwitting Le Guin villain is brilliant -- someone with great intellectual talents for whom certain things are simply invisible.)
Perhaps a Vingean way to say this would be that we live in the Slow Zone -- one reason I think that A Fire Upon the Deep and A Deepness In The Sky have a much richer, deeper, truer take on the Singularity than his paper does (a very common phenomenon in my experience -- we are often led by the structure of writing fiction to say much truer things than we can speak plainly).
What Kurzweil are you referring to? What I've read of him is The Age of Spiritual Machines, and my reading of that is that it does *not* suggest a hard-takeoff, singular-event, blink-of-the-eye Singularity, but rather precisely the kind of exponential growth I'm talking about where there is no undefined y for any x. The book is full of funny little interludes where Kurzweil interviews a fictional character living at various different decades. By 2100 she has merged with the entity which in 2010 was her PDA and in 2080 her lover, or something like that. Kurzweil keeps asking her things like "but hasn't some cataclysm transformed anything now? Aren't you slaves or emperors now?" and she keeps saying "Ha ha -- whatever. Shit, I'm so behind on my work, and I have all these patent litigations I'm embroiled in... sigh. I'll have to instantiate several extra bodies this weekend or I'll never get *anything* done..."
To her, it looks like a smooth curve. She didn't notice the Singularity. If anything she's still looking forward to one, even if her specific criteria might be different from the ones Kurzweil would offer if forced to pick a date.
I'm familiar with the multiple Singularities mostly from Charlie Stross, and as I recall they were something like spoken language, written language, agriculture and pastoralism, modern capitalism, digitalization, and (coming soon to a universe near you) P=NP. The issue here is not "influential inventions" -- it's ratcheting technologies creating irreversible changes in modes of human life. Once you have integrated one of these technologies fully into your society, you can't abandon it without killing off 90% of your population.
But dating them is arbitrary. At the moment that early Fertile Crescent merchants started doing preliterate inventory tracking with symbolic ceramic items, or that someone symbolized *those* into marks in wax or clay, or that someone started broadening the use of them from pure financial recordkeeping to make more general notes in the margins, perhaps choosing a set of the symbols to represent a human's name, or that the king decreed that the set of symbols representing *his* name be engraved on a tablet, no irreversible change had yet occurred. The wheel and the steam engine were not on Charlie's list, but Incan wheeled toys and Greek trick steam engines show that those weren't irreversible singularities at the moment of invention either.
Much like the digital age, the integration of such technologies into society is a long, slow, locally random process. And these singularities (the aphorism asserts) overlap.
The metaphor here is not Singularity as asymptote, a moment which changes everything (that thing which no exponential curve has one of). It's really more the Schwarzschild radius than the center of the black hole -- it's the point at which you can't go back.
So that's one sense in which the aphorism might be true (and you do know, right, that I come up with the aphorisms first and then figure out how they're true after you all pick on them?) -- there is always some irreversible technological change in the process of being dreamt of, iteratively developed, integrated and consolidated. It's only on history quizzes in school that fundamental technologies have singular inventors and dates.
So let me go do some day job work now, and then I'll talk some about why I think we live in the Slow Zone.
the existence of other wills puts an upper bound on extropian transcendence of the body
Sure, as does the second law of thermodynamics (and "Nerd Hell is other people" works well aphorismically). But it's not just that; that's not really what I'm talking about. Even all of *you* cannot be subject to your will -- not if you are interesting.
Vinge quotes Good:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever.
Do those intellectual activities include dreaming, babbling, ranting, getting the joke, surprising oneself with one's own ideas, wrestling with moral ambiguity, being someone worthy of trust?
If they do, I would offer the notion that you need a dark self, a hidden self, a self you don't know and can't master. I think building such a self will be a crucial breakthrough (or else an unexpected side effect) in the project of building creative machines, machines that are interesting to talk to, machines we can trust and love the way we trust and love one another.
To the extent that that's an odd notion, it shows how far we are from building such a machine, and how much more there is to do than simply gather together the processing power.
But actually it's not all that odd: just look at the move away from propositional logic and towards emergent systems like neural nets. I think it's safe to say that if a neural net has something we would recognize as a will, it will also have in it that which resists that will.
Ted, I'll concede that scarcity of status is the default, and you don't need to go to any particular trouble to manufacture it.
It's still worth noting that scarcity can be manufactured, though, since we live in a world where most scarcity is manufactured. Scarcity of iPods, vacations in Cancun, frappuccinos, kiwi fruit in Canada in January, international fame -- these scarcities are acutely felt by huge numbers of people, who act on these feelings daily -- and not only are these scarcities manufactured, there is an enormous amount of economic effort invested in maintaining them.
I'm actually not sure any more how I feel about "progress requires scarcity". Maybe it's only true in the tautological sense that you cannot "progress" towards something you don't have a scarcity of.
(Let's say you run a mile in seven minutes.
Would being able to run a mile in six minutes be progress?
Let's say you eat lunch in seven minutes.
Would being able to eat lunch in an hour be progress?
Doesn't it depend what there's a scarcity of?
But is this "scarcity" in the same sense?)
It's worth noting that his prediction was not "2030"
I know; that's why I said "Vinge says by 2030," not "in 2030."
I think this is a seminal, critical, beautiful essay. [...] I also think it's largely bunk.
There's no denying that it's been an enormously influential essay, but I have to admit that I didn't recognize it as such when I first read it. My first response was closer to, "He is so far off base that he's not even wrong." (Quoting the perhaps apocryphal story about Wolfgang Pauli.)
What Kurzweil are you referring to? What I've read of him is The Age of Spiritual Machines, and my reading of that is that it does *not* suggest a hard-takeoff, singular-event, blink-of-the-eye Singularity,
Again, I think you are confusing two different usages. Kurzweil doesn't say that the Singularity will be a hard takeoff or a blink-of-the-eye event. He just says that by the 2040s, we will have undergone a profound transformation that he calls the Singularity.
and you do know, right, that I come up with the aphorisms first and then figure out how they're true after you all pick on them?
Nowadays I figure that you only believe half of what you argue, and the specific half changes constantly. :)
It's still worth noting that scarcity can be manufactured, though, since we live in a world where most scarcity is manufactured
I understand what you mean, although I'd phrase it a little differently. The scarcity of, say, vacations in Cancun is not artificial in the sense that there's plenty of unused capacity in Cancun which certain people are closing off in order to make Cancun appear more attractive. (I don't know if Apple could make more iPods if it wanted to or not; if it could, then that is what I would call artificial scarcity.) But I agree that a vacation in Cancun is something that many people would not desire if it weren't for all the advertising, etc. that they see. So I might say that the desire is manufactured, but the scarcity is real. There is simply not an infinite amount of Cancun to go around.
(Let's say you run a mile in seven minutes. Would being able to run a mile in six minutes be progress?
Yes, if I'm trying to compete in a race (for example).
Let's say you eat lunch in seven minutes. Would being able to eat lunch in an hour be progress?
No, if I'm in a food-eating competition. (Again, just an example.)
Footraces and food-eating competitions are very pure examples of competition for status; few participants would claim that the experience of running a six-minute mile is intrinsically more pleasurable than running a seven-minute one (whereas people do claim that, say, taking a vacation in Cancun is more pleasurable than staying at home). Money may be involved, but money is hardly the primary motive. Lots of people just like beating other people in competition.
My first response was closer to, "He is so far off base that he's not even wrong."
What did you think was [not even] wrong? And has that changed?
He just says that by the 2040s, we will have undergone a profound transformation that he calls the Singularity.
Sure. The issue, though, is whether profound change is continuous or discontinuous. The aphorism doesn't say "there's no such thing as a singularity in the sense of a profound, irreversible transformation". It says "we are in the midst of a profound, irreversible transformation and have been since about 500,000 B.C."
I don't think I'm the only one conflating the Singularity as "we build intelligent machines", the Singularity as "human life will never be the same again", and the Singularity as "humans become increasingly irrelevant in a world dominated by creations of their own far beyond their comprehension". I think the equation of those statements is the point of Vinge's essay. Since I think that we will build somewhat intelligent machines this century -- ones that will pass a moderately difficult version of the Turing test; that the intersection of machine intelligence, the malleability of the self and the body and the mind, and the computational nature of consciousness will work vast, unimaginable, far-reaching transformations on our world; that "human life will never be the same again" is pretty meaningless, and that in general 2050 is not going to look more incomprehensible from 1950 than 1950 did from 1850 (and that 2005 does not look more incomprehensible from 1905 than 1905 did from 1805); and that "humans become increasingly irrelevant in a world dominated by creations of their own far beyond their comprehension" is, honest to God, a process that became irreversible in 1464, that it is largely true already, that our supposed tools of social organization are actually our masters, and that automating them using silicon and electricity, rather than work rules, is an evolutionary rather than revolutionary change... given that all that's the case I think I'm if anything trying to de-conflate the Singularity.
About how we build intelligent machines, Kurzweil may be more or less right (though optimistic); Vinge is wrong. There are processes that will drive adoption of increasingly sophisticated uses and constructions of machine intelligence, and we will ultimately merge with our tools even more than we already have. "They wake up" is a poor metaphor to describe this process.
Some things that are wrong with hard-takeoff singularity:
1) A machine that is smarter than you are is not necessarily good at, nor interested in, building machines smarter than it.
2) "Smarter" is anyway not a single variable, or even, probably, a meaningful set. Being smart in some ways may preclude being smart in others. An otter is not smarter than a rock in the way a human is smarter than an otter, and an MIT grad is not smarter than a football player in the way that a human is smarter than an otter either.
3) Intelligence, in the sense that more processing power helps with, is of very narrow utility, and only correlated with evolutionary fitness in a narrow environmental band.
4) "Master" and "servant" are actually not roles that describe coevolution particularly well. Aphids are not really the servants of ants. Aphids shape and control ants just as surely as ants shape and control aphids. It's mostly a parochial evolutionary quirk of male primates to construe everything in terms of dominance hierarchies.
5) Our tools constraining and controlling us is nothing new.
6) Evolution does not drive the creation of "higher", "more dominant" forms of life -- it drives the explosion of diversity into available niches. Breakthroughs in evolution tend to cause colonization of new territories, and may not be well correlated with mass extinctions.
7) Reason is only a small part of learning; the speed of learning brand new things is largely constrained by the time required for cycles of experimentation. Learning can be hampered by insufficient intelligence, but not arbitrarily speeded by abundant intelligence. Intelligence in the sense of complexity, generality, and a high degree of organization may hamper learning.
What did you think was [not even] wrong?
Basically the same sorts of things that you describe in your objections to a hard takeoff. One specific example that comes to mind is Vinge's assertion that a Ph.D. using a computer could max out any IQ test in existence. Not only do I think that a computer is of little use on an IQ test, but I think IQ tests are a poor measure of practical intelligence.
The issue, though, is whether profound change is continuous or discontinuous.
All I'm saying is that Kurzweil apparently feels that the change occurring in the 2040s will be discontinuous enough to warrant a special name. I'm not going to try to defend his position.
I don't think I'm the only one conflating the Singularity as "we build intelligent machines", the Singularity as "human life will never be the same again", and the Singularity as "humans become increasingly irrelevant in a world dominated by creations of their own far beyond their comprehension".
True; these different senses of the word are widely taken to be interchangeable. I just happen to disagree with that practice. I think discussions will be more productive if participants specify the usage they intend.
(Digression: you mention evolution a few times in your objections to a hard takeoff. Coincidentally, I made some similar comments about evolution and intelligence in a discussion here.)
Disappointingly (for further development of the argument) we seem to agree about the Singularity; we only disagree about the usage of words in aphorisms (rigorous vs. provocative).
Cool discussion on evolution. I've commented there.
Let's say you eat lunch in seven minutes. Would being able to eat lunch in an hour be progress?
No, if I'm in a food-eating competition. (Again, just an example.)
Right -- but it would be, if you were negotiating for a longer lunch break.
You're making a distinction between intrinsic, real-world, physical scarcity, and perceived psychological scarcity or desire ("I don't have enough vacations in Cancun in my life"; "I don't have enough time").
I think that's a common -- almost universal -- commonsensical distinction, and operationally useful in many contexts. But I think that if you really look at it closely, the notion of real-world, physical scarcity in most contexts middle-class people use it in -- of iPods and Cancun vacations and time in your schedule -- falls apart.
True, there isn't an infinite amount of space in Cancun -- though actually, it's well within the technological capacity of the human species (excluding for a moment the requirement of technologies to manage desire, production, and social organization) to build some really freakin' big skyscrapers and some long automated supply chains, do some landscaping, and let everyone in the world live on a beautiful beach and sip pina coladas in a beach chair nine hours a day. There's enough coastline. We just park the beach chairs close together.
But even refraining from that kind of thought experiment -- so there isn't enough Cancun for everyone. There also are not enough gold-plated lampshades for us all to wear on our heads. Is that a "real", physical scarcity? My examples are glib, but you can't really, practically separate the notion of scarcity and abundance from notions of desire.
In fact, we (specifically those of us who are reading these words, on the internet) live in a world that people a thousand years ago would perceive as one of unimaginable abundance, and we mostly perceive it as a world of scarcity.
I am not blaming this all on advertising. I think it's a natural human tendency -- the more we have, the more we feel we need.
It's interesting, though, that there are two possible kinds of technologies for addressing scarcity; ones that have people perceive less scarcity and more abundance, and ones that create more of the physical goods and experiences that we associate with abundance.
If you don't like scarcity and abundance as terms -- I don't want to get into a quibble -- we could use "deprival" and "satiation".
Technology can produce more satiation either by producing more stuff to be consumed, or by taming our appetite for consumption. It is interesting to note that the vast majority of our technologies are focussed on the former, even though only the latter is ultimately effective. And this is a phenomenon specific to our culture (broadly defined) -- maybe even, really, the salient distinguishing feature of our culture, the sense in which it's the same culture from Tokyo to Peoria to Oslo.
So no one wants to argue about the other three aphorisms, huh? :-)
Disappointingly (for further development of the argument) we seem to agree about the Singularity;
We just need to bring a transhumanist over here, and then sparks will fly. :)
There's enough coastline. We just park the beach chairs close together.
Do you really think we could transform all the world's coastlines to resemble the beaches of Cancun? I'm skeptical. Beautiful tropical beaches are usually cited as one of the things that will remain scarce even in a world of cheap nanotech. But this probably isn't worth arguing over.
Technology can produce more satiation either by producing more stuff to be consumed, or by taming our appetite for consumption. It is interesting to note that the vast majority of our technologies are focussed on the former, even though only the latter is ultimately effective.
What do you have in mind as examples of technologies that tame our appetite for consumption? Do you mean things like the Swiss educational system, or something else? And how do we know they are "ultimately" effective? Has the jury returned a final verdict yet?
Hmm, the Swiss educational system is an interesting example... but I was really thinking of things like the Eastern preoccupation with nonattachment, and techniques (like meditation) devoted to reducing attachment to worldly things. There are Western techniques too, from monastic and devotional to Stoic to psychological and pop-psychological ones.
They're of varying effectiveness, and while I'd argue that they are in fact at the present time at least somewhat effective (as opposed to, as I would expect Nick Mamatas to argue, purely cruel delusions -- really we need Nick *and* a transhumanist to really liven this up), I don't think any of them is "ultimately" effective at present.
What I meant is more, what is potentially effective in principle. The goal "satiate all desire" is not achievable in principle in a finite universe; the goal "learn to love what you have" is, for a perhaps surprisingly broad range of values of "what you have".
What I'm really arguing is that desire is not naturally self-satisfying -- there is not a natural level of stuff where, if you could get that level of stuff, you and your heirs would be happy -- but that desire is malleable, in that you can be educated, incented, led, to desire one thing or another, more or less.
All human cultures have devised technologies for both "how to get more stuff" and "how to be content with this stuff"; our culture seems to me to be heavily weighted towards the former, and this seems intimately related to the spread of the modern world. In some sense the modern world is nothing more than an aggressive and successful program for creating material abundance, experienced as scarcity.
I'm thinking of the contrast between Le Guin's future worlds, which are very calm, and where people live in villages and die of old age quite regularly and so on, and occasionally invent wormholes or travel to distant stars when they feel like it, they just don't feel like it that much. One gets the sense they would be probably capable technologically of living forever, covering their planets with cities, and so forth; they just think that would be a dumb idea. In contrast to most sfnal futures, they seem to have invested some energy into learning satiation, as opposed to just production.
And, to put it more simply, here's one of the central concerns of my fiction:
I desperately want to live forever.
But I don't want to want to live forever.
I want to learn how not to want to live forever.
I DID comment on the Hell aphorism, but your blog didn't save it for some reason. Maybe I just previewed it...
"There's no such thing as Hell, but to the extent that you believe in it, you're making it that much more real for the rest of us, so knock it off already!"
Repulsive or not?
You may substitute the Singularity for Hell if you so desire.
Which leads me to the slightly more serious question of what do V and K think of the Singularity? Is it inevitable, now that we've reached our technology level? Is it a goal that we as a culture are working towards? Is it forestallable? Like, what happens if there's a Buddhist uprising, and Ben's "taming our appetite for consumption" happens? Would the Singularity be null-and-voided? Merely postponed?
Also, why is it that only make-believe things have capital letters? Hell, Singularity, Ben...
Not repulsive, though perhaps goofy. Charmingly goofy.
Sadly, as problematic as I find the notion of a hard-takeoff Singularity, I find it more likely than a Buddhist uprising leading to our appetite for consumption being tamed.
Actually in Stross's short story "Antibodies", Singularity-created ruling AIs from Buddhist-dominated timelines are slightly nicer. :-) But in the real world, historical Buddhism has been perfectly compatible with both technological progress and chopping people's heads off with nice shiny swords.
Go read the V essay for what he thinks (or thought back then). Kurzweil thinks we are all uploaded, merged with our machines, living in a nanoengineered infinitely mutable world by 2100, and sometime thereafter defy all laws of physics through sheer cleverness.
Um, isn't discussing the timeline of the Singularity somewhat goofy in its own right?
You're just lucky I didn't say Safety fifth!
Seriously, though, I think that cultural reality is constructed by its participants, and that if enough people believe in Hell, then there will be a form of Hell in reality, even if it's only the agony of strict rationalists being compelled to read Discordian babble. To the average 14th century German peasant, it was absolutely in no way repulsive that "foolish, well-meaning people may go to Hell simply by inattention to their relationship with God, but so what? We're okay." My aphorism would be burn-at-the-stake-worthy, not goofy. As would your aphorism that his deeply held belief was repulsive. Since the reality of that 14th century German peasant is equally present to goddess as the reality of the 21st century Discordian, his beliefs have as much (and perhaps more) of an effect on our reality as, say, Vernor Vinge's.
Personally, I find the notion that "Foolish, well-meaning people may go to Hell simply by inattention to their relationship with God(dess)" just as repulsive as the ridiculous "We are all in Hell and there is no way out."
But so what? Cancun's okay.
I'm not sure how you average over 14th century peasants; certainly the 14th century was full of remorseless people who seemed to take satisfaction in the damnation of their enemies. But I think the callousness of "I don't care if you go to hell" wouldn't have been apparent to, at least, the 14th century peasants I'd like to hang out with. Even if slaughtering infidels may have taken precedence practically over saving souls, the ideal of saving souls was very real, and indifference to the damnation of others surely counted as a sin. I expect your average 14th century peasant would at least have been less callous in this regard than your average 14th century knight or prelate.
The aphorism's insistence that indifference to the damnation of others is wrong would not be the slightest bit heretical in the 14th century (its relative acceptance of existentialism might be more so, but even there, with a good ecclesiastical lawyer I think I could get off, as I am not arguing that existentialism is not an *error*, merely that despair is not in and of itself *repulsive*).
Yes, arguing over the date of the Singularity is deeply goofy.
You seem offended that I have labelled your Discordian belief as goofy; I did not mean to offend. I was under the impression that Discordians considered "goofy" to be a compliment, in the context of religious doctrine.
That Goddess embraces all worldviews equally is comforting to me on some level, but not particularly germane to what I believe. She is presumably equally present to Adolf Hitler's worldview; that does not deter me from censuring it. I have a different job than Goddess does.
There's nothing goofy about saying "everyone's worldview has an effect on everyone else". There's only something goofy about saying "please stop thinking what you think, it's bugging the rest of us". It's not reprehensible or even wrongheaded; it's goofy because, first, it's not going to work, and, second -- unless I misinterpret you -- it retreats from engaging with other people's ideas, hoping only that they will go away.
So why do you find my off-the-cuff summaries of C.S. Lewis's and Samuel Beckett's worldviews, respectively, repulsive? As I said on Jed's blog, I think they are honest efforts to make sense of the world. I have sympathy for someone who believes in a world with huge costs, or inevitable suffering, or where human effort is futile. You may not find those worldviews attractive, but I don't think there's anything reprehensible in holding them. Maybe the world is like that; it behooves us to consider it.
It's only being smug or complacent or indifferent about other people's suffering that horrifies me -- not fearing it, or believing in it.
I'm not saying that goddess embraces all philosophies equally, but that they affect her equally. Adolf Hitler, and people who are smug, complacent, or indifferent to other people's suffering, have an equal effect on goddess. I just hope there are enough Jains out there to counteract them. Whether goddess has a personality of her own, I don't know, but her experience is the sum of the universes, so how she'll end up being depends in large part on how we all are.
That said, I think the conflicting moralities of a few billion humans will not weigh terribly on her soul, given the zillions of insect, electro-magnetic and silicon souls that she also experiences. And that's just on Earth. Since she experiences the birth and death-throes of a star equally to the birth and death-throes of Hitler (for instance), I think her role will end up being more one of "observer" than of "participant," but what do I know?
My repulsion is in the idea that there is a Hell, not in any particular view of it, per se. That said, my response this afternoon was glib and in the spirit of being offended (not about being called "goofy," which I am, but about having my "goofiness" be used to dismiss my notions without looking at them), and I really agree with your aphorism: that what is truly repulsive is indifference to others' suffering.
Again, that said, a species only lasts (typically) four million years or so. That gives us a nice long time to enjoy thinking that we will have any lasting effect on the universe, but it's hubris to expect that we'll get any more than that.
Which brings us back to the Singularity. If we become wired to the point of being able to break the laws of physics through sheer cleverness, in effect becoming little gods, I expect that the population of such creatures would dwindle. I expect the Singularity will come for a privileged few, not for everyone. I don't believe godhood will ever be ubiquitous. The question would become: how many gods can a universe support? And would we still be human?
Of course, if it WERE "godhood," and it WERE ubiquitous, I'd imagine everyone would be able to go off and create their own universe and play happily in their own sandbox.
Probably the Big Bang happens a lot; whenever a race achieves Singularity.
"Actually in Stross's short story "Antibodies", Singularity-created ruling AIs from Buddhist-dominated timelines are slightly nicer. :-) But in the real world, historical Buddhism has been perfectly compatible with both technological progress and chopping people's heads off with nice shiny swords."
I note here Walter Jon Williams' "Prayers on the Wind", which posits a self-aware 'library/computer' (functioning as a literal mind of Buddha) that incarnates itself as a human Bodhisattva. And, yes, positing reincarnation, chopping people's heads off with swords can be a humane act (if by 'humane' you mean ultimately leading them to nirvana and/or buddhahood). Wonderful story.
7) Reason is only a small part of learning; the speed of learning brand new things is largely constrained by the time required for cycles of experimentation.
Wait, are you trying to say something like, "mastery is a function of intimacy with failure?" :)
"the existence of other wills puts an upper bound on extropian transcendence of the body"
Sure, as does the second law of thermodynamics (and "Nerd Hell is other people" works well aphorismically). But it's not just that; that's not really what I'm talking about. Even all of *you* cannot be subject to your will -- not if you are interesting.
Well, I did say "upper bound." Interestingness, though desirable, is explicitly not one of the extropian stipulations.
I continue to be curious about your idea of a dark self. By calling it a self, do you mean to identify it as possessing all the qualities we usually think of in a self, in addition to being unknown/dark?
After a long period of living closely with another person, I've noticed that a significant part of the processes of my self have become joint processes, to the point that I function quite differently after a few days apart. We're a tightly coupled dynamic system. From this, I imagine (perhaps unfoundedly) that all interaction with other selves involves some degree of systemic coupling, which in aggregate might be sufficient to guarantee some degree of unknown, uncontrollable self. No?
This doesn't deserve a drive-by, but a drive-by is all I have time for at the moment. So more later, maybe.
1. A singularity without a discontinuity isn't a singularity. It's just an event horizon.
2. Bǎi huā qífàng, bǎi jiā zhēngmíng.
3. Tout est au mieux dans le meilleur des mondes possibles.
4. What and what come first and second?
2. Okay, I give. What does that mean?
3. Pangloss was French?
4. Opinions differ.
2. "Let a hundred flowers bloom, let a hundred schools of thought contend." Intimacy with failure sounds to me like intimacy with experimentation, viewed from the pessimistic end.
3. German, wasn’t he? (But Voltaire was French.) My point, though, was that the difference between this and “we are all in Hell and there is no way out” seems, again, to be mostly one of perspective and emphasis.