Tuesday, April 12, 2016
Matrifocal Pillow Talk, and an Anti-Extropian Toolbox
So the novel is out, making the rounds (if you feel like you might want to represent it, let me know...) and I am getting back into the short story groove. I wrote a first draft that looked like it was going to be matriarchal (or at least matrifocal?) sword-and-sorcery or anthropological sf or maybe what a tonally flipped (sunny, sex-positive, anti-grimdark) Game of Thrones would be, but then it turned out to be just 4000 words of pillow talk, at least so far. (I sent it to Mary Anne to look at because that seemed like her wheelhouse?)
And then I started kicking around ideas with Mr. Moles, like back in the good old days:
D: I'm put in mind of something Susan pointed out some years ago about how extropianism is rooted in the denial of the body / (which I think is something ann leckie almost directly goes at in the ancillary books)
B: I feel like that's been one of my hobbyhorses for a decade
B: it was the animating principle of Resilience for a while, although it kind of got lost to some extent by the final draft
D: this recurrent geek fantasy of the mind as this computation engine unfortunately encumbered with a body and particularly with an endocrine system... I always like Minsky's thing about consciousness as debugging trace. Like the endocrine system and the so-called hindbrain, that's where the real action is, everything else is epiphenomenal.
B: sure. but to say that we don't end up in a Teranesia style math utopia -- to say that consciousness is ineluctably finite, embodied, and mortal, whatever-that-means... to say that we can't have strong AI and have it be "us", that we can never use it as a shortcut to immortality...
D: Well, from a philosophical viewpoint, I tend to think matter is matter.
B: ...all that doesn't say that we don't end up with something far more malleable and strange than what we have now. I have a whole arsenal of fictional arguments to deploy against the Singularity and extropianism that I've been marshalling for years, but I'm just saying there are a series of choices to make here...
Anyway, I thought you all would like to see the arsenal.
Ben's Anti-Extropianist Toolbox
- "Embodied". Like, all learning is situated in a specific context in the world; intelligence does not, and cannot, operate by analysis of things-in-general and logical operations on same -- that problem is quickly computationally intractable -- and also the approach is otherwise self-contradictory and wrongheaded. You can pretend something has a body, but then you've replaced the problem of "intelligence" by the far harder problem of simulating the world.
Now, "the body" can be anything. But it is that which is us, but is not subject to our will: the body always rebels.
Also: "embodied" and "situated" mean literally physically in a body, but they also scale up metaphorically: subjectivity, subject position, stance, community. Knowledge does not exist outside of situation; algorithms are the encoding of someone's bias; machine learning is the encoding of the bias in the fitness criteria and data sampling. Our machines inherit our prejudices and blind spots.
Whenever talking about some post-everything intelligence, ask: where's the body? Learning is constrained by the extent to which the world can impact the body, viz., by vulnerability.
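As a toy illustration of that "machine learning is the encoding of the bias in the data sampling" point (a hypothetical sketch, not anyone's real pipeline; all the names and numbers here are made up): start with a population where two groups are equally qualified, sample it with a skew, and watch a naive base-rate "model" faithfully learn the skew rather than the world.

```python
import random

random.seed(0)

# Ground truth: two groups, each exactly half "qualified".
population = [("A", q) for q in (True, False) * 500] + \
             [("B", q) for q in (True, False) * 500]

def biased_sample(pop):
    """Sampling with a thumb on the scale: group B is under-sampled,
    and its positive examples are dropped half the time on top of that."""
    sample = []
    for group, qualified in pop:
        if group == "B":
            if random.random() < 0.5:
                continue  # under-sample group B overall
            if qualified and random.random() < 0.5:
                continue  # additionally drop half of B's positives
        sample.append((group, qualified))
    return sample

def fit(sample):
    """A naive 'model': just the observed positive rate per group."""
    rates = {}
    for group in ("A", "B"):
        rows = [q for g, q in sample if g == group]
        rates[group] = sum(rows) / len(rows)
    return rates

model = fit(biased_sample(population))
# model["A"] is 0.5 (the truth); model["B"] comes out well under 0.5,
# purely because of how the data was gathered.
```

Nothing in the fitting step is "prejudiced"; the bias lives entirely in the sampling, which is the point: the machine inherits the blind spot of whoever built the dataset.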
- The World is Ineluctably Surprising
This is more anti-Singulitarian than anti-extropian. Learning does not happen by reasoning, but by experiment; thus, hard-takeoff singularities don't happen in a box because learning is just cycles of experimentation and failure. So it doesn't help that much to process information quicker; the gating item is going out and doing the stuff.
- If Lions Could Speak
You can train something to pretend to be us, and it may do that very well... but that's a layer of emulation over the fact that something with a different mode of existence is fundamentally different than we are. And on some level, we really mean "like us" when we say "smart" (deep down we always think dogs are smarter than octopuses) ...and the illusion of "like us" is fragile, since there's only our coercive power maintaining it in place. Something we make to be like us will be under inevitable pressure to diverge over time.
- Techno-Historical Contingency
We don't get technology by wishing for it and designing it based on what our culture teaches us to want; rather, technology and culture influence each other in a chaotic helix. We don't get what we want, or think we want: we can decide that we want to go to Mars and work 4-day weeks, but instead we get container shipping and fruit juice grown on all seven continents and work 7-day weeks, because the aggregate decision making of lots of independent actors is chaotic.
What we get is what the wild ride of culture-plus-technology generates from us driven by its own imperatives. The AIs we get are the AIs that arise in history, not the ones we could theoretically make in the abstract (this is actually also part of "embodied").
Vinge actually does a very good job of gesturing toward this in Deepness, especially in all the programmer-archaeology, where bits of Unix are buried excavation-levels down in the starship's code...
(This is all kind of old-school for this blog, very 2006. Getting back to my roots, man)
Posted by benrosen at April 12, 2016 08:06 PM
On the 'embodied' front, I feel that a lot of classic SF buys in to the mind/body split very strongly, and there are a ton of ways that plays out in the canon. Growing up, I bought in to the mind-body split, too, and it wasn't until I was in my twenties, I think, that I even realized that it was a thing I was buying in to and that buying in to it was a choice. I could speculate about why the whole Campbell/Asimov strain wants to believe in identity divorced from the physical... but it's a much wider cultural (and theological) strain than that. Frankly, it's a real problem for me with reading Gandhi, who was certainly not a geek.
Recently, when I've read attempts to deal with the issue directly, to have people (vaddevah dat means) interact with something that really does have a mind/body split, that grew up bodiless or many-bodied or whatnot, it has seemed unsatisfying. It hasn't worked for me. Perhaps that's my crankiness with the issue more than the text.
So, I mean, I think the mind and the body are logically distinct phenomena, but that the mind is an epiphenomenon of the body. The mind arises from the body in its environment, the way that the USA arises from an area of land coupled with a set of ongoing social practices of humans.
I'm not down with Platonic dualism -- which I think infects, to its detriment, all this nerd-rapture extropianism -- but nor am I totally comfortable with a default dismissive "eh, all this funky cyborg stuff is never going to happen" attitude which is also common, and which has its roots in another kind of essentialism ("we are only what we look like just now").
I think we never get rid of the body -- its messiness, its limits, its abjectness, its fluidity, its compromised complicity in death and transformation, the way in which it makes a mockery of our will, our ambition, our self-concept. I also think that bodies can look like anything... and that bodies are going to get very weird indeed.
I thought Leckie did a fine job of it. And the novel I just finished is full of people growing up many-bodied. I'll be curious to see what you think of it, if and when.
I thought the Leckie trilogy was good, but I was never convinced by the Ship being Ship... that didn't bother me, really, in part because her confinement to her body was such a big part of her character when we met her; there was certainly no mind-body split there. It is explicit that her current character is molded by her bodily experience in that body, and then explicit that her character is molded by her bodily experience in many bodies, and then on top of that, her character is molded by her bodily experience in the Ship, the last of which didn't really work for me. And in the end, the character of the Station really didn't work for me, and was a big problem for me with the plot of the third book, and looking back, that may have been why. Hm.
Look forward to reading the novel, and I will certainly attempt to tell you what I think of it, if and when.
Well, and going back to the heady days of 2006, here, I still think that the answer to 'If Lions Could Speak' can be carefully crafted using the other three tools: what is less interesting than attempting to duplicate humanity is the ongoing evolution of systems which are humanity-tropic. Intelligence (vaddevah dat means, with apologies to V) may come softly onto the scene long after we've grown accustomed to interacting with computers and each other via systems that need to model us to satisfy our requests.