Journal
 

Tuesday, April 12, 2016

Matrifocal Pillow Talk, and an Anti-Extropian Toolbox

So the novel is out, making the rounds (if you feel like you might want to represent it, let me know...) and I am getting back into the short story groove. I wrote a first draft that looked like it was going to be matriarchal (or at least matrifocal?) sword-and-sorcery or anthropological sf or maybe what a tonally flipped (sunny, sex-positive, anti-grimdark) Game of Thrones would be, but then it turned out to be just 4000 words of pillow talk, at least so far. (I sent it to Mary Anne to look at because that seemed like her wheelhouse?)

And then I started kicking around ideas with Mr. Moles, like back in the good old days:


D: I'm put in mind of something Susan pointed out some years ago about how extropianism is rooted in the denial of the body / (which I think is something ann leckie almost directly goes at in the ancillary books)
B: I feel like that's been one of my hobbyhorses for a decade
D: yeah
B: it was the animating principle of Resilience for a while, although it kind of got lost to some extent by the final draft
D: this recurrent geek fantasy of the mind as this computation engine unfortunately encumbered with a body and particularly with an endocrine system... I always like minsky's thing about consciousness as debugging trace. Like the endocrine system and the so-called hindbrain, that's where the real action is, everything else is epiphenomenal.
B: sure. but to say that we don't end up in a Teranesia style math utopia -- to say that consciousness is ineluctably finite, embodied, and mortal, whatever-that-means... to say that we can't have strong AI and have it be "us", that we can never use it as a shortcut to immortality...
D: Well, from a philosophical viewpoint, I tend to think matter is matter.
B: ...all that doesn't say that we don't end up with something far more malleable and strange than what we have now. I have a whole arsenal of fictional arguments to deploy against the Singularity and extropianism that I've been marshalling for years, but I'm just saying there are a series of choices to make here...
D: :)

Anyway, I thought you all would like to see the arsenal.

Ben's Anti-Extropianist Toolbox


  1. "Embodied". Like, all learning is situated in a specific context in the world; intelligence does not, and cannot, operate by analysis of things-in-general and logical operations on same -- that problem is quickly computationally intractable -- and also the approach is otherwise self-contradictory and wrongheaded. You can pretend something has a body, but then you've replaced the problem of "intelligence" by the far harder problem of simulating the world.

    Now, "the body" can be anything. But it is that which is us, but is not subject to our will: the body always rebels.

    Also: "embodied" and "situated" mean literally physically in a body, but they also scale up metaphorically: subjectivity, subject position, stance, community. Knowledge does not exist outside of situation; algorithms are the encoding of someone's bias; machine learning is the encoding of the bias in the fitness criteria and data sampling. Our machines inherit our prejudices and blind spots.

    Whenever you're talking about some post-everything intelligence, ask: where's the body? Learning is constrained by the extent to which the world can impact the body, viz., by vulnerability.

  2. The World is Ineluctably Surprising
    This is more anti-Singularitarian than anti-extropian. Learning does not happen by reasoning but by experiment; thus hard-takeoff singularities don't happen in a box, because learning is just cycles of experimentation and failure. So it doesn't help that much to process information quicker; the gating item is going out and doing the stuff. (Toy demonstration after the list.)
  3. If Lions Could Speak
    You can train something to pretend to be us, and it may do that very well... but that's a layer of emulation over the fact that something with a different mode of existence is fundamentally different from us. And on some level, we really mean "like us" when we say "smart" (deep down we always think dogs are smarter than octopuses)... and the illusion of "like us" is fragile, since only our coercive power holds it in place. Something we make to be like us will be under inevitable pressure to diverge over time.
  4. Techno-Historical Contingency
    We don't get technology by wishing for it and designing it based on what our culture teaches us to want; rather, technology and culture influence each other in a chaotic helix. We don't get what we want, or think we want: we can decide that we want to go to Mars and work 4-day weeks, but instead we get container shipping and fruit juice from all seven continents and work 7-day weeks, because the aggregate decision-making of lots of independent actors is chaotic.

    What we get is what the wild ride of culture-plus-technology generates from us, driven by its own imperatives. The AIs we get are the AIs that arise in history, not the ones we could theoretically make in the abstract (this is actually also part of "embodied").

    Vinge actually does a very good job of gesturing toward this in Deepness, especially in all the programmer-archaeology, where bits of Unix are buried excavation-levels down in the starship's code...
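
Since the machine-learning claim in (1) is the most concrete, here's a toy sketch of it in Python -- invented data and a deliberately trivial "learner," not anyone's real system. The same fitness criterion (fit the mean), fed two different samplings of the same world, ends up encoding two different truths:

import random

random.seed(0)

# The world: two groups with different true values.
population = [("a", random.gauss(10, 1)) for _ in range(5000)] + \
             [("b", random.gauss(20, 1)) for _ in range(5000)]

def fit_mean(sample):
    """The 'learning algorithm': estimate the population mean."""
    values = [v for _, v in sample]
    return sum(values) / len(values)

# Fair sampling sees both groups in proportion.
fair_sample = random.sample(population, 1000)

# Biased sampling: group "b" gets collected only 5% as often.
biased_sample = [(g, v) for g, v in random.sample(population, 4000)
                 if g == "a" or random.random() < 0.05][:1000]

print("fit on fair sample:  ", round(fit_mean(fair_sample), 2))    # roughly 15
print("fit on biased sample:", round(fit_mean(biased_sample), 2))  # roughly 10.5

Nothing in fit_mean is prejudiced; the bias lives entirely in what the algorithm was allowed to see.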
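
And for (2), a toy simulation of the "gating item" claim, with made-up numbers: an agent pins down a hidden parameter by binary-searching on real-world experiments. Each experiment costs a day of world time no matter how fast the agent thinks, so a thousandfold speedup in reasoning barely moves the wall-clock cost:

def days_to_learn(think_speedup, secret=0.6180339887, tolerance=1e-9):
    """Binary-search for `secret` in [0, 1]; return elapsed 'days'."""
    lo, hi = 0.0, 1.0
    days = 0.0
    while hi - lo > tolerance:
        guess = (lo + hi) / 2
        days += 1.0                   # the experiment takes real time...
        days += 0.01 / think_speedup  # ...the thinking is the cheap part
        if guess < secret:            # the world answers "too low" / "too high"
            lo = guess
        else:
            hi = guess
    return days

print(round(days_to_learn(think_speedup=1), 2))     # ~30.3 days
print(round(days_to_learn(think_speedup=1000), 2))  # ~30.0 days

Processing a thousand times faster buys you a third of a day over a month; the cycles of experimentation and failure are the budget.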


Discuss.

(This is all kind of old-school for this blog, very 2006. Getting back to my roots, man)


Hey There

Is this thing on?

This blog has been moribund for a while, beset by a combination of the general malaise of blogs since the lords enclosed our lands, my being heads-down for so long on the novel, and the recursive reluctance that procrastination engenders in a project long put off.

I do miss the form, though, so I am going to try to make a few shorter posts without mulling them over too much.

Stay tuned.
