Journal
 

Friday, September 29, 2006

on "On certain philosophical arguments against machine consciousness"

Dave Moles has transferred his blog to a new server, so I can't comment on this post over there any more. So it'll have to be here.

While I like Aaronson's post on Alan Turing as a moralist, and I do think there's a beauty in his view of the Turing test, I'm somewhat more sympathetic to what I think Halpern may be saying -- or at least to what I think Halpern should be saying. "Computers will never think" is not a nonsensical position, nor necessarily a bigoted one; and the Turing test, as usually understood, is not necessarily equivalent to the principle of judging people on their behavior.


My own money is on "strong AI, but no time soon". I would be surprised, but not incredulous, if we had computers that would regularly impress us as being "smart like people" on Vinge's timeline of 2030 or so; I would also be surprised, but not incredulous, if after thousands of years of effort the problem of machine intelligence turned out to be intractable. Or uninteresting -- fundamentally uninteresting, i.e. not on any chain of possible historical-technological progression.

It's clear that thought can be a property of a physical system, since we have one physical system -- us -- that it's a property of. Thus it seems obvious to me that it's possible *in principle* to build such a physical system. I can't credit a position which says "only God can make a thinking physical system, and it has to look just like us".

But that's a pretty big "in principle". I can credit a position that says "you could build an intelligence, but AI as we know it is barking up the wrong tree".

Let's say you are hanging around Europe in 1500 or so and you meet a clockmaking fan. He's just seen the automata who parade around the Zytglogge in central Bern on the hour, and he says to you excitedly, "one day clockwork men such as these will be able to draw a thousand men's portraits in the blink of an eye, and prove algebraic theorems, and if you name any phrase, they will give you a list of every book it was ever writ in!"

Leaving aside the fact that he's clearly nuts -- would he be right?

Sort of depends what you mean by "clockwork men", doesn't it?

The Vinge/Kurzweil 2030-or-so deadline for strong AI is based on the notion that the critical issue is processing power -- that once your computer is doing as many operations per second as a human brain -- or a million times as many, or whatever -- it should be relatively straightforward to get it to "think", whatever thinking is. As a software guy, I feel this is something like saying, "well, we have enough paint, so now painting the Mona Lisa should be trivial."

The issue isn't computers being "as smart as" us. "Smart" is not really linear that way. If you had to pick some way of comparing totally heterogeneous creatures on "smart", processing power is a reasonable metric. If you have to decide whether frogs are "smarter" than octopuses or vice versa, processing power is as good a method as any.

So in those terms, all you need in order to know when computers will be "as smart as" us is Moore's Law.
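If you want to run that extrapolation yourself, it's only a few lines of arithmetic. Here's a sketch in Python; the two capacity figures are the loudly contested assumptions doing all the work, which is rather the point:

    # Back-of-the-envelope Moore's Law extrapolation -- a sketch of the
    # Vinge/Kurzweil-style argument, not an endorsement of it. Both
    # capacity figures are assumptions; estimates of the brain's
    # "operations per second" span several orders of magnitude.
    import math

    BRAIN_OPS_PER_SEC = 1e16      # assumed brain capacity (a Kurzweil-style figure)
    MACHINE_OPS_PER_SEC = 1e13    # assumed 2006 high-end machine
    DOUBLING_TIME_YEARS = 1.5     # classic Moore's Law doubling period

    doublings = math.log2(BRAIN_OPS_PER_SEC / MACHINE_OPS_PER_SEC)
    print(f"parity around {2006 + doublings * DOUBLING_TIME_YEARS:.0f}")
    # ~10 doublings: parity around 2021. Swap in more conservative
    # figures (1e17 brain, 1e11 machine) and you land in the mid-2030s.

Ten doublings or thirty: the schedule falls straight out of numbers nobody actually knows.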

But octopuses are not going to be passing the frog equivalent of a Turing test any time soon -- and nor are we, for all that we are "smarter". The Turing test is, in fact, a good measure of what people intuitively mean by "artificial intelligence" precisely because it doesn't measure when computers are "as smart as" us, but rather when they are "smart like" us.

Or when they can pretend they are.

As cute as I find this


There’s a story that A. Lawrence Lowell, the president of Harvard in the 1920’s, wanted to impose a Jew quota because “Jews cheat.” When someone pointed out that non-Jews also cheat, Lowell replied: “You’re changing the subject. We’re talking about Jews.” Likewise, when one asks the strong-AI skeptic how a grayish-white clump of meat can think, the response often boils down to: “You’re changing the subject. We’re talking about computers.”

it's disingenuous for two reasons. First, you can contradict Lowell by pointing to extant honest Jews. But you can't contradict Halpern by pointing to extant real thinking computers (you have to point to posited future thinking computers, and that doesn't refute his point that it may be a waste of a great deal of time and money looking for them). And second, whatever Lowell thought, Jews in fact have enormous morphological similarities with other Harvard students. We can extrapolate confidently from what we know about honest Gentile Harvard students to predict things about honest Jewish Harvard students, on the basis of this morphological similarity.

Let's say that I am a proponent of Teapot AI, which is the theory that under the proper heat and pressure conditions, the steam inside an ordinary kitchen teapot will spontaneously self-organize into an intelligent system which will know English and be able to talk, out of its whistle, in a way that would convince an interlocutor that the teapot was a well-educated, upper-middle-class American male in his mid-thirties, were it not for the voice being so whistly. And I am speaking with a skeptic of Teapot AI.

Me: Teapots will think!

Teapot AI Skeptic: No they won't.

Me: Yes they will!

T.A.I.S.: That makes no sense. Teapots are nothing like people. Steam can't think.

Me: Bigot! How is it that a grayish-white clump of meat can think, then?

T.A.I.S.: I have no idea. But it certainly appears that it can.

Me: But if I could, theoretically, get a teapot to pass the Turing test, would you agree that it could think?

T.A.I.S.: Um. I think it would be more likely that it was a trick, actually.

Me: Why would you be more skeptical about a teapot thinking, if it appeared to think, than you would be about another human thinking? Are you a solipsist?

T.A.I.S.: No, but I have two reasons to believe that another human thinks: one, that the human behaves like someone who thinks, and two, that the human is extremely similar to me in design, and this morphological similarity makes it very likely that similar external behavior is produced by a similar internal process. And what I mean by "think" is not a set of behaviors, but an internal process.

Me: Why do you care what internal process is used, if the outward behavior is the same?

T.A.I.S.: I guess it partly depends how much behavior we're talking about. If I were to come to know and love the teapot after living with it for many years, I would probably come around to the conclusion that it "thought". But if you're talking about the teapot showing up in a chat room and fooling me for half an hour into thinking it was a person? It would still be more likely that it was a trick. Because it seems so intrinsically unlikely that the "heat up a teapot and wait" process you're proposing would actually produce intelligence.

Me: Are you saying the brain is for some mysterious reason the only physical system that can be intelligent?

T.A.I.S.: Well, first, if what you mean by intelligent is crunching a lot of numbers, obviously not. If what you mean is "doing stuff that feels really human to us", it might be that that's just too hard to do with a different substrate -- that something else we build might be smart in the sense of really complex, but too *alien* to ever feel "like us". But more to the point, why teapots? Of all the possible systems?

Me: But you admit we can one day build intelligence!

T.A.I.S.: Maybe...

Me: So then it'll be a teapot! I mean, maybe there will be some cosmetic differences, or a few new techniques, but basically, it'll be a teapot, right?

T.A.I.S.: So not.

(Substitute "clockwork man", "computer", etc., for "teapot" as required.)


The thing is, the Turing test rests on the idea that humans are somehow hard to fool. I mean, chatbots today regularly fool people into taking them for human, at least for a while, but no one is proposing that they are actually "intelligent" in the nebulous way we mean when we fight about AI. But to our Bernese clock fan of 1500, they look *unimaginably* intelligent -- and Google looks Godlike. So why are we unsatisfied? Because we know it's a trick.
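And the trick can be astonishingly cheap. Here's a toy ELIZA-style responder in Python -- the rules are invented for illustration, a sketch of the pattern-matching approach rather than Weizenbaum's actual script -- and a handful of rules like these is enough to keep a credulous interlocutor typing:

    # A toy ELIZA-style chatbot: no model of meaning anywhere, just
    # regex patterns and canned reflections. These rules are
    # hypothetical examples, not Weizenbaum's original script.
    import re

    RULES = [
        (r"i feel (.*)", "Why do you feel {0}?"),
        (r"i think (.*)", "What makes you think {0}?"),
        (r".*\bmother\b.*", "Tell me more about your family."),
        (r".*\?", "What makes you ask?"),
    ]

    def respond(utterance: str) -> str:
        text = utterance.lower().strip(" .!")
        for pattern, template in RULES:
            match = re.fullmatch(pattern, text)
            if match:
                return template.format(*match.groups())
        return "Please, go on."  # the all-purpose fallback

    print(respond("I feel like computers will never think."))
    # -> Why do you feel like computers will never think?

Half an hour in a chat room is a higher bar than this, obviously, but it's a difference of degree in the same trick, not a different kind of thing.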

But arguably it will always be a trick. Not because computers can't be vastly more "intelligent" than we for some scrupulously fair, mathematical, non-ethnocentric meaning of "intelligent". But because they won't be us. And when you come right down to it, we may not actually mean anything by "intelligent", other than "like us".

Turing is worried about the robot child being teased in school; I wonder if this is not like expecting the spontaneously emerging teapot voice to just happen to be a white, middle-class American male. If the process by which the robot arose was in any sense "free", would it be in any danger of feeling hurt at being teased?

Or would we have to carefully arrange, parameter by parameter, petabyte of common-sense knowledge by petabyte, for it to have something that looks like the experience of being teased? And if we have to go to such monumental efforts to arrange for the robot to feel teased, isn't it then odd to agonize about it?


I'm not saying that the strong-AI position is absurd either. Maybe if you have enough processing power and some general-purpose algorithmic goodies -- classifier systems, neural nets, ant colony optimization -- and you rig up a body and a simulated endocrine system and whatnot so that your robot can simulate "kid" well enough to embed itself, initially, in the social space of a schoolyard -- yeah, sure, maybe it'll quickly arrive at internal states and processes that mirror ours. Maybe the mind is mostly malleable, maybe the hardwired bits don't matter so much or are simple to fake, maybe the universe naturally tends toward our kind of intelligence, maybe we are both simpler and more general than we think, maybe there is so much redundancy in the social networks and the language and so on that any general-purpose algorithmic system you drop into a human culture ends up quickly humanized, like a sponge soaking up soup.

But it's certainly not the only option.


Monday, September 18, 2006

Habeas Corpus

I'm at a loss to say anything clever about this, but the American Congress is in the middle of debating whether to get rid of habeas corpus.

If you are an American citizen, you might want to call your Senator today. Be polite and firm. "I'm kind of partial to the rule of law. Would it be too much trouble to refrain from locking people up indefinitely with no access to a court?"


Thursday, September 7, 2006

Aviva responds

thank you for all your messages, they were all great.

i made a friend yesterday and i didn't really have fun the day before yesterday, but yesterday i had fun because i made a friend and stuff. but kindergarten was great except for the day before yesterday.

it's okay if you missed your turn and you want to write back, you can write back, just go ahead.


Tuesday, September 5, 2006

There is a house...

So my story "The House Beyond Your Sky" is up at Strange Horizons!

It's very nice to be back at SH. It's been a while.

Go read the story first, 'cause I want to say a few things about it.

Okay....

"The House Beyond Your Sky" is set in the far future. And I mean the FAR future. It makes Droplet look like an Anne Tyler story (though, come to think of it, "Droplet" is not entirely unlike an Anne Tyler story. She too likes old, married couples). I suspect, though I cannot prove, that "The House Beyond Your Sky" is one of the latest-set stories in the history of science fiction. Maybe the latest.

In a typically long and wonderful editing process, Jed nailed me down on the exact cosmology the story is set in: it's "the Big Rip outdoors and the Big Freeze indoors"; the narrators live in enclosed houses that have managed to maintain sufficient local gravity to avoid the Rip (e.g. by importing it from nearby branes), but the rest of the universe is a sparse gas of leptons and photons. In their houses, the narrators can play the Dysonian eternal intelligence game.

Many wonderful people have critiqued this story in the years since I first started it, but I particularly recall Patrick Samphire's objection that the original science in the story was too correct. (Other people may have said this too.) The idea that people living in the far future would have our physics, he told me, is absurd. He was right.

Update: hmm, although this suggests that it was Ted Chiang who made that point. Maybe they both did...

The story originally had a much longer preface about the history of the universe. I really liked it, but it was certainly a roadblock to reader comprehension. :-) I was thinking I'd post it, though, if anyone is interested....

Update: Okay, so you're interested. See after the cut below.

Also, check out that illustration! Is that awesome or what? :-)


Monday, September 4, 2006

Kindergarten starts tomorrow


Wish Aviva luck!

Here's what she has to say:

my friends moms the teacher. i bet i'll have a lot of fun being there and at the schoolbus. do you think i'll have fun?
