Having dabbled on the edge of AI for a number of years, I enjoy the technology as well as the philosophy that surrounds the subject. Jon Udell writes of the so-called Singularity...
That's us: just goldfish to the post-human super-intelligences... True machine intelligence was what the advocates of strong AI wanted to hear about, not the amplification of human intelligence by networked computing. The problem, of course, is that we've always lacked the theoretical foundation on which to build machine intelligence. Ray Kurzweil thinks that doesn't matter, because in a decade or two we'll be able to scan brain activity with sufficient fidelity to port it by sheer brute force, without explicitly modeling the algorithms.

Now it's time for this goldfish (me) to refresh himself with a healthy dose of Terry Winograd and Fernando Flores. I don't consider myself religious in the least. However, I think that neither "intelligence" nor "emotions" is in any way mechanical (i.e. transferable to a machine). I do think they are *biological abstractions*: the names we give to our interpretation of biological (and so chemical, and so physical) processes.
Back to Jon on the current understanding of human vision...
We are, however, starting to sort out the higher-level architecture of these cortical columns. And it's fascinating. At each layer, signals propagate up the stack, but there's also a return path for feedback. Focusing on the structure that's connected directly to the 14x14 retinal patch, Olshausen pointed out that the amount of data fed to that structure by the retina, and passed up the column to the next layer, is dwarfed by the amount of feedback coming down from that next layer. In other words, your primary visual processor is receiving the vast majority of its input from the brain, not from the world.

We can manipulate biological and psychological processes. We can mimic them mechanically to increasing degrees. But we cannot "make them" out of parts, and we cannot "transfer" them to such a machine. What would that even mean? The simulation of the hurricane is not, and never will be, the hurricane, even if the simulation actually destroys the city. Something destroys the city, but it is not a hurricane.
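An aside on Olshausen's numbers: the imbalance he describes is easy to picture in code. In the toy sketch below, only the 14x14 retinal patch comes from the talk; the size of the top-down signal is invented purely for illustration, not a measured figure.

```python
import numpy as np

# Toy picture of the feedforward/feedback imbalance Olshausen describes.
# Only the 14x14 retinal patch comes from the talk; the size of the
# top-down signal is an invented placeholder, not a measured number.
bottom_up = np.random.rand(14 * 14)   # 196 values arriving from the retina
top_down = np.random.rand(2000)       # feedback from the next layer (made up)

print(f"inputs from the world: {bottom_up.size}")
print(f"inputs from the brain: {top_down.size}")
print(f"feedback outweighs feedforward by about {top_down.size / bottom_up.size:.0f}x")
```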
A machine that has some representation of my "brain" may continue to develop a further simulation of what my brain would be like had it undergone similar stimuli. But in no way is that machine now "my brain", and in no way does that machine "feel" things in the way my "real brain" feels things. Those feelings are interpretations of abstractions of biological processes. The machine is merely a simulation of such things. If you care about me and then observe further simulations by such a machine, you may interpret your observations as "happy" or "sad", and you may even interpret that machine to be "happy" or "sad". But that machine is in fact not happy or sad in any biological or psychological sense.
Let's face it, even my dog "loves" me because I feed it, and when it comes down to it my kids do too. We are abstract interpreters of biological processes. One interpretation of that may be "loneliness", but so what? Here is where I note that my dog gets as excited to go out and pee as it does to see me come home from a week on the road. Fortunately my kids have a finer scale of excitement.
Back to Jon for more about augmentation rather than "strong" AI...
I had the rare privilege of meeting Doug Engelbart. I'd listened to his talk at the 2004 version of this conference, by way of ITConversations.com, and was deeply inspired by it. I knew he'd invented the mouse, and had helped bring GUIs and hypertext into the world, but I didn't fully appreciate the vision behind all that: networked collaboration as our first, last, and perhaps only line of defense against the perils that threaten our survival. While we're waiting around for the singularity, learning how to collaborate at planetary scale -- as Doug Engelbart saw long ago, and as I believe we are now starting to get the hang of -- seems like a really good idea.

Seems like a really good idea. "Strong" AI is fascinating, but "augmentation" is useful.
5 comments:
I've always thought that, aside from being built out of biological materials and having different 'calculation methods' (for lack of a better term), we're in a sense just computers running some program written somewhere in our DNA. Therefore, I think that it's fully possible to create a strong AI; it just requires computing power of epic proportions, a super-complex program that has yet to be written, and sophisticated input devices. I say it's completely possible, just unlikely to be done for a long time. Also, for all intents and purposes, would it matter whether you were just running a simulation of yourself? Could you prove that you weren't? It raises some neat metaphysical questions.
Just my $1/50
I think we can and will get better at approaching a "strong AI", although there is a long way to go. Experts' estimates of when we'll get there have continually proven too optimistic.
My question is what is the result of "strong AI"? Even if I can be fooled by such an AI, should I assign that machine the same "meaning" as the real thing? If I am fooled into thinking that some machine has "feelings", does that in any way mean the machine really does have feelings in the way I perceive you and me to have feelings?
And no, I cannot prove that I am not a "simulation" in some other sense. That is an interesting question, but since I cannot prove it, I have to base everything on the premise that I am what I perceive myself to be.
For now I agree this is just a set of neat metaphysical questions. I am fascinated more by the thought of there being a "strong AI" than I would be by there actually being such a thing. I think.
Thanks.
"My question is what is the result of 'strong AI'?"
I'd say AI capable of struggling with this question.
"If I am fooled into thinking that some machine has 'feelings', does that in any way mean the machine really does have feelings in the way I perceive you and me to have feelings?"
Intelligence and feelings are emergent phenomena; they arise from the dynamic behavior of complex neural networks. In the case of biological organisms, chemical processes drive that dynamic behavior.
If you had a sufficiently accurate simulation of the network topology and those chemical processes, why wouldn't intelligence emerge? To think that it wouldn't seems to assign some magical property to proteins and hormones. Should we call that property a soul?
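For what it's worth, here is roughly the smallest concrete instance of "simulating the dynamics": a leaky integrate-and-fire neuron, a textbook toy model. Nobody would claim this snippet feels anything; it only shows what a dynamical simulation of a neuron looks like in code. All parameters are illustrative, not physiologically calibrated.

```python
# A minimal leaky integrate-and-fire neuron: a toy stand-in for "simulating
# the network topology and those chemical processes". The parameters are
# illustrative, not physiologically calibrated.
dt = 1.0          # time step (ms)
tau = 20.0        # membrane time constant (ms)
v_rest, v_thresh, v_reset = -65.0, -50.0, -65.0  # potentials (mV)

v = v_rest
spike_times = []
for t in range(200):                          # simulate 200 ms
    drive = 2.0 if 50 <= t < 150 else 0.0     # inject input current mid-run
    v += (dt / tau) * (v_rest - v) + drive    # leak toward rest, plus drive
    if v >= v_thresh:                         # threshold crossing = a "spike"
        spike_times.append(t)
        v = v_reset

print(f"spike times (ms): {spike_times}")
```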
The Tree of Knowledge: The Biological Roots of Human Understanding, by Humberto Maturana and Francisco Varela.