Cypher's Chair

The Off-Topic forum for anything non-LDS related, such as sports or politics. Rated PG through PG-13.
_grayskull
_Emeritus
Posts: 121
Joined: Wed Nov 22, 2006 9:36 pm

Cypher's Chair

Post by _grayskull »

Cypher's Chair

Out of all the debates surrounding the mind, the topic of greatest controversy is qualia, or basic sensory perception. If we're trying to work out a theory of mind, the most intimately counterintuitive part of the project is understanding qualia in the same theoretical terms as understanding just about anything else. Leibniz framed the problem succinctly long ago:

One is obliged to admit that perception and what depends upon it is inexplicable on mechanical principles, that is, by figures and motions. In imagining that there is a machine whose construction would enable it to think, to sense, and to have perception, one could conceive it enlarged while retaining the same proportions, so that one could enter into it, just like into a windmill. Supposing this, one should, when visiting within it, find only parts pushing one another, and never anything by which to explain a perception. Thus it is in the simple substance, and not in the composite or in the machine, that one must look for perception.


Today there are quite a few in-between positions and different takes. One of the most popular ways to conceive of the mind is in terms of a computer - computationalism (a brand of functionalism). Of course, that's a broad project, not one that maintains computers as we currently know them could pull it off. But as a thought experiment against the grain of Leibniz's, I offer the scene from The Matrix where Cypher explains to Neo how he monitors what's going on inside:
there's way too much information to decode the Matrix. You get used to it, though. Your brain does the translating. I don't even see the code. All I see is blonde, brunette, and redhead.


I'm not suggesting anyone could ever really read code like that, but let's just be clear about what's at stake here. Cypher is claiming to "see a perception".

Now, as a bridge between the two thought experiments, Leibniz's Mill and Cypher's Chair, consider the FARMS Mayan tour, which I base on a comment by Rorty from Philosophy and the Mirror of Nature:

Imagine strolling through the Mayan ruins alone. You encounter pots and squiggles on rocks, but nothing to explain the Mayan language and culture. Now imagine a walk with a FARMS apologist who is considered by everyone to be the world's foremost expert on Mesoamerica. He begins interpreting the squiggles and explaining the pots. Some thoughts begin to flow in your mind; you don't see exactly the same squiggles you did before. Now imagine what the FARMS apologist sees: even more than you do - an entirely different world than you're seeing! And finally, imagine a native Mayan: the squiggles carry such force that he can almost feel them. Not too hard to imagine if you consider curse words in your own language. If you speak English natively, the German equivalent just doesn't hit you the same way.


Rorty's summary would be that of course you're baffled by Leibniz's mill if you don't read brain language. And what I'm suggesting is that there aren't multiple kinds of consciousness but one kind that exists on a spectrum. Where we are tempted to report a feeling when we decode some instances of foul language, consider the hypothetical Cypher who can make ridiculously fast code translations - how else could he make the report but to say he "sees" them? And to the extent that it's unbelievable Cypher could pull it off, wouldn't we make it more realistic by increasing his abilities? More neural connections in his brain, essentially, to make him MORE like a computer rather than less like one?!

I believe the intuition from either of the first two thought experiments is derived primarily not from the force of the logic they impose, but from the way they direct (or misdirect) our imagination. Of course if we blow up the mill larger than life and walk through gigantic gears and pulleys, we'll never "see" or "find the explanation" of a thought - or an operating system, for that matter. But if instead of gears we talk symbols, tiny ones that flash by in the millions before our eyes, our intuition is led to think that perhaps, with the right abilities to do the translation, a - as we label or mislabel it - "stream of consciousness" would fill in.

Nobody's Home

Another famous thought experiment in Mind is Mary's room (copied from Wiki):
Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal chords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. [...] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?


I couldn't possibly do justice to all the commentary on this thought experiment. I feel guilty even having an opinion on it. But consider my thought experiment, Nobody's Home:


A room exists with nothing but black and white things in it but it contains all the knowledge in the universe in print and on black and white DVDs. The door is opened and a yellow banana is tossed in. Does the room contain any more knowledge than it did before?


Knowledge is not a ubiquitous substance that just exists in Mary's "mind". When we look at it that way we've already presupposed what the answer is going to be. What this account lacks is a description of the physical tokening involved, by which we can realistically talk about Mary knowing anything. You can only believe there is knowledge in Mary's room because we've been conditioned with metaphors that make knowledge a static thing, a list of propositions someone can own or have; but when we take that position to the absurd, and literally make it a description - a bunch of textbooks - we see the problem. The reality is Mary isn't "aware" 24/7 of every fact about color in third-person, cold-hard form. She thinks about this or that, consults a book, lumbers through some equations, and then goes to sleep. This is akin to our FARMS scholar cracking Mesoamerica. He'll never know "what it's like to be" Mayan. Mary might, however, like the FARMS scholar, be able to evoke a kind of "proto-feeling" that others can't. And that would be a significant achievement. Something akin to Gleick's comment on Richard Feynman:
Feynman seemed to possess a frightening ease with the substance behind the equations.


Math was so tacit to Feynman that we're tempted to see a substance. It makes us wonder who's thinking more, the guy who plays with math like putty or the guy who spends hours balancing a checkbook. Wouldn't we be tempted to say a math wizard experiences something phenomenal that others don't when doing equations? Surely in Feynman's mind there are steps "skipped" that I'd stumble over for hours. In that situation, who's doing more thinking, him or me? We'd be tempted to say Feynman's mind was so efficient that he's less aware of the explicated rules and just "experiences" the math - what, as a continuous stream? Whether it's Feynman, Mary, or myself, knowledge doesn't just exist abstractly as a list of propositions; it's a complicated scene of physical tokenings in the brain. Efficiency is quale-like, and labored is "conscious-thinking"-like.

The Tokening Problem

John Searle once thought of an argument against functionalism: if the paint on his wall were informationally complex enough to describe anything, it could describe a computer, and if the mind is a computer, then the paint must be conscious. There are a few subtle flaws here, but the most important is that all the talk about models and formulas can help us forget that a computer isn't recursive any more than a blueprint is a building. A computer is an actual running machine in space and time that physically tokens an instruction set, just as a brain is. And when we understand that, trying to get at exactly what we know and how we know it is vastly more complicated than standard considerations of "propositional knowledge". What I'm trying to say is that the knowledge doesn't exist in Mary's mind or her books, but somehow in this complicated tokening process of reading, thinking, and writing things down. And it is unlikely, given the grain of the thought experiment, that Mary would encounter her books and imagine things in her limited memory in such a way that she could produce a yellow banana. But like the Mill turns its blades on size, Mary locks our minds in a room bungling around with test tubes and textbooks. It asks us to conjure up representation and knowledge in a particular way that isn't up to the task. But if she took some lessons from Cypher, she might be able to get that book knowledge into a form where it streams on her black and white monitor and she gets a yellow banana out of "pure information". I admit there are other ways she could get that knowledge if we allow this, and it's been suggested that we can only beg the question one way or another; but considering knowledge in relation to physical tokening, rather than lists of propositions that exist on a page or "in a mind", helps us imagine how producing that knowledge might be possible.

Tokening and the Third Person

The tokening problem above is embedded in the talk of subjectivity, objectivity, first person, third person, and any language which tries to separate "us from them". I don't believe there is such a thing as a truly first-person or third-person explanation. The most dry and exacting descriptive accounts in English nevertheless wouldn't be fully translatable (and here I follow Quine's language-translation problem) by an alien from another world. There is always a little "first person" buried tacitly within anything claiming to be objective, and vice versa; there is no meaning residing in some other world. And what's taken for granted by a community is very hard to emulate artificially for those on the outside.

Leibniz's mill confines us to learning Mayan from a first visit to the ruins. Mary's room augments our tool set with dictionaries and videos. But with Cypher as a guide, it may be possible to create the virtual environment that helps us token the knowledge in a way that facilitates translation from symbols to "qualia", though in reality it's probably not feasible. But I submit our FARMS experts know, literally, "what it's like to be" Mayan just a little bit more than the rest of us. And Mary knows "what it's like to see" yellow a little more than I would in her room. And a hypothetical Cypher "sees" red just like I do when reading code. So while I don't think a Cypher will ever exist, I can imagine the extreme end of the spectrum he represents. And I think it's not at all unreasonable to believe that one day "science" will have an account of "qualia". It won't be Cypher-level understanding, but on the level of a FARMS scholar who understands Mayan and doesn't think there is a property-dualistic or pronoun problem that prevents him from explaining what it's like to be Mayan. Because what falls out of this phenomenology is an epistemology where there are only varying degrees between tacit knowledge and propositional knowledge, and propositional knowledge can't stray too far from tacit knowledge. Where our information processing falls short of seeing Cypher's code, we might yet hit the mark on describing Leibniz's Mill.

In a nutshell, my current position then is twofold:

1) "Qualia" is the hyperspace setting on the spectrum of thinking and not thinking; it's an extreme form of "access consciousness", and if that can be explained computationally, phenomenal consciousness can be. The human brain isn't capable of thinking fast enough to get "true" qualia from thinking (like Cypher does) - only a vague impression of it.

2) The ontological problems, once we get to that place in science, will transform along with language, such that it will remain undecided on questions like whether matter has two properties or the account is truly descriptive, because what seems so maddeningly first-person now will be tacit within language. Somewhat, maybe, as life once seemed inexplicable, even in theory. Somehow our conceptual resources have evolved to the point where even young adults aren't baffled by the notion of a description of life.


My blog, which talks about this stuff too:
http://gadianton2.tripod.com

Main influences for this post to substitute for proper citations:

Dennett
Searle
Block
Rorty
_Tarski
_Emeritus
Posts: 3059
Joined: Thu Oct 26, 2006 7:57 pm

Post by _Tarski »

I like your Cypher's Chair analogy, but for the uninitiated it should be pointed out that the analogy is not meant to suggest that there is a single little guy inside the brain who looks at a neural code and translates it extremely quickly, thereby experiencing it in a certain way that is "qualitative" - that just defers the problem.
One either has to imagine a whole person peering into the brain, or having access to some source of detailed information about the potential perceived scene, and trying to "see" the "qualia" in the code. If we wish to consider it just from the point of view of the person, then the analogy only works if we admit that from the inside it is the whole person performing the perceptual and interpretive tasks. In Dennett's language, there is almost certainly no Cartesian theater - material or otherwise.
_grayskull
_Emeritus
Posts: 121
Joined: Wed Nov 22, 2006 9:36 pm

Post by _grayskull »

Good Comments Tarski.

To expand a little, I'd like to go over (for others) the place of intentions and the place of qualia in Dennett's brain with no Cartesian theater. Intentions are the 'you' which holds beliefs and things like that, and qualia are the 'you' that feels things. For either of these, there is no place where it "all comes together" for Dennett; but for intentions there is a "center of narrative gravity", while for "qualia" there is, I guess, nothing. He is considered an eliminativist with respect to qualia. So for Dennett, there would be no visual "perception" for Cypher.

Dennett's views on qualia are some of the most interesting, but I haven't decided how far I take them. Part of the problem is that his articulation, from what I can see, doesn't explicitly reject "qualia" as indubitability (A) as Rorty does (the perception is in the mistake). But this is the key thesis for Searle and others, so Dennett appears to be talking past them and vice versa. I'm very open to enlightenment on this point. More interesting articulations of qualia come in the form of phenomenology (B), Husserl being the key figure probably, and this is the clear kind of thesis that Dennett attacks; but among current philosophers I don't know how popular the position is, and to me, rejecting it doesn't matter much. I think it's very likely that chasing down "redness" to neurons will lead to disappointment, but the same is true of knowledge. Epistemology is a failure as much as phenomenology is, so to the extent that his thesis attacks (B) qualia, I'm no more inclined to say there is no such thing than I am inclined to say there is no such thing as knowledge.

I'm kind of leaving that discussion bracketed here. I'm not wanting to take a firm position on whether qualia exist or not, but on how whatever it is or isn't might possibly exist in a unified mind that doesn't make rigid distinctions between access and phenomenal consciousness. Now in Dennett's theory there wouldn't be a place where the "translations" come together, as you correctly point out, but in the unrealistic super-fast-thinking world I'm suggesting, it might emulate what vision is hardwired to do. Certainly Dennett follows Fodor's lead and opts for specialized 'modules' which perform various tasks even though they don't 'come together'.

Also key to my thinking here, an idea I'm playing around with: when it comes to reporting in the 'first person', there might not be any other way to describe what one is doing, in the case of unrealistically fast translations, other than "experiencing". (See Dennett's discussion on turning up baud rates - though I'm building on this point.)
_Tarski
_Emeritus
Posts: 3059
Joined: Thu Oct 26, 2006 7:57 pm

Post by _Tarski »

grayskull wrote:Good Comments Tarski.

Dennett's views on qualia are some of the most interesting but I haven't decided how far I take them. Part of the problem is that his articulation from what I can see doesn't explicitly reject "qualia" as indubitability.

Can you expand or clarify this statement?
_grayskull
_Emeritus
Posts: 121
Joined: Wed Nov 22, 2006 9:36 pm

Post by _grayskull »

Tarski,

Unfortunately it's going to be hard for me to give you exact citations, though that's what I really need to do. I can't post right now in a way where I can sit down with a book. But in the next few days I'll try to find at least a couple of page numbers for you.

What I mean is Dennett gives a number of qualia examples where you 'think' you are taking in some kind of unified pure experience. He then shows how it isn't true: you 'think' you're seeing something, but you're not. It's an illusion (I can't remember if he uses that term). BUT, he is clear that it ISN'T the kind of illusion where your brain is 'filling in' anything. As you rightly point out, there is no place it all comes together. The language I know he uses more than once is 'you aren't really having this experience... it only seems that way'.

Searle doesn't cite the same examples I'm thinking of, but his basic critique of Dennett is that the phenomenal is in the seeming, no matter how mistaken it is. So if it "seems" you're having an experience, then that is good enough to qualify as qualia.

So let me call this kind of qualia - the "seeming", which might be utterly useless in cognitive science - minimal qualia, or the "A" qualia in my last post. I have found statements by others summarizing Dennett as an eliminativist toward qualia, including "A" qualia. But I can't find anything in his own writing where he makes that case, or deals with "A" qualia. I could have missed it, or I could be reading in something that isn't there. Like I say, I'm open to suggestions.