Barbarian Inventions

Stephen Downes

Knowledge, Learning, Community

Jul 29, 2004

I think there's a theme. There may be a theme. But don't spend time looking for one; it's not written that way.

One

Please let me be clear about my objection to registration.

For access to services where a unique ID is by definition required - participation in a mailing list, for example, posting a personal blog or discussion board comment, accessing personalized content - then I have no problem with registration. I have many such user IDs and even have a registration form on my own site for the same purpose.

But when links are posted on the open internet, as though they are accessible with a single click, and the user is presented with a registration form instead of the content they were expecting when they clicked on the link, then that's where it becomes not just annoying but damaging.

Such links are nothing more than spam. A type of link spam. Trying to lure readers in with a false promise in order to sell them something for a price.

Sure, the price is low low low. Sure, the product or service can't be beat. And, of course, the company couldn't survive without your business. I know the message from the newspapers. And I'd be more sympathetic if I didn't see exactly that same message from the links pretending to be email messages polluting my in-box.

We've heard on this list from people with years of newsroom experience attesting in favour of registration. Well, I come into this debate with years of internet experience. I remember when the Green Card Lottery swept across Usenet. I remember when commercialization of the internet was still a living issue. So I can say this with some authority: I've seen this play before.

We will be hearing from various studies and surveys that most people don't mind registration, that most people provide accurate information, that most people see it as the cost of supporting content on the internet. These people are responding at a time when registration sites are relatively few in number. But as they begin to report success, the bandwagon effect takes hold.

Ask the same people what they think in an age when every second link takes them to an advertisement, not the content it promised. People will have much shorter tempers by then. You can't depend on the surveys to guide you here. You have to ask yourself - is what we're doing fundamentally honest? Because while a little dishonesty today may be tolerated, a lot of it in the future won't be, and people will react with a much stronger than expected anger, because they will feel that their tolerance and good nature have been abused.

The message is: stop the false advertising. What I see, though, is the false advertising accelerating. I saw an item today about Wall Street Journal RSS feeds. Now what use is an RSS link from the WSJ? Unless you are one of the few who subscribe, it's nothing but spam. I hit a link today from the Kansas City Star - it let me in to read the story, but the second time (when I went back to verify a quote) it demanded a registration. It was basically trying to trick me, an occasional visitor, into registering and providing an audience for its onsite advertising.

Now the beauty of Bugmenot is that it really only works against those false advertising sites. If your site isn't putting out misleading links all over the internet, people aren't going to be getting annoyed at you and using Bugmenot to gain access. And even if someone has created some Bugmenot accounts, there won't be people using those accounts because you're not duping people into staring at a registration screen. So there's no reason to worry - or to get upset - unless you're polluting the web with misleading links.

And from where I sit, if your major means of interacting with the web and welcoming new readers is with a lie, then you should not be surprised if people respond in an angry manner.

Newspapers themselves can be honest with links. Put "(Registration Required)" in your page titles so that aggregators and Google display appropriate notice in link listings. Don't ask for registration for content you intend to widely publicize. If you run a registration site, keep the deep web deep - don't pollute our browsers with misleading advertising. Or best of all, participate in the exchange that defines the web by putting your content out there for free (the way the rest of us do) and save registration for where it's needed.

So think of Bugmenot as an honesty meter. If it's creating unwanted (and unregistered) traffic, then your means of promoting yourself online is in some way dishonest, and you are paying the price for that. And don't expect anyone to be sorry about the fact that you're paying that price.

You reap, you know, what you sow.

Two

Re: Dreyfus, Community in the Digital Age: Philosophy and Practice, 2004.

In Kierkegaard's book The Present Age: "More basically still, that the Public Sphere lies outside of political power meant, for Kierkegaard, that one could hold an opinion on anything without having to act on it. He notes with disapproval that the public's 'ability, virtuosity and good sense consist in trying to reach a judgment and a decision without ever going so far as action.' This opens up the possibility of endless reflection. If there is no need for decision and action, one can look at all things from all sides and always find some new perspective... All that a reflective age like ours produces is more and more knowledge... 'By comparison with a passionate age, an age without passion gains in scope what it loses in intensity'... Life consists of fighting off boredom by being a spectator of everything interesting in the universe and of communicating with everyone else so inclined...

Such a life produces what we would now call a postmodern self - a self that has no defining content or continuity and so is open to all possibilities and to constantly taking on new roles... The anonymous spectator takes no risks... When Kierkegaard is speaking from the point of view of the next higher sphere of existence, he tells us that the self requires not 'variableness and brilliancy' but 'firmness, balance and steadiness' (Either/Or)... Without some way of telling the significant from the insignificant and the relevant from the irrelevant, everything becomes equally interesting and equally boring, and one finds oneself back in the indifference of the present age.

It is, of course, an illusion that there could be a life free of choice, even for the most dispassionate and idle spectator. The fact of being human forces choice on us every minute of every day. Will the ground support me if I take a step forward? Will this food nourish me or poison me? Should I wear clothing today? It is true that these choices are in a certain sense mundane and everyday. But at the same time, they are foundational, the most important choices a person can make - a commitment to at least a minimal ontology, a decision to continue living and the means toward that end, an understanding and acceptance of social mores. It is true that most people make such choices without reflection - showing that there must be something to meaningfulness over and above choices - but it is also true that people who seriously reflect on such choices, who consider both alternatives to be genuine possibilities, nonetheless in the main come to the same resolution as those who make such choices automatically. In matters that are genuinely important, choice is itself an illusion. And in cases where choice is not an illusion, it is also the case that the decision is not so fundamental. The two outcomes are of relatively similar value, at least in comparison to fundamental questions of existence, life and living.

If by failing to make a choice in this or that matter, if by remaining dispassionate and accumulating, as it were, more knowledge, if by doing this one may remain insulated from any consequences, it seems evident that the choice one would have you make in such a case falls in the opposite extreme, a choice not about that which is fundamental, but about what is trivial. Though it may be true that we may suffer some consequence by acting one way or another, if a failure to act affects us not in the least then there is no motivation for action, and the choice we believe we face is illusory, and therefore the meaning we would derive from making such a choice is illusory also. The choices that engender meaning in our lives are not those we can duck in order to live in a post-modern idyll, but those we cannot avoid, similar in nature to those of a fundamental nature, but less wide in scope.

To make a choice simply to attain the distinction of having chosen is to trivialize the nature and import of making a choice. If one chooses a religion only in order to claim membership in the ranks of the converted, such a choice mocks the devotion that results from the presentation of religious phenomena or experience. If one chooses a political affiliation only in order to have voted, then this decision renders meaningless the resolution of affairs of state implicating individuals and nations in matters of economy and war. It is, indeed, the making of such decisions purely for the sake of making a decision, by a person who has no stake in the outcome, that causes the greatest amount of hardship and suffering. The firmness, balance and steadiness of a person who has made a choice for the sake of making life less boring is to be feared the most, because such a person has made a choice that did not need to be made, and would have no motivation to alter or sway their course of action in a direction more compassionate or rational. "She has a very deep conviction to some very shallow ideals," it was once said of a politician friend of mine, and the result was action without knowledge, and in the end, nothing more than an illusion of relevance.

Many people criticize me for the moral and political relativism I advocate in numerous spheres; this does not mean that I have made no choices, but rather, that I have made choices - about value, about right, about religion, about society - only when such choices were required by circumstances, and only applicable to a scope in which such a choice was relevant. Kierkegaard is right that, through the process of choosing, one can come to believe, and thereby make the facade originally accepted a reality in fact. But the sort of choice he advocates, there is no need to make. Like Julian of Norwich, presented with religious phenomena or experience that make a previous life incomprehensible, a choice of religion may be the most rational or sane alternative. But God, as they say, does not speak to everyone, and those to whom God has not spoken need not formulate a reply.

When life presents itself as a fighting off of boredom, of finding nothing more or less important, the usual cause is not that a person has not committed him or herself to a certain set of beliefs or a certain course of action, but rather, that the person has not accumulated enough knowledge to understand the choices that need to be made. The post-modern stance of observing, countenancing, and experiencing a wide variety of moral, social, political and religious beliefs (among others) is the rational and reasonable approach; when one does not have enough data to make a decision, and a decision is not forced, the rational course is to gather more data, not to prematurely make an ill-informed decision. This to me would seem evident! Why, then, laud the merit of meaningless choices in order to give life meaning? The meaning of life will present itself soon enough; in the meantime, the only thing a person ought to do is live it.

Three

Re: Unshaken Hands on the Digital Street, by Michael Bugeja.

The author assumes that interaction with the physically present must take priority over the physically distant (and electronically connected). Remove the assumption in this article, and require that it be supported through argumentation, and the impact of the dialogue is lost.

In fact, it seems to me, the order of precedence of interaction ought not be resolved by proximity, which is typically merely accidental, but by two more salient factors: priority (that is, all other things being equal, the interaction that is most important to the actor takes priority) and precedence (all other things being equal, the interaction that began first takes priority). Most interaction is a case of these two factors coinciding in two people: for each, the interaction with the other is the most important of those available at the moment, and will continue until concluded. 'Interruption' is the process of one person suggesting that the importance of an interaction is greater than one already in progress, and it is (of course) the right of the interrupted to make the determination as to whether this is so.

In the pre-digital age, priority and precedence coincided with proximity. That is, all the choices of interactive possibilities were of people located in physical proximity, and physical proximity being a limited quantity, precedence assumed a much greater importance. But it would be a mistake to equate proximity with priority and precedence; with electronic communications, it is now possible to have a situation in which a communication by telephone is of greater priority than a presently existing in-person interaction. When a telephone rings, this is an interruption, and the receiver typically makes an assessment (often by looking at the caller ID) as to whether the telephone call is likely to be more important than the present interaction.

What is also true, in an increasingly crowded and mobile world, is that the value of physical proximity is diminished. In less mobile, less crowded times, one could assign a high degree of probability that a person wishing communication while in close proximity was also a person with whom communication would be a priority - it would be a spouse or child, a business associate, or a customer. But today's physical interactions are increasingly with strangers with whom one has no prior attachment, and so the probabilities have now tipped the other way: it is more likely that a telephone call, from one of the few people in the world to know your number, is of greater importance than a conversation with a stranger on the street or in the office.

When a person in physical proximity interrupts a person using a mobile telephone or similar electronic device, the probability is that their priority to the person being interrupted is less than the priority of the person being talked to. Where once people apologized for being on the telephone when a stranger wished to speak, it has become apparent that no person need apologize for talking with his spouse, child or friend, and that it is the stranger who is imposing the interruption and making the request. Breaking off a telephone call (or even shutting off an MP3 player) to help a lost tourist is a mark of altruism, and as the stranger had no prior claim on the person's time, such behaviour ought to be thanked rather than criticized when written about in an article.

The mistake being made in the article is the assumption that the virtual interaction is somehow less real, somehow inherently less important, than the proximal physical interaction. "By the time they attend college, they will come to view technology as companionship." But this is a fallacy, a confusion between the message, which is a product of the medium (a "phone" call), and the content, which is a product of the interaction (a call "from John"). More straightforwardly, the vast majority of online and electronic interactions are with real people, and there is no a priori reason to assign a real person lesser importance on the basis that they are distant (and, given such a person's prior attachment with the caller in question, very good reason to assume the opposite, that the distant person is of greater importance than the proximal). Electronic communications may be caricatured as communications with the non-real, but to draw any conclusion of importance from this characterization is to ignore an obvious and self-evident truth: real people communicate electronically.

The characterization of the product of electronic communications as "dumb mobs" is ad hominem character assassination. Were it true that drunken parties were the only consequence of such forms of virtual communication (were it true that such parties were known to be caused by such communications at all, as though they had not occurred prior to the advent of the telephone) then perhaps one might have a case. But electronic communications have conveyed messages as vital as the birth of a child, the formation of a business, the death of a relative, humanity's step on the moon, and so much more. Empirical observation shows that the party-generation capacity of electronic communications is a minimal, and infrequently employed, use of the medium. It makes no sense, then, to assign to the communication the morality of the mob.

The reactions of a person who, by dint of physical proximity, assumes priority and precedence over any and all electronic interactions are, quite frankly, the reactions of a self-important boob. They convey the impression of a person who believes that his or her mere physical presence ought to command the over-riding and immediate attention of all who come within his or her purview. They show no respect for the importance a caller may place on communicating with friends, family or associates, and demand immediate and sole attention to the matter at hand, namely, him or herself. In a world of competing interests and of increasing demands for interaction, people have learned that they must from time to time take their turn. This author strikes me as one who hasn't learned this yet.

Four

While it is a fact that each of us, as knowers, is situated in the world (situated bodies) and we learn by bumping (commonsensical understanding) into the world, what constitutes knowledge is not reducible to any of us or to our bodily presence, any more than what constitutes the English language depends upon the use of English by any speaker of the language or what constitutes mathematical truths depends upon any person's calculations.

Trivially, this is an assertion to the effect that a recognizable entity (such as knowledge, language or mathematics) that has an existence outside ourselves is not reducible to states of affairs inside ourselves. If we argue from the outset that these are social phenomena, then it follows by a matter of definition that they are not reducible to mental entities. But this is no more revealing than to say that a house is not reducible to our perception of a house. Such a statement is necessarily true, assuming the independent existence of the house.

More interesting is the perspective where we are silent on the external existence of the house. We presume that our perceptions of a house are caused by a house, but it is also possible that our perception of a house was caused by something that was not a house, or caused by the convergence of discrete perceptions that have no discrete external status at all. After all, we can have perceptions (or, at least, thoughts) of a unicorn, without at the same time asserting that a unicorn has an independent external existence.

The real question is: is our concept of a house reducible to our perceptions of a house? That is to say, can we arrive at the idea of a house through some form of collection and organization of perceptions? The logical positivist answer to this question was that we could, though the entities and mechanisms proposed (an observation language, logical inference) proved manifestly inadequate to the task. A similar stance seems to be being taken here. Our concept of a house cannot be reduced to a set of mental entities; no mechanism of inference appears to be adequate to the task.

When we look at this more closely, we see that the assertion is that the entity in question - our idea of a house - is not composed of the entities from which it is supposedly derived. That is to say, we could replace one or even all of our mental entities (thoughts, perceptions, etc.) with distinct instances of those entities, and yet the perception of a house would remain unchanged. This gives it a unique ontological status.

Consider, for example, what would happen were we to attempt the same thing with the Great Wall of China. The Great Wall is composed of bricks. Were these bricks removed, and replaced with new bricks, we would no longer say that the Great Wall of China exists; rather, we would say that we have constructed a facsimile of the Great Wall, and that the real Great Wall is now a pile of rubble somewhere.

By contrast, consider the image of Richard Nixon on a television set. This image is composed of pixels. Were we to replace one or all of the pixels (as happens 72 times a second, more or less, depending on your screen refresh rate) we nonetheless say that we are seeing the same image of Richard Nixon. The image has a continued existence even though all of its physical components have been replaced.

Why do we say that one set of pixels and another set of pixels constitute the same image? It is because the two sets of pixels are organized in a similar way. For example, both sets of pixels have two clusters of dark pixels near the mid-point of the image - what we would call Richard Nixon's eyes. We say that the two sets of pixels constitute a single image because the organizations of the two sets of pixels resemble each other. Take one of the sets of pixels, and organize them randomly, and we would say that we no longer have an image of Richard Nixon, even were we to have exactly the same set of pixels.
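The point can be made concrete with a minimal sketch of my own - not anything from the essay itself, assuming Python with numpy, and using a random array as a stand-in for the actual pixels: a physically distinct copy with the same arrangement counts as the same image, while the very same pixel values in a random arrangement do not.

    # My own illustration (assumes numpy): what identifies the image is the
    # arrangement of pixel values, not the particular pixels. A distinct copy
    # with the same arrangement matches; the same values shuffled do not.
    import numpy as np

    rng = np.random.default_rng(0)

    def similarity(a, b):
        # Normalized correlation between two greyscale arrays;
        # 1.0 means the two arrangements are effectively identical.
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        return float((a * b).mean())

    frame = rng.random((64, 64))      # stand-in for one displayed frame
    redrawn = frame.copy()            # "new pixels", same organization
    shuffled = rng.permutation(frame.ravel()).reshape(64, 64)  # same pixels, random organization

    print(similarity(frame, redrawn))    # ~1.0: still the same image
    print(similarity(frame, shuffled))   # ~0.0: no longer that image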

Now it is tempting, when identifying a similarity such as this, between sets of unrelated collections of physical entities, to say that some discrete physical entity must have caused this similarity to occur, that there is a real Richard Nixon that this image must be an image of. But of course the same reasoning would force us to agree that there is a real Donald Duck. Since Donald Duck is an animation, and does not exist except in the form of similarly organized pixels, it is evident that such reasoning is in error. But then we must ask, what is it that makes a collection of pixels into Richard Nixon or Donald Duck?

The property of being an image of Richard Nixon is not contained in any or all of the pixels. Nor may we assume that it is caused by an external entity. All external possibilities thus being exhausted, the explanation for the fact of an image being of Richard Nixon must lie in the perceiver of the image. We say that the image on the screen is an image of Richard Nixon because we recognize it as such. This organization of pixels is familiar to us, so much so that we have associated it with a name, 'Richard Nixon', and even apparently unassociated utterances, such as 'I am not a crook.'

In a similar manner, entities such as knowledge, language and mathematics (as commonly conceived) exist only by virtue of the organization of their constituent parts. No particular instance of a fact, a word or a calculation is a necessary constituent of these. But something is called a piece of knowledge, mathematics or language only if it is recognized as such.

Most of our understanding in the world of what it is like to be embodied is so ubiquitous and action-oriented that there is every reason to doubt that it could be made explicit and entered into a database in a disembodied computer. We can attain explicit knowledge through our understanding with the world, by virtue of having bodies. We can find answers to questions involving the body by using our body in the world.

There is a lot packed into the seemingly innocuous phrase, 'made explicit', and the phrase is sufficiently distracting as to throw us off our course of investigation.

Consider, again, the image of Richard Nixon. What would it be to 'make explicit' this perception? One suspects that it needs to be codified, cast into a language. Thus, we say that our perception of Richard Nixon is 'made explicit' when it is associated with the phrase 'Richard Nixon'. (Is there another sense of 'made explicit'? Does the discussant have some other process in mind?)

When the image of Richard Nixon is made explicit in this way, however, a great deal of information is lost. The original perception is abandoned - nothing remains of the organization of the pixels; the pixels, and the organization that characterized them, form no part of the phrase 'Richard Nixon'. Nor either is the act of recognition contained in this phrase. The association of the image of Richard Nixon with similar, previously experienced, phenomena, can no longer be accomplished.

What is important to recognize here is that the information has been lost, not because the original image was produced by our bodies, and that the phrase wasn't (an assertion which is, as an aside, patently false - where else did the phrase 'Richard Nixon' come from if not from our bodies?). It is because the image of Richard Nixon has been completely replaced by this new entity, which represents the original entity only through association, and not through resemblance. Only if, on being presented the phrase 'Richard Nixon', we could call to mind the original image (the original organization of pixels) would we be in the position to make the same set of associations as the original viewer.

If I am presented with 'A' I can immediately infer that 'A is for Apple'. But if I represent 'A' with 'B', then I no longer have the capacity to make that inference. There is nothing in 'B' that would lead me to say 'Apple' (and the expression 'B is for Apple' even seems absurd). Presented only with 'B', therefore, I am unable to equal the cognitive capacity of someone who has been presented with 'A'. It is not therefore surprising to see people say that the accomplishment of such cognitive capacity on the part of a system presented only with 'B' is impossible.

But it is not impossible. It is impossible only if it is impossible to present the system with an 'A' instead of a 'B'. It is impossible, for example, if the having of an experience of 'A' is something only the first sort of entity can have, and that the second sort of entity cannot have. And that comes down to this: is the stimulation of a neuron by a photon the sort of thing that only a human can have? Put that way, the question is absurd. We know that photons stimulate things other than human eyes; that's how solar power works.

Perhaps, then, recognition is the sort of thing that can only be accomplished by a human. Presented with the same organization of photonic stimuli, is it the case that only a human can recognize it as Richard Nixon, while a non-human system is restricted to, say, associating it with 'Richard Nixon'? Again, the answer to this seems to be no. While it is true that most computers today think and store information only in symbolic form, it is not reasonable to assert that they must. A computer can store an image as an image, and given multiple images, nothing prevents a computer from performing the cybernetic equivalent of recognition, the realization that this is similar to that.

The question here is whether the perception of a given phenomenon - any phenomenon - is dependent on the physical nature of that phenomenon, in such a way that the given instance of the perception could not be replaced with a similar instance without it becoming a different perception.

It is clear that the exact physical instantiation of the perception is not required. If I were to lose an eye, and were to have this eye replaced with a donor eye, such that the eye (and therefore any action of the eye) has a completely distinct physical constitution, it is not clear that I would no longer be able to see. Indeed, our intuitions and our research run in the other direction. We can replace eyes (and other body parts) without changing the perceptions that these body parts produce. Seeing with a donor eye is just like seeing with our original eye, or so much so that the difference is not worth remarking upon.

One asks, now, whether the physical constitution of the donor eye must be the same as the physical constitution of the original. Is it necessary that the donor eye be a human eye? Were the donor eye to be instead an artificial eye, strikingly similar, of course, to the original eye, but nonetheless indisputably of non-human origin, is there anything inherent in the function of this new eye that would make it not capable of enabling the same perception as the original eye? It is true that today's artificial eyes produce only shadow-like vision. But this attests only to the fact that it is difficult to make eyes.

More significantly, would it be possible, with the replacement eye, to recognize an image of Richard Nixon as being an image of Richard Nixon? It seems manifest that it would. For, as observed above, what makes an image an image of Richard Nixon is not the physical constituents of the image, nor even the origin in an external cause of the pixels, but rather, the organization of the pixels and the recognition of this organization as being similar to other perceptions we have already had. And even were all of a person's perceptions obtained through this artificial eye, there seems to be nothing inherent in the physicality of the eye that would make this impossible.

As we move through the other organs of the senses, and as we move deeper into the cerebral cortex, we wonder, then, at which point this stops being the case. At what point do perception, recognition, and cognition cease to be founded on the organization of the pixels, and start to be founded on the physical constitution of the pixels? At what point does it become necessary for a thought to be grounded in a human brain before it can be said to be a thought about Richard Nixon? The nature and function of the human eye is not different in kind from the nature and function of the deeper layers of the brain; what works with the eye would seem, in principle, to work with the deeper layers of the brain. So what is it about the human brain that makes it impossible for a computer to emulate?

If we think of computers as symbol processors, then the answer is evident. At some point, a translation from perception to symbol must occur, and at that point, so much information is lost that the processes behind that transformation are no longer capable of performing the same sort of inference a brain that does not make that transformation can perform. But is there anything inherent in computation that makes it necessary that all processing be symbolic? Is there any reason why a computer must store knowledge and concepts and ideas as strings of symbols and sentences? There is no doubt that today this is a limitation of computers. But it is not an inherent limitation; it exists because designers stipulate that at some point in processing physical input will be converted into symbolic data.

Yet, already, in some instances this never happens. When I capture an image with a digital camera, and upload it into my computer, the image is not converted into symbols (and it would be absurd to do so). The original state of the pixels, as they were influenced by photons, is what is stored. Of course, this image is not intermingled with other images, as it would be in a human brain. It is stored separately as an 'image file' and displayed or transported as an entity when requested. Even so, this, at least, is an instance of non-symbolic data representation in a computer.

Suppose, instead, that when an image were loaded onto my computer, it were compared with every other image previously stored in my computer, and that the image displayed was not the original image, but rather, whatever image (or composite) was suggested by this presentation. Something like (exactly like) recognition will then have happened, and the second stage necessary for perception will have occurred.
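As a rough illustration of that supposition - again my own sketch, assuming numpy, with random arrays and the labels 'nixon' and 'duck' standing in for whatever images the computer has previously stored - an incoming image could be answered not with itself but with whichever stored pixel array its organization most resembles:

    # My own sketch of the supposition above: images kept as raw pixel arrays,
    # and an incoming image met with whichever stored image it most resembles -
    # a crude, non-symbolic stand-in for recognition.
    import numpy as np

    def normalize(img):
        img = img.astype(float)
        return (img - img.mean()) / (img.std() + 1e-9)

    def recognize(incoming, stored):
        # Return the name of the stored array most similar to the incoming one.
        new = normalize(incoming)
        scores = {name: float((new * normalize(img)).mean())
                  for name, img in stored.items()}
        return max(scores, key=scores.get)

    # Illustrative data only: random arrays standing in for stored images.
    rng = np.random.default_rng(1)
    stored = {"nixon": rng.random((32, 32)), "duck": rng.random((32, 32))}
    incoming = stored["nixon"] + 0.05 * rng.random((32, 32))  # a noisy re-presentation

    print(recognize(incoming, stored))   # expected: "nixon"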

So long as we don't transform input into symbolic form, thereby stripping it of important information, there is no reason to assume that the cognitive capacity of a system handling that information is reduced. And if there is no reason to assume that the cognitive capacity is reduced, there is no reason to believe that the cognitive capacities of humans could not be emulated by a computer.

Human beings respond only to the changes that are relevant given their bodies and their interests, so it should be no surprise that no one has been able to program a computer to respond to what is relevant. Bodies are important in making sense of the world. Forms of life are organized by and for beings embodied like us. Our embodied concerns so pervade our world that we don't notice the way our body enables us to make sense of it. So, if we leave our embodied commonsense understanding of the world aside, as using computers forces us to do, then we have to do things the computer's way and try to locate relevant information without recourse to semantics. Prof. Dreyfus criticizes AI on epistemological grounds concerning how human bodies figure in intelligent behaviour.

It is evident that humans force computers to think symbolically, by virtue of such things as interface and operating system design. But do computers force humans to think symbolically?

The answer is no, and the reason for that answer is that humans are not symbol processors. Let me repeat that. A computer cannot force a human to reason symbolically because humans are not symbol processors.

Oh, sure, we have the capacity to understand and interpret symbols. But this is done in the same manner that we understand and interpret an image of Richard Nixon. The symbol is perceived, not as a symbol, but as an image (you have to *see* or *hear* the letter 'B'). The presentation of this symbol will call to your mind other perceptions with which it has become associated. And if you're lucky (most of us aren't, but that's another paper) the presentation of the associated 'A' will generate in you the capacity to draw the same associations as had you been presented with an instance of 'A' in the first place, leading you to think, 'Apple'.

In other words, for humans, symbols are not what we use to think, but rather, what we use to communicate. We represent a mental state (a perception, say, of Richard Nixon) with a symbol (the phrase 'Richard Nixon') and send the symbol with the hope and expectation that the presentation of the symbol 'Richard Nixon' will generate in the receiver a mental state resembling the original mental state (a perception of Richard Nixon).

What is important to keep in mind here is that the information received from other people, by means of an encoded information transfer (i.e., a sentence) does not become some different and special *kind* of information in our brains. Information transferred to us as symbols does not remain exclusively as symbols in our brain, for the precise reason that the brain immediately wants to begin associating symbols with other types of perceptions.

The fact that we process symbols in the same way we process other types of information is what makes them work. Were we to process symbols differently, then they could not evoke other types of memories, and we would have two separate and distinct regions of thought, one for symbols, and one for images, and symbols could never be associated with images, and thus someone's utterance, expressed to us as a string of symbols, "Watch out!" would never translate into action.

To suggest that receiving information symbolically instead of receiving it directly causes us to assume a different moral, ontological, or inferential stance regarding this information is absurd. It is absurd, because it assumes that symbols never evoke other forms of perception, when it is manifest that the only reason symbols work at all is because they do.

Computers do not force us to leave our commonsense understanding of the world aside. Nothing could force us to do that. Not even one of Dreyfus's papers.

Five

Adam Gaffin wrote: Sure! Now add another reason to register: Get e-mail at 4:45 p.m. every weekday advising you of any traffic problems you might encounter on the ride home (brought to you by Aamco or Midas ...)

Now *that* is something I would sign up for (or would were I the sort of person who commutes in a large city). Send it directly to my car. Have the car advise me not to take my usual route home. Information about other parts of the city available on request.

I would sign up for it because I would understand that I cannot receive such information unless I tell the vendor where to send it. I may also be willing to provide demographic information (such as, where I work and where I live, more or less) because it is directly relevant to the information I am receiving.

I might tell you what kind of car I drive if the service also offered me information on things like service updates and recalls, gasoline prices and lineup lengths at service stations along my route, and related commuter information.

The very same service delivered to PDAs or information managers might concern bus routes or commuter trains. It would be of real value to know just how far away the next bus is from my stop (some bus services are offering this already - but are newspapers?)

I don't know whether you call this 'news' but that's irrelevant. I don't segment out 'news' as a special sort of information in my life. The fact that the information marketplace segments it that way is mostly an accident of history. What we have is a voluntary exchange of informational value for informational value. Nothing wrong with that.

Terry Steichen wrote: I disagree, particularly from the providers' perspective. News publishers *must* keep some kind of a central news focus, or they risk losing their identity and their offering will degenerate into an informational hodge-podge. They'll end up competing with everyone and no one at the same time, trying to be all things to all people.

Hm.

It didn't bother them when they added a sports section and assigned reporters to travel with the team, reporters who over time came to be seen as part of the team.

It didn't bother them when they added an entertainment section and began running glossy publication stills and prefab promos for upcoming releases.

It didn't bother them when they added a lifestyles section and began running recipes, horoscopes, Dear Abby and the daily comics.

It didn't bother them when they added a fashion section, draped scantily clad models with no underwear on their front page, and featured in-depth articles on the trouble with treacle.

It didn't bother them when they added 'Wheels', a section consisting of one page of text-based promo for a new car line and eleven pages of car ads.

It didn't bother them when they added the home and gardening section, featuring columns written by marketing representatives for city nurseries and planting advice offered by seed houses.

It didn't bother them when they added a travel section, running glossy images of idyllic beaches (shanties carefully concealed by shade trees) provided by travel agencies and travelogues written by employees of these travel agencies.

Why should it bother them now?

Epilogue

There is a tension between the producers of media, both online and traditional, and the consumers of this media. Greater connectivity and greater capacity for content creation have given the consumers the capacity to produce their own media, and this poses what is deemed to be unacceptable competition to the producers, who find that their traditional modes of production, business models and distribution channels are threatened. In every domain, it seems, we hear the call for a closed network, whether it be in the form of bundled libraries, proprietary social networking sites, digital rights and authentication, learning design, or media formats. The developers of processes and standards for these multiple domains, heeding the demands of the producers, are complying with developments that have the effect, if not the intent, of preserving a one-way flow of communication. Slowly, however, the consumers who create are developing their own technologies, standards and communication channels. This is a development that ought to be embraced, not ignored or impeded. When we in education cease to heed the demands of traditional producers, and open ourselves wholeheartedly to the idea that content is created, distributed and owned by the consumer, only then will the promises of the network age be realized, and the era of online learning truly begun.



Stephen Downes, Casselman, Canada
stephen@downes.ca
