Interactivity: Another Tack On It (3)
- Interactivity and Best Practices in Web Based Training
- Interactivity: Another Tack On It (Part 1)
- Interactivity: Another Tack On It (Part 2)
- Interactivity: Another Tack On It (Part 3)
- Interactivity: Another Tack On It (Part 4)
Posted to WWWDEV 28 October 98
Because interactivity is a subject near and dear to my heart the ongoing discussion is of interest to me. As readers of WWWDEV know, this means email...
Dave <djaeger@GULF.NET> writes: High interactivity learning materials are generally more effective learning materials, and viewers spend more time with them. I can't agree with this. I have developed a very simple page turner (level 1, the lowest level of interactivity) that was just as effective as an electronic panel simulator (level 5, the highest). Attaching a certain amount of time to a particular level is almost impossible. A level 5 simulation exercise with a very simple fault for the user to isolate could easily be completed in a matter of minutes.
I should point out that a statistical generalization, such as I have offered, is not refuted by a single instance, such as Dave provides.
I do agree that, given the unlimited number of possibilities and choices a user has at higher levels of interactivity, the potential to spend more time does exist. However, it is not due to the level alone, but to a combination of the level and the strategies employed.
Yes. This is a good point. A video game has a very high level of interactivity. However (at least the way I play them) the time spent can be very short. The sort of interactivity demanded by a video game requires quick response time and reflex actions. Other programs, which still provide high degrees of interactivity, may require more thought and reflection on the part of the user, which would increase the time spent by the user.
None of that alters my main point, which is: greater interactivity tends to increase time spent. Yes, there may be counter-instances. Yes, other variables are involved. But the main point stands.
I think that a better measure of interactivity would be to construe it as the ratio of the amounts of information exchanged by each participant. The closer the ratio is to 1:1, the higher the interactivity; the further the ratio is from 1:1, the lower the interactivity. For example, consider a typical page turner. Assume an average of 5K per page. The act of clicking on a link will send (maybe) 512 bytes. Thus we have a 10:1 ratio (web server : viewer), which is fairly low. You forgot that the viewer had to read (take in) all 5K of data on that page, which would make the ratio 1:1: 5K of information is there to read; 5K of information was read.
No. This misconstrues the elements of the ratio I was describing. What we are measuring is the *flow* of information: the *transfer* of information from one entity to another. Knowing that there is simply 5K of data on a page tells us nothing about interactivity. Knowing that 5K was *transferred* (through the process of being read) does tell us something. The transfer of information from book to human constitutes *one* side of our ratio. The other side would be composed of the transfer of information from human to book. This would be much lower, ranging from just a few bits (page turns) to a few hundred bits (annotations).
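The flow-ratio idea above can be sketched in a few lines of code. This is a hypothetical illustration, not anything from the original discussion: the function name and the byte figures (a 5K page, a ~512-byte link click) are assumptions taken from the example in the exchange.

```python
def interactivity_ratio(bytes_one_way, bytes_other_way):
    """Return the flow ratio (larger flow : smaller flow) between two parties.

    A result close to 1.0 indicates a balanced, highly interactive
    exchange; a large result indicates mostly one-way transfer.
    """
    if bytes_one_way <= 0 or bytes_other_way <= 0:
        raise ValueError("no information flow in one direction")
    hi = max(bytes_one_way, bytes_other_way)
    lo = min(bytes_one_way, bytes_other_way)
    return hi / lo

# The page-turner example: the server transfers a 5K page to the
# reader, and the reader sends back a ~512-byte request (a link click).
print(interactivity_ratio(5 * 1024, 512))  # 10.0, i.e. a 10:1 ratio
```

Note that only information actually *transferred* enters the ratio: the 5K counts because it is read, and the return flow is only the clicks or annotations the reader sends back, which is why the ratio stays far from 1:1.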
Computer-based training (one-on-one with the computer), and forms like it, needs an interactivity level that relates to the amount of influence the user has on the material. When computers become more advanced in artificial intelligence and no longer require manipulation from a user, another form of interactivity grading should be pursued.
The phrase "amount of influence" is too vague to use in this context.
As Gary Powell (I think) stated yesterday, interaction in CBT occurs if the computer learns more about the user as a consequence of the interaction. 'Learns' in this context is a misnomer - the computer is able to supply more concrete data to open variables related to this particular user (thus reducing the number of possible states the user could be in, from the computer's perspective).
But "influence" suggests further that the computer needs to act on this information. Not so. While the information stored provides a potential for action, it may be the case that the computer acts only if certain information is provided, and not otherwise. It may receive some information it never acts on. The same is true of humans. For example, I may believe that Fred is a liar, and so treat him cautiously. I learn the new fact that Fred misrepresented his age on his driver's license. This new information reinforces my concept of Fred. Thus, I still treat him cautiously. Undeniably I received new information, and an interaction occurred between me and the source of the information, but no observable change in behaviour resulted.
We want very definitely to separate the ideas of 'information transfer' and 'modification of behaviour'. The latter is too narrow a criterion for interactivity.