Matt Bower, May 26, 2017
Commentary by Stephen Downes

Matt Bower refers to himself in the third person throughout this blog post introducing us to his work with the Blended Synchronous Learning project. He introduces us to the idea of a "blended-reality environment" (which should really just be shortened to 'blended environment'): "Video and sound recording equipment captured activity in a F2F classroom, which was streamed live into a virtual world so that remote participants could see and hear an instructor and F2F peers. In-world activity was also simultaneously displayed on a projector screen, with the audio broadcast via speakers, for the benefit of the F2F participants." This makes sense, but in my experience the key is to ensure the video is large enough to display near-life-size avatars or images, and to ensure the audio in each direction is of sufficient volume and timbre to be accepted as an equal voice. The paper itself is behind a paywall at BJET, but there's a (preprint?) copy at ResearchGate.