Immersive Collaboration: Casting Presence in a Leading Role

October 24, 2009

On Thursday, I experienced a wonderful example of effective training in Second Life. During the weekly Train For Success meetup, Mark Jankowski of Virtual Training Partners conducted a portion of his negotiations training course and gave a tour of his training grounds. It was impressive not because of fascinatingly complex scripted objects, or remarkably crafted works of practical art that draw "ooohs" and "ahhhhs". It was impressive because of the way Mark used the varied settings to reinforce & complement the core learning dialog.

It got me to thinking…what role does “presence” play in immersive collaboration?  And, how best can we utilize it?

Presence is defined as "the state or fact of being present, as with others or in a place." In other words, a sense of self and other…that you, the participant, are in a "real" and "tangible" space with other intelligent (or semi-intelligent) beings.

Many folks designing and deploying immersive collaborative experiences (learning being one such) miss out on one of the most powerful aspects of this technology by failing to intentionally design for presence. All too often I see small group interactions and trainings that require nothing more of the participants than to talk, look at a slide/image, and type.

Presence is precisely why immersive environments can be far superior to many traditional technology-enabled approaches, like webinars, conference calls, online forums, email, IM, and the like. There is a higher potential sense of presence due in part to more human senses being engaged. Also, the mind must process spatial orientation within the environment, which some argue increases subject matter retention (most notably analyst Erica Driver of ThinkBalm in a recent blog post).

But presence does not necessarily require highly realistic builds. Simple spaces with a few images, builds, and tools smartly (and intentionally) laid out can often be more compelling than an intricately built, near-photorealistic space. And, from Mark's example above, the context in which the setting occurs (a baseball skybox overlooking the field during his explanation of the baseball negotiation case study) plays a powerful role as well.

So how does presence play into immersive collaboration? And how can we best use it? With collaboration, communication (sharing of thought & intent) between individuals is key. Crafting an experience that makes participants feel they are truly "in" a space among others increases a group's capacity to communicate effectively, allowing for a multitude of methods/channels that more closely resemble communication in actual physical presence. As anecdotal evidence, I've consistently found my geographically-dispersed development team accomplishes more in 30 minutes in an immersive meeting than in hours, even days, spent in a web-based collaboration application.

Remember, this is still a relatively new technology and thus the industry is still trying to figure out best practices.  As a start, here are some guiding questions that may help you better design for presence:

  • Are my goals for the experience clearly defined?
  • How can the setting, the builds, and the tools within the space help me achieve the goals?
  • Is there a setting(s) that best complements the topic?
  • How will the overall setting impact participants?  What impact do I need it to have?
  • How do I feel when I see/interact with/move about the area?
  • What kind of interaction do I need participants to have with the setting/with the presenter/with each other?
  • Is it important for participants to see one another’s avatars? Why or why not?
  • What’s the difference between conducting it here in the immersive environment and conducting it as a webinar?
  • What if participants are flying around?  Will this change my build/the interactions/my presentation?
  • How much movement/navigation will I require of participants?

Brainstorming Immersively: Of Streams, Position, & Coordination

August 30, 2009

Friday was the 8th installment of the ThinkBalm Innovation Community's Brainstorming series. I've consistently found ThinkBalm immersive events incredibly beneficial, particularly in refining my understanding of enterprise use of immersive technologies. This one was no exception.

It was a focused, vigorous 1-hour discussion led by Erica Driver & Sam Driver (ThinkBalm) on the topic of "How to write an immersive technology business case." We used my newly released BrainBoard version 1 as our primary collaboration orchestrator. At the close of the session, Erica gathered evaluation feedback from the participants using the Attitudometer.

Brainstorming Area

Here are some observations & thoughts from my experience:

I saw some fascinating non-verbal problem solving and coordination

Erica & Sam structured the discussion agenda around 3 points, so contributions appearing on the board were placed under one of the 3. The initial setup used the 4 quadrants on the main board to sort the user-generated notes: one quadrant per point, with miscellaneous notes sorted into the 4th. The supplemental board was placed to the side just in case we needed more room. Well, we needed it. Due to the sheer number of user notes, we quickly moved the 3rd point and the miscellaneous notes to the supplemental board, leaving the main board for the 1st & 2nd point notes. I emphasize "WE"…all this was conducted without verbal coordination. It happened organically (or digitally, I guess). Several individuals visually observed the need for more room and coordinated a solution through observation and interaction, all without breaking the momentum of the ongoing discussion.

Simultaneous voice & text = better discussion communication

My computer was not playing nice that day, thus I could hear participant voices, but no one could hear mine (a fact that my wife humorously pointed out might have been a good thing).

However, the curse was actually a blessing.  I was reminded yet again that although voice is a powerful & flexible communication tool, it is inferior to text for synchronous multi-person contributions.  I found myself following the stream of voice discussion, while simultaneously contributing relevant thoughts & ideas into notes on the BrainBoard.  I could see several others doing the same thing, their thoughts popping into existence onto the board.  This visual/textual discussion stream at times tracked the vocal, qualitatively and quantitatively expanding & enhancing it.  At other times, it branched away from the vocal, following the rabbit-like course of thought in a separate exploration of the core discussion topic.

This approach to brainstorming allows multiple contribution roles

Brainstorming Main Board

I also found that the way in which I contributed to the discussion changed over the course of the hour. In the first half, I followed the voice discussion rather closely, contributing many textual notes to the board without much concern for their position (or position relative to other notes). My role seemed to be primarily to add thought. As we moved into the 2nd half, I found myself spending more time reviewing & sorting the notes. I started grouping similar notes together, looking for patterns that would inform the ongoing discussion. The act of positionally sorting the notes during the discussion seemed to help me connect the concepts into a working, developing understanding of the overall context.

I would be fascinated to learn what impact the repositioning/sorting of the notes had on other participants’ understanding of and contributions to the discussion.

Immersive/virtual environments enable positional relevance in discussions

Utilizing voice and text simultaneously enabled multiple discussion streams to which participants could contribute. Having these streams in an immersive environment, where participant contributions exist disparately and spatially as notes on the BrainBoard, allowed their positions to become relevant. For example, you contribute a thought in a note. You move that note to a position on the board. As more contributions are added, someone moves your thought next to another. Seeing the two thoughts placed closely together sparks an insight in yet another person, who notes their contribution on the board.
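The dynamic above can be sketched as a tiny data model. To be clear, BrainBoard's internals aren't public, so every name below is hypothetical; this is just an illustration of why a board where anyone can reposition anyone's note lets spatial grouping emerge without verbal coordination.

```python
from dataclasses import dataclass, field


@dataclass
class Note:
    """A single contribution pinned to the board."""
    author: str
    text: str
    x: float = 0.0  # horizontal position on the board
    y: float = 0.0  # vertical position on the board


@dataclass
class Board:
    """A shared surface where note position carries meaning."""
    notes: list = field(default_factory=list)

    def add(self, note):
        self.notes.append(note)

    def move(self, note, x, y):
        # Anyone may reposition anyone's note -- this is what lets
        # spatial clustering emerge organically during the discussion.
        note.x, note.y = x, y

    def cluster(self, radius=1.0):
        """Group notes whose positions fall within `radius` of an existing group."""
        groups = []
        for note in self.notes:
            for group in groups:
                if any(abs(note.x - n.x) <= radius and abs(note.y - n.y) <= radius
                       for n in group):
                    group.append(note)
                    break
            else:
                groups.append([note])
        return groups
```

The key design point is that `move` has no owner check: position is a shared, mutable channel of communication, separate from the note's text.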


Meerkat Viewer – The Buggy Coolness

August 23, 2009

Downloaded and tested the Meerkat V0.1.6 viewer this weekend.  Despite the occasional crashes and inoperable extra features, I like what it has to offer.

Avatar List

Accessible via the “Meerkat” dropdown menu on the top bar, this little window tool gives some pretty handy features.

  • Shows all nearby avatars
  • Allows you to mark, track, get key, and instantly TP to avatars


Chat Bar as Command Line

Probably my favorite little extra. With it, you can set your own custom chat-based commands. The teleport-to-camera-position command is great fun; however, your mileage may vary. After I TP'd a few times, other avatars could not see me.
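The idea behind a chat bar doubling as a command line is easy to sketch: intercept outgoing chat, and if the first word matches a registered command, run a handler instead of sending the text as chat. Meerkat's actual implementation lives in the viewer's source; the command names and functions below are made up purely to show the pattern.

```python
# Registry mapping command names to handler functions.
commands = {}


def command(name):
    """Decorator that registers a chat-bar command handler."""
    def register(fn):
        commands[name] = fn
        return fn
    return register


@command("tp2cam")
def teleport_to_camera(args):
    # In a viewer this would move the avatar to the camera position;
    # here we just return a marker string for illustration.
    return "teleporting to camera"


def handle_chat(line):
    """Route chat input: commands are intercepted, plain chat passes through."""
    parts = line.split()
    if parts and parts[0] in commands:
        return commands[parts[0]](parts[1:])
    return None  # not a command; would be sent as normal chat
```

Anything not recognized falls through untouched, which is why the feature can hide inside the ordinary chat bar without getting in the way.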

Command Line

Visual Environmental Settings

This one is an immersive photographer's dream. Via the arrowed button on the bottom toolbar, you can pull up a rather lengthy list of environmental settings that change the way the world looks.

Environmental

Make your region look ghostly…

Ghost

…or all funky alien-like.

Alien

Again, your mileage may vary. I encountered several inventory-locking quirks that I suspect have something to do with the viewer. It also crashed on me once when I selected one of the environmental presets. However, the neat extras make up for the early-version bugs. I won't be using it for a business meeting or event yet, but for fun and experimentation it's great. I'm definitely keeping my eye on this viewer. Visit the Meerkat development home to download and try it out yourself.

Have you tried it yet?  What were your impressions/favorite features?


iED Presentation on Immersive Collaboration Tools

July 24, 2009

For my presentation on the most practical immersive collaboration tools available for Second Life & OpenSim at Monday's Immersive Education Day, I will not be showing them via PowerPoint, but rather as a tour through the immersive environments. I'll post more on the tools I chose to highlight later. For now, here are locations and instructions on how to join the tour in-world.

Second Life

Time: approximately 1pm CDT
Location: ThinkBalm Island
http://slurl.com/secondlife/ThinkBalm%20Island/52/107/38

  • We will spend our time in Second Life surveying tools at the top of the ivory tower on the west island.
Second Life location of the iED presentation

ReactionGrid

Time: approximately 1:30pm CDT
Location:  Jeff Lowe Region

  • When you log in for the first time, you will appear in the welcome sim. On the platform, there are touch-click teleports to several of the core regions. Click on the region "Jeff Lowe".
Where you will first arrive

  • When you arrive at the region, you should see the tour platform near the volcano lab. That's where the demo will occur.

ReactionGrid Demo Area

EDIT:

Here is a link to my field notes on each of the tools I will cover:
http://docs.google.com/View?id=dfnzv8zw_455hk5xn2xs


The DIY Immersive Laserpointer (or, the Pinocchio Technique)

May 25, 2009

pointing presenter

Anyone who has presented, trained, or demonstrated a tool within an immersive environment knows just how difficult it can be to reference a specific position when communicating with others. There is no simple equivalent to the physical-world gestures of pointing your arm and hand, or using a laser pointer to direct focus.

Although I have just released a 3D Pointer tool, I also wanted to provide a simple, though functionally limited, alternative for those do-it-yourselfers (or cheapskates) out there.

Here's a quick tutorial on how to make your own laser pointer for the Second Life & OpenSim immersive environments:

1.  First, create (rez) a cone.

Laser Pointer 1

2. Now, increase the height SIZE of the cone (the Z axis) to about 5 meters.

Laser pointer height 2

3. With the mouse, RIGHT CLICK on the object, and from the pie menus select MORE >, ATTACH >, HEAD >, NOSE. This will attach the object to your nose, replacing anything you currently have attached there (for example, a specialized avatar component).

Laser Pointer attach 3

4.  Laugh at how silly you look.

Laser Pointer funny 4

5.  Right click on the object and select EDIT.  Adjust the ROTATION of the Y AXIS to 90 degrees.  The object should now be pointing forward (still silly looking).

Laser Pointer rotate 5

6. Finally, let's reposition it. Notice the BLUE, GREEN, & RED axis arrows running through it? Click & hold on the RED arrow, then slide it forward. Click & hold on the BLUE arrow, then slide it down so the object is almost level with your chest.

Laser pointer adjust 6

YOU ARE DONE. Try it out by moving your mouse around. Notice how the pointer now points toward where your mouse is located. You should probably rename the object (so you can find it more easily in your inventory) and maybe change the color or texture. Just detach it when finished. If you want to use it again, RIGHT CLICK on the object in your inventory & choose WEAR.
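For the curious, the 90° Y-axis rotation in step 5 is what re-aims the cone: a freshly rezzed cone's long axis points up along local Z, and rotating 90° about Y swings that axis forward along X. A quick sanity check of the math in plain Python (no Second Life libraries involved, just standard rotation about the Y axis):

```python
import math


def rotate_about_y(v, degrees):
    """Rotate the vector (x, y, z) about the Y axis by the given angle."""
    x, y, z = v
    a = math.radians(degrees)
    return (x * math.cos(a) + z * math.sin(a),
            y,
            -x * math.sin(a) + z * math.cos(a))


# The cone's long axis starts as "up" (local +Z)...
up = (0.0, 0.0, 1.0)

# ...and a 90-degree rotation about Y turns it to "forward" (local +X).
forward = rotate_about_y(up, 90)
```

So step 5's single rotation value is doing all the aiming; steps 3 and 6 only decide where on the avatar the pointer sits.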

For those of you who might need a more flexible pointer (one not attached to your avatar, that can easily point out exact positions within the environment, and that multiple people can easily share), you might check out my newly released 3D Pointer.


Terminology Tossed Salad (3DTLC pt 1)

April 25, 2009

Hands down, no doubt, 3DTLC was the best conference I've attended. Since there were so many stimulating talks, questions, & conversations, I will attempt to synthesize my observations topically over the next few postings.

Terminology Tossed Salad: What do we call this technology? Is it Virtual Reality, Virtual Worlds, Immersive Internet, Immersive Technology, 3D Virtual Environments, Multiple User Virtual Environments? Keynote speaker Joe Little (BP) prefers 3D Virtual Environment (3DVE), the ThinkBalm team prefers "Immersive Internet", although most individuals' default is still "virtual worlds". I have a problem with the terms "virtual worlds," "virtual reality," and "virtual" for that matter. All carry baggage that clouds the clear perception and potential of the technology.

First, the "virtual". I'm a huge sci-fi fan (but was humbled at the ThinkBalm Innovation Community meetup by the sci-fi prowess of Mark Oehlert & Sam Driver…I am in awe). Sci-fi is a double-edged sword for immersive technology. It sparks the imagination and feeds the possible, but it brings with it the entire genre: it's entertainment, it's accessible only to those with advanced technological skills, it's speculative & not practical, it's not real. Basically, it's niche entertainment for nerds. To some, the term "virtual" brings images of "Tron" or "Lawnmower Man" or any number of low-budget '80s alien or mind-control films; a collection of entertainments that reinforces the "unrealness" of the virtual. Link that term with any other, and you still bring the associated baggage.

As far as “reality” goes, isn’t it all real?  We don’t call talking on the phone “virtual” or “simulated” interaction.

I choose to use the term "immersive". It brings with it a sense of presence, of space, of surrounding oneself in something. You can immerse yourself in a sport, in a hobby, in work, in an environment. It focuses on the person experiencing and on the potential for the subject being experienced to impact that person. The degree of immersion is important, since the greater the sense of presence, the greater the engagement in the subject. Immersive technology is thus any technology that incorporates the three-dimensional in creating an experience of presence.