Postulate on the Importance of Avatar Visibility

January 30, 2010

All the recent improvements in the ways we as humans interact with the digital (see Project Natal for a good example), and Cameron’s Avatar movie with its depiction of direct brain-interface control, got me thinking: how important is it, really, to see the avatar you control?

Most immersive environments either show your avatar within the environment (a 3rd person overhead view) or show the environment directly from the avatar’s eyes (a first person, mouse-look view).  Many give you the option to switch between viewpoints, and some allow you to “disconnect” your viewpoint from the avatar (like a movie depiction of an out-of-body experience).  A view of the avatar helps the user control it, making it easier to move & interact.

I postulate that the importance of seeing your own avatar decreases (and the importance of seeing others’ avatars increases) as it becomes easier to make your avatar mimic your own body.  Here’s a simple little infographic I sketched this afternoon to convey my thought:

I would be interested in hearing if others agree or disagree, and their rationale.


Immersive Collaboration: Casting Presence in a Leading Role

October 24, 2009

NY Theater

On Thursday, I experienced a wonderful example of effective training in Second Life.  During the weekly Train For Success meetup, Mark Jankowski of Virtual Training Partners conducted a portion of his negotiations training course and gave a tour of his training grounds.  It was impressive not because of fascinatingly complex scripted objects or remarkably crafted works of practical art that draw “ooohs” and “ahhhhs.”  It was impressive because of the way Mark used the varied settings to reinforce & complement the core learning dialog.

It got me to thinking…what role does “presence” play in immersive collaboration?  And, how best can we utilize it?

Presence is defined as “the state or fact of being present, as with others or in a place.”  In other words, a sense of self and other…that you, the participant, are in a “real” and “tangible” space with other intelligent (or semi-intelligent) beings.

Many folks designing and deploying immersive collaborative experiences (learning being one such) miss out on one of the most powerful aspects of this technology by failing to intentionally design for presence.  All too often I see small group interactions and trainings that require nothing more of participants than to talk, look at a slide or image, and type.

Presence is precisely why immersive environments can be far superior to many traditional technology-enabled approaches, like webinars, conference calls, online forums, email, IM, and the like.  There is a higher potential sense of presence due in part to more human senses being engaged.  Also, the mind must process spatial orientation within the environment, which some argue increases subject-matter retention (most notably analyst Erica Driver of ThinkBalm in a recent blog post).

But presence does not necessarily require highly realistic builds.  Simple spaces with a few images, builds, and tools smartly (and intentionally) laid out can often be more compelling than an intricately built, near-photorealistic space.  And, from Mark’s example above, the context in which the setting occurs (a baseball skybox overlooking the field during his explanation of the baseball negotiation case study) plays a powerful role as well.

So how does presence play into immersive collaboration?  And how can we best use it?  With collaboration, communication (the sharing of thought & intent) between individuals is key.  Crafting an experience that makes participants feel more like they are “in” a space among others increases a group’s capacity to communicate effectively, allowing for a multitude of methods/channels that more closely resemble communication in an actual shared physical space.  As anecdotal evidence, I’ve consistently found my geographically dispersed development team accomplishes more in 30 minutes in an immersive meeting than in hours, even days, spent in a web-based collaboration application.

Remember, this is still a relatively new technology and thus the industry is still trying to figure out best practices.  As a start, here are some guiding questions that may help you better design for presence:

  • Are my goals for the experience clearly defined?
  • How can the setting, the builds, and the tools within the space help me achieve the goals?
  • Is there a setting(s) that best complements the topic?
  • How will the overall setting impact participants?  What impact do I need it to have?
  • How do I feel when I see/interact with/move about the area?
  • What kind of interaction do I need participants to have with the setting/with the presenter/with each other?
  • Is it important for participants to see one another’s avatars? Why or why not?
  • What’s the difference between conducting it here in the immersive environment and conducting it as a webinar?
  • What if participants are flying around?  Will this change my build/the interactions/my presentation?
  • How much movement/navigation will I require of participants?

Brainstorming Immersively: Of Streams, Position, & Coordination

August 30, 2009

Friday was the 8th installment of the ThinkBalm Innovation Community’s Brainstorming series.  I’ve consistently found ThinkBalm immersive events incredibly beneficial, particularly in refining my understanding of enterprise use of immersive technologies.  This one was no exception.

It was a focused, vigorous one-hour discussion led by Erica Driver & Sam Driver (ThinkBalm) on the topic of “How to write an immersive technology business case.”  We used my newly released BrainBoard version 1 as our primary collaboration orchestrator.  At the close of the session, Erica gathered evaluation feedback from the participants using the Attitudometer.

Brainstorming Area

Here are some observations & thoughts from my experience:

I saw some fascinating non-verbal problem solving and coordination

Erica & Sam structured the discussion agenda around 3 points, so contributions appearing on the board were placed under one of the 3 points.  The initial setup was to use the 4 quadrants on the main board to sort the user-generated notes: one quadrant for each of the 3 points, with any miscellaneous notes sorted into the 4th quadrant.  The supplemental board was placed off to the side just in case we needed more room.  Well, we needed it.  Due to the sheer number of user notes, we quickly moved the 3rd-point and miscellaneous notes to the supplemental board, leaving the main board for the 1st- and 2nd-point notes.  I emphasize “WE”…all of this was conducted without verbal coordination.  It happened organically (or digitally, I guess).  Several individuals visually recognized the need for more room and coordinated a solution through observation and interaction alone.  And all this without breaking the momentum of the ongoing discussion.

Simultaneous voice & text = better discussion communication

My computer was not playing nice that day: I could hear participant voices, but no one could hear mine (a fact that my wife humorously pointed out might have been a good thing).

However, the curse was actually a blessing.  I was reminded yet again that although voice is a powerful & flexible communication tool, it is inferior to text for synchronous multi-person contributions.  I found myself following the stream of voice discussion, while simultaneously contributing relevant thoughts & ideas into notes on the BrainBoard.  I could see several others doing the same thing, their thoughts popping into existence onto the board.  This visual/textual discussion stream at times tracked the vocal, qualitatively and quantitatively expanding & enhancing it.  At other times, it branched away from the vocal, following the rabbit-like course of thought in a separate exploration of the core discussion topic.

This approach to brainstorming allows multiple contribution roles

Brainstorming Main Board

I also found that the way I contributed to the discussion changed over the course of the hour.  In the first half, I followed the voice discussion rather closely, contributing many textual notes to the board without too much concern for their position (or their position relative to other notes).  My role seemed to be primarily to add thought.  As we moved into the 2nd half, I found myself spending more time reviewing & sorting the notes.  I started grouping similar notes together, looking for patterns that would inform the ongoing discussion.  The act of positionally sorting the notes during the discussion seemed to help me connect the concepts into a working, developing understanding of the overall context.

I would be fascinated to learn what impact the repositioning/sorting of the notes had on other participants’ understanding of and contributions to the discussion.

Immersive/virtual environments enable positional relevance in discussions

Utilizing voice and text simultaneously enabled multiple discussion streams to which participants could contribute.  Having these streams in an immersive environment, where participant contributions exist disparately and spatially as notes on the BrainBoard, allowed their position to become relevant.  For example, you contribute a thought in a note.  You move that note to a position on the board.  As more contributions are added, someone moves your thought next to another thought.  Seeing the two thoughts placed closely together sparks an insight in yet another person, who adds their own contribution to the board.
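To make the idea of positional relevance a little more concrete, here is a minimal, purely illustrative Python sketch (not BrainBoard’s actual implementation; the Note structure, the example notes, and the distance threshold are all assumptions of mine) of how the proximity of two contributed notes can itself carry meaning:

```python
from dataclasses import dataclass
from itertools import combinations
import math

@dataclass
class Note:
    author: str
    text: str
    x: float  # hypothetical position of the note on the board
    y: float

def nearby_pairs(notes, threshold=0.5):
    """Yield pairs of notes that sit close together on the board.

    The juxtaposition itself is information: two thoughts dragged next to
    each other invite a comparison that neither note states on its own.
    """
    for a, b in combinations(notes, 2):
        if math.hypot(a.x - b.x, a.y - b.y) <= threshold:
            yield a, b

board = [
    Note("Erica", "A business case must name a cost baseline", 1.0, 2.0),
    Note("Jeff",  "Travel savings are the easiest to quantify", 1.2, 2.1),
    Note("Sam",   "Pilot with a single team first", 4.0, 0.5),
]

for a, b in nearby_pairs(board):
    print(f'"{a.text}" sits next to "{b.text}" -- worth discussing together')
```

The point of the sketch is simply that position is data: once notes have coordinates, rearranging them becomes a silent, visible form of contribution.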


Meerkat Viewer – The Buggy Coolness

August 23, 2009

Downloaded and tested the Meerkat V0.1.6 viewer this weekend.  Despite the occasional crashes and inoperable extra features, I like what it has to offer.

Avatar List

Accessible via the “Meerkat” dropdown menu on the top bar, this little window tool offers some pretty handy features.

  • Shows all nearby avatars
  • Allows you to mark, track, get key, and instantly TP to avatars

Avatar List

Chat Bar as Command Line

Probably my favorite little extra.  With it, you can set your own custom chat-based commands.  The “Teleport to camera position” command is great fun; however, your mileage may vary.  After I TP’d a few times, other avatars could not see me.

Command Line

Visual Environmental Settings

This one is an immersive photographer’s dream.  Via the arrowed button on the bottom toolbar, you can pull up a rather lengthy list of environmental settings that change the way the world looks.

Environmental

Make your region look ghostly…

Ghost

…or all funky alien-like.

Alien

Again, your mileage may vary.  I encountered several inventory-locking quirks that I suspect have something to do with the viewer.  It also crashed on me once when I selected one of the environmental presets.  However, the neat extras make up for the early-version bugs.  I won’t be using it for a business meeting or event yet, but for fun and experimentation it’s great.  I’m definitely keeping my eye on this viewer.  Visit the Meerkat development home to download and try it out yourself.

Have you tried it yet?  What were your impressions/favorite features?


iED Presentation on Immersive Collaboration Tools

July 24, 2009

For my presentation on the most practical immersive collaboration tools available for Second Life & OpenSim at Monday’s Immersive Education Day, I will not be showing them via PowerPoint, but rather as a tour of the immersive environments themselves.  I’ll post more on the tools I chose to highlight in a later post.  For now, here are the locations and instructions for joining the tour in-world.

Second Life

Time: approximately 1pm CDT
Location: ThinkBalm Island

http://slurl.com/secondlife/ThinkBalm%20Island/52/107/38

  • We will spend our time in Second Life surveying tools at the top of the ivory tower on the west island.
Second Life location of the iED presentation

ReactionGrid

Time: approximately 1:30pm CDT
Location:  Jeff Lowe Region

  • When you log in for the first time, you will appear in the welcome sim.  On the platform, there are touch-click teleports to several of the core regions.  Click the teleport for the region “Jeff Lowe.”
Where you will first arrive

  • When you arrive at the region, you should see the tour platform near the volcano lab.  That’s where the demo will occur.

ReactionGrid Demo Area

EDIT:

Here is a link to my field notes on each of the tools I will cover:
http://docs.google.com/View?id=dfnzv8zw_455hk5xn2xs


Experientialization vs. Visualization in Immersive Development

July 12, 2009

I recently assisted ThinkBalm with their experiment in immersively displaying the results of their recent study on the business value of immersive technology.  They wanted a “tour” experience structure, requiring the displays to be “stations” along a path that participants traveled.  It was quite a challenge (and a ton of creative fun) developing stations that clearly, quickly, and interactively conveyed the core message of each result topic, while also attempting to maintain a visual and conceptual thematic thread throughout.  Here are a few thoughts & bits of learning from the experience:

Immersive displays require thinking experientially, not merely visually:

Early in the process of ideation and development, I realized this was not merely data visualization (as most people refer to this type of project).  The builds needed to be not only visual, but also to possess dimensions of position, ordering, presence, interactivity, and consideration of self in relation to others (considerations not typical when developing webinars, visualization graphs, or PowerPoint summaries of results).  True, participants would need to gather a large amount of the total message at a glance, so the visual was important.  But, more importantly, we had to explore & answer questions addressing these additional immersive dimensions.  Questions such as:

  • From what position (Avatar & camera angle) would participants view the display?
  • Would different angles of view convey different meanings?
  • How many participants would be experiencing the display at a time?
  • How long do they need to remain to assimilate the message?
  • What would they converse about when cooperatively interacting with the display?
  • What do we want them to talk about?
  • How will what they experienced before impact how they interpret and experience what follows?

Early development sketch of the Barrier Gauntlet (ThinkBalm Data Garden display)

Know the core messages:

For every display, it is critical to identify the core message/primary takeaway.  For the Deploy-2-Save game, it was that businesses chose immersive tech over alternatives to reduce costs and increase engagement.  Every other creative decision/possibility was guided by this primary message.  Ideas on shape, color, scale, position, transparency, rigidity, interactivity, automation, etc. should be accepted or rejected based on whether they make the core message easier or more difficult to understand.
TB Experience-Barrier Gauntlet 1

With text, less = more:

A picture is worth a thousand words.  So avoid text when possible.  Use constructs that convey concepts, and use them to replace text when appropriate.

Participants, not viewers:

An experience is worth a thousand pictures.  So, in pulling reports, data, information, and presentations into immersive environments, focus on what the participant will experience to ensure the correct takeaway.  Also, remember to consider how that experience will be impacted/changed by collaborative participation.
Interactive display in ThinkBalm Data Gardens

Use textures to make colors accessible to colorblind:

I used a lot of color throughout to differentiate, communicate, and establish thematic throughlines.  During one of our first shakedown tours, one of the participants was red-green colorblind, which dramatically impacted his experience.  Sam (ThinkBalm) brilliantly applied a specific texture to each color of prim, allowing participants with a color-vision disability to easily distinguish a “red” display element from a “green” one.  It’s a great practice I will continue for all future builds.
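For anyone who wants to bake this practice into their own builds, here is a minimal sketch of the underlying idea; the category names, RGB values, and texture names below are my own placeholders, not the actual Data Gardens assets.  Each semantic category gets both a color and a distinct texture, so no meaning rides on hue alone:

```python
# Redundant encoding: every category is distinguished by BOTH color and texture,
# so a red-green colorblind participant can still tell display elements apart.
# The values here are illustrative placeholders, not the Data Gardens assets.
CATEGORY_STYLE = {
    "cost_savings": {"color": (0.8, 0.1, 0.1), "texture": "diagonal_stripes"},
    "engagement":   {"color": (0.1, 0.7, 0.1), "texture": "dots"},
    "barriers":     {"color": (0.9, 0.8, 0.1), "texture": "crosshatch"},
}

def style_for(category):
    """Return the color/texture pair to apply to a display prim for a category."""
    return CATEGORY_STYLE[category]

print(style_for("engagement"))  # {'color': (0.1, 0.7, 0.1), 'texture': 'dots'}
```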

If you would like to visit the experiment yourself, go visit the ThinkBalm Data Gardens in Second Life.  If you haven’t yet immersed yourself into Second Life and still want to see the results, watch the video tour.


The DIY Immersive Laserpointer (or, the Pinocchio Technique)

May 25, 2009

pointing presenter

Anyone who has presented, trained, or demonstrated a tool within an immersive environment knows just how difficult it can often be to reference a specific position when communicating with others.  There is no simple in-world equivalent to the physical-world gestures of pointing your arm and hand or using a laser pointer to direct focus.

Although I have just released a 3D Pointer tool, I also wanted to provide a simple, if functionally limited, alternative for the do-it-yourselfers (or cheapskates) out there.

Here’s a quick tutorial on how to make your own laserpointer for the SecondLife & OpenSim immersive environments:

1.  First, create (rez) a cone.

Laser Pointer 1

2. Now, increase the SIZE of the cone on the Z axis (its height) to about 5 meters.

Laser pointer height 2

3. With the mouse, RIGHT CLICK on the object, and from the Pie Menus select MORE>, ATTACH>, HEAD>, NOSE.  This will attach the object to your nose.  It will replace anything you currently have attached to your nose (for example, a specialized avatar component).

Laser Pointer attach 3

4.  Laugh at how silly you look.

Laser Pointer funny 4

5.  Right click on the object and select EDIT.  Adjust the ROTATION of the Y AXIS to 90 degrees.  The object should now be pointing forward (still silly looking).

Laser Pointer rotate 5
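If you are curious why a 90-degree rotation on the Y axis does the trick, here is a quick sketch of the math, assuming the usual conventions that the cone’s long axis is its local +Z and that +X is the avatar’s forward direction; rotating 90 degrees about Y carries +Z onto +X:

```python
import math

def rotate_about_y(v, degrees):
    """Rotate a 3D vector (x, y, z) about the Y axis by the given angle."""
    x, y, z = v
    t = math.radians(degrees)
    return (x * math.cos(t) + z * math.sin(t),
            y,
            -x * math.sin(t) + z * math.cos(t))

cone_axis = (0.0, 0.0, 1.0)           # the cone's long axis points straight up
print(rotate_about_y(cone_axis, 90))  # -> (1.0, 0.0, ~0.0): now pointing forward
```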

6.  Finally, let’s reposition it.  Notice the BLUE, GREEN, & RED axis arrows running through it?  Click & hold the RED arrow and slide it forward.  Then click & hold the BLUE arrow and slide it down so the object is almost level with your chest.

Laser pointer adjust 6

YOU ARE DONE.  Try it out by moving your mouse around.  Notice how the pointer now points toward wherever your mouse is located.  You should probably rename the object (so you can find it more easily in your inventory) and maybe change its color or texture.  Just detach it when finished.  If you want to use it again, RIGHT CLICK on the object in your inventory & choose WEAR.

For those of you who might need a more flexible pointer (one not attached to your avatar, that can easily point out exact positions within the environment, and that multiple people can easily share), you might check out my newly released 3D Pointer.

