Good uses for the VR metaverse, #2

Another Tumblr repost. More thoughts on things multi-contributor VR could be used for.


 

Dynamic news reconstruction

You have a group of users managing a stand-alone world, with a one-way viewing “plane” through which observers can watch what’s going on and submit contributions to the editors as a newsworthy event happens.

Let’s use the first Ferguson protests as an example.

The room is black and of indeterminate size. There is a series of glassy squares in a circle overhead, as might be arranged in a hospital surgical gallery. Multiple channel-rooms view through those virtual windows, but no rooms sit physically behind them in the main reconstruction room. They are one-way portals for imagery.

The room has no distinct light sources yet and the avatars of those present are rendered in flat shadowless colour.

First someone pulls the most recent satellite image of the area they can get, probably from Google Maps if not a separate public repository. Someone quickly maps that onto a geographic height map to get the lay of the land. The observers get an overview showing items being event-tagged into it, but individual editors are already subjectively zooming back and forth through the document timeline, scaling and applying incoming photos and videos to the physical model.
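To make that concrete: the height-map step is nothing exotic. Here is a minimal sketch of the kind of routine an editor’s tooling might run, assuming the elevation data has already been fetched as a 2D grid of metres; the function and its mesh format are illustrative, not any particular engine’s API.

```python
import numpy as np

def heightmap_to_mesh(elevation, cell_size=1.0):
    """Turn a 2D elevation grid (metres) into vertex/face/UV arrays
    suitable for handing to any triangle-mesh renderer."""
    rows, cols = elevation.shape
    # One vertex per grid cell: x east, y north, z up.
    xs, ys = np.meshgrid(np.arange(cols), np.arange(rows))
    vertices = np.column_stack([
        xs.ravel() * cell_size,
        ys.ravel() * cell_size,
        elevation.ravel(),
    ])
    # Two triangles per grid square.
    faces = []
    for r in range(rows - 1):
        for c in range(cols - 1):
            i = r * cols + c
            faces.append((i, i + 1, i + cols))
            faces.append((i + 1, i + cols + 1, i + cols))
    # UV coordinates in [0, 1] so the satellite photo drapes straight on.
    uvs = np.column_stack([xs.ravel() / (cols - 1), ys.ravel() / (rows - 1)])
    return vertices, np.array(faces), uvs
```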

Nearby buildings are crudely extruded up out of the ground and detailed with projections taken from the hundreds of photos and snippets of footage. Locations in 3D space are calculated from converging pieces of footage; offsets between individual time-stamps are noted and corrected for. What starts out as a crude mishmash quickly becomes a four-dimensional scrapbook of stills, video and other data.
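The “locations calculated from converging footage” step is ordinary triangulation. A rough sketch, assuming each camera’s position and viewing ray have already been estimated from its footage; a real pipeline would refine this across many views at once (bundle adjustment), but two rays are enough to show the idea.

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Closest point to two camera rays (position p, direction d).
    Where two pieces of footage converge on the same object, this is
    the best single-point estimate of where that object sits in 3D."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    b = d1 @ d2
    w = p1 - p2
    denom = 1 - b * b
    if abs(denom) < 1e-9:          # near-parallel views: no depth information
        return None
    # Ray parameters minimising the gap between the two rays.
    t1 = (b * (d2 @ w) - (d1 @ w)) / denom
    t2 = ((d2 @ w) - b * (d1 @ w)) / denom
    # Midpoint of the shortest segment joining the rays.
    return ((p1 + t1 * d1) + (p2 + t2 * d2)) / 2
```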

An extra wave of specialist editors joins the reconstruction as word of the story spreads, studying faces and outfits and noting who is present and where at each stage of the event. Police vehicles are identified, their VR renders tagged with information on makes, models and public histories. Multiple footage angles on the rifle-wielding police allow a targeting overlay to be added, calculating the line of sight down each gun barrel (in the form of an extrapolated probability cone). The arcs of tear-gas canisters are calculated back, using the in-world physics engine, to their probable launch points and the officers at those locations.
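The canister back-calculation is simple projectile motion run in reverse. A hedged sketch, ignoring air drag and assuming two timestamped positions have already been picked off the arc in the reconstruction; the 1.5 m launch height is an illustrative guess at a shoulder-fired launcher.

```python
import numpy as np

G = np.array([0.0, 0.0, -9.81])    # gravity, z up

def launch_point(r1, t1, r2, t2, launch_height=1.5):
    """Given two timestamped positions (r1 at t1, r2 at t2) on a
    canister's arc, run drag-free ballistics backwards to find where
    it last crossed launch height on the way up."""
    dt = t2 - t1
    # Velocity at the first observation, from the two-point difference.
    v1 = (r2 - r1 - 0.5 * G * dt**2) / dt
    # Step back s seconds from t1: r(t1 - s) = r1 - v1*s + 0.5*G*s^2.
    # Solve the vertical component for the launch height.
    roots = np.roots([0.5 * G[2], -v1[2], r1[2] - launch_height])
    back = [s.real for s in roots if abs(s.imag) < 1e-9 and s.real > 0]
    if not back:
        return None
    s = min(back)                  # normally one positive root: the launch
    return r1 - v1 * s + 0.5 * G * s**2
```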

Someone tries to add information about guerrilla warfare tactics to the map, but it’s spotted and wiped off quickly as irrelevant and dangerous. Someone petulantly starts a #conspiracy channel for viewers to opt into.

As the confrontation moves, the piecemeal map grows in different directions. Some people dedicate themselves to sitting on the livestreams, matching each stream’s broadcast location and viewing angle as best they can as it moves. Others follow them, popping up crude representations of the buildings to fit and making minor positional adjustments to this smear of imagery through virtual space-time. Others use events in the most uninterrupted footage as a solid backbone to synchronise their own finds to.
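Correcting those clock offsets between clips can be at least part-automated wherever two recordings share a loud event. A sketch using cross-correlation of audio envelopes, assuming both tracks have already been decoded and resampled to a common rate:

```python
import numpy as np

def clock_offset(backbone, clip, rate):
    """Estimate how many seconds `clip` lags the `backbone` recording,
    by cross-correlating their audio envelopes. Both are 1-D arrays
    sampled at `rate` Hz; a shared loud event (a bang, a chant) is
    what actually does the work."""
    # Use energy envelopes so differing microphones matter less.
    a = np.abs(backbone) - np.mean(np.abs(backbone))
    b = np.abs(clip) - np.mean(np.abs(clip))
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)
    return lag / rate              # positive: clip starts later
```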

Partly a multimedia record unlike any other, partly a vicarious experience of the event itself as it happens, in a depth never before achieved. With time the world-document is refined further, and new pieces of record are submitted and patched in. False evidence shows up easily when it can’t be matched against the events in dozens of other synchronised and overlapping pieces of record.

If it seems extreme, think of this: making a cheap VR headset from a smartphone and some cardboard is now readily possible. You can use one of these to watch 3D videos.

Some day very soon someone will be livestreaming 3D video from inside an assaulted protest like the one at Ferguson, or inside the next Gaza bombardment, or at the crash site of a freshly downed 747. It will no longer merely be video from the perspective of someone on the scene; it will be their perspective absolute. Your head will be jerked to one side as theirs turns from the splinters of flying concrete, your eyes will fall to the ground as that citizen coughs in tear-gas, you will stagger jelly-legged with them in a field full of fresh corpses. You will be in their shoes, riding helpless in their body through these events.

The two-dimensional, individually experienced, static internet is not capable of doing that justice.

What might you, the viewer, get from this? Well, if you were in one of the viewing channels conceptually overlapping each other in the gallery, it would probably depend on the channel topic. One might have a channel moderator zooming the viewing angle back and forth, looking for human rights abuses. Another might be a general chat channel filled with the same disgusted and shocked reactions found on any other social media, its viewing windows scaled huge to accommodate the heaving mass of avatars. Another might be doomsday preppers, indulging fantasies of perceived SS tactics or false-flag rationales from behind anonymised white-noise personas. More channels might be proactive, hunting down new sources of information and filtering them for submission to the document. Some humanist channels might just be filled with those who need somewhere to break down and weep at the sight of it all.

In short, this is another way an interactive and unrestrained virtual environment could be used as no other medium currently allows, to achieve results faster and in more detail than before, and to leave the future an immersive analytical document instead of a collection of disparate images and messages.

Good uses for the VR metaverse, #1

Reposting from Tumblr, in response to a post about Lucidscape’s intent to produce a Metaverse framework for distributed physics simulation.


 

imagine Grand Theft Auto V as the Windows of the future

A combination of getting shot by violent thugs and having everything locked down by a giant corporation? Sounds delightful.

Seriously, a fascinating idea, but an odd approach to my mind. If I understand it right, it’s a distributed “back end” that handles the physics interactions of a virtual world containing all connected services.

I’m not sure I like it being seamless. It feels like it enforces conventional 3D spatial constraints on an inherently hyperspatial data space. If a VR world engine can’t handle non-Euclidean spaces, it’s a pale imitation of what a virtual web could be.

It also breaks a basic rule of technological advancement: does it do something better than current methods? This seems to be pushing something people want to exist into being out of bloody-mindedness and fantasy rather than any real improvement.

What do I mean? Here’s an example: how do you switch tabs in a virtual world? Every major browser does this now, and it was a wonderful step up from having dozens of different browser windows open.

Virtual worlds represent 3D spaces with avatars within them. It’s a live environment, an interactive world without the ability to pause. At best you could leave an avatar in some sort of AFK mode, get a bot to run it, or leave an answering-machine message? If you dare limit it to one instance at a time, it will be dead in the water no matter how pretty.

What I’m saying is that no matter how well it’s rendered or how faithfully the physics are simulated, it’s still a worse way of presenting bulk information. It actively detracts from it.

If you want the fantasy of a VR metaverse to take off, find ways of using it that are better than using a browser or file explorer for the same ends. And do so in a way that can be applied to the existing data that’s out there.

Here’s an idea: a good virtual disk manager. You’ve got an extra dimension, so use it. I want a render of my hard drive floating in front of me. I want to be able to display the physical locations of all the files on the disk, overlay the access times and highlight the bad sectors. I want all the partitions as clear borders, all the fragmented files pulsating in time with their individual fragments and the related I/O split counts bucking upward like a graphic equaliser. I want the wasted area to glow with the void of space and the wireframe case to strobe softly with refreshing SMART data tracing out in graphs on either side of it. I want the sort of tool that would be an incomprehensible mess in 2D but gives you every available facet of the dataset AT A GLANCE when rendered in VR. THAT is what I’m talking about! THAT IS THE POTENTIAL OF VR!
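On the data side, none of that needs anything exotic. As a sketch of the sort of mapping such a tool would do: lay logical block addresses out along a flat spiral, a crude stand-in for platter geometry, so every file fragment gets a stable 3D position in the render. The extent list could come from something like `filefrag -v` on Linux; the renderer on the other end is left hypothetical.

```python
import math

def block_to_xyz(lba, total_blocks, turns=40.0, radius=1.0):
    """Map a logical block address to a point on a flat spiral, a crude
    stand-in for physical platter layout, so a file's fragments can be
    drawn where they actually live on the disk render."""
    f = lba / total_blocks                 # 0.0 (hub) .. 1.0 (rim)
    angle = 2 * math.pi * turns * f
    r = radius * f
    return (r * math.cos(angle), r * math.sin(angle), 0.0)

def fragment_markers(extents, total_blocks):
    """`extents` is a list of (start_lba, length) pairs for one file,
    e.g. parsed from `filefrag -v` output. Yields one 3D segment per
    fragment for whatever (hypothetical) renderer sits downstream."""
    for start, length in extents:
        yield (block_to_xyz(start, total_blocks),
               block_to_xyz(start + length, total_blocks))
```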

The saddest thing for me is logging into something like Second Life and seeing houses with stairs and doors. Stairs you don’t need to climb, because there’s no real gravity, where you have no actual feet to touch a ground that doesn’t exist. Doors meant to keep people out of an imaginary space that need never have been rendered or loaded if it was really off-limits.

Stop rendering your avatar in one location and render it in another. The need for movement is bypassed.

Don’t block the view of something, just decline to render it at all.

Don’t step into one room from another, step out of one existence into a different one.
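In code, that philosophy is almost insultingly simple. A toy sketch with a made-up scene API: “moving” between spaces is a pair of scene-membership changes, not a path through geometry.

```python
class Scene:
    """Made-up scene type: only its members get rendered in it."""
    def __init__(self, name):
        self.name = name
        self.members = set()

class Avatar:
    """Toy model: an avatar 'is' wherever a scene agrees to render it."""
    def __init__(self, name):
        self.name = name
        self.scene = None
        self.position = None

    def step_out(self, destination, position):
        # No walking, no doors, no stairs: stop being rendered in one
        # existence and start being rendered in another.
        if self.scene is not None:
            self.scene.members.discard(self)
        self.scene = destination
        self.position = position
        destination.members.add(self)

# Usage: crossing between worlds is one call, not a traversal.
lobby, archive = Scene("lobby"), Scene("archive")
visitor = Avatar("visitor")
visitor.step_out(lobby, (0.0, 0.0, 0.0))
visitor.step_out(archive, (12.0, 4.0, 0.0))
```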