Lab: 360 panoramic image capture coming to the viewer – soon!

It All Starts With a Smile; Inara Pey, October 2016, on Flickr The ability to take 360-degree panoramic shots is to be integrated into the viewer, with access via the snapshot floater (Image location: It All Starts With A Smile – blog post – static image produced with the Illiastra Panoramic Camera HUD) – click the image to see it in 360-degree format

Just as I was working on an article about the Illiastra Panoramic Camera and producing static / interactive 360-degree images of Second Life, I attended the Third Party Viewer Development meeting on Friday, October 7th. During that meeting, Troy Linden announced that the Lab are working on incorporating the capability to generate 360-degree snapshots directly into the viewer.

The new capability is to be called 360 Snapshot, and will be integrated into the snapshot floater (alongside additional snapshot improvements contributed by TPV developer NiranV Dean – although these sit outside of the 360-degree feature).

In essence, the snapshot floater will act as a 360-degree camera rig, allowing you to position your avatar almost anywhere in-world and capture a full 360-degree image, stitched together via back-end processing at the Lab. The image will then be shareable via the SL Share feature, and should be available for download to your local drive.

The work is far enough advanced that a test viewer (not a project viewer) will be appearing quite soon, with the Lab keen to get the capability into the hands of users to try. However, the important thing to note is that it will be a test version – not a final, polished solution right out of the gate. The idea is to give users an indication of things like picture quality and the approach taken, and to allow the Lab to examine exactly how much additional functionality they need to consider / include in the capability.

Initially, the stitching element will be absent; users will have to take care of that themselves after saving the image set to their local drive. There are also some potentially significant issues the Lab want to look at in detail through the use of the test viewer.

In particular, there is the question of how the capability will interact with the simulator Interest List: will items effectively behind your avatar’s field of view update correctly in order to be properly imaged by the system? If not, the Lab will need to look into how things might be adjusted. The idea here is that by carrying out such tests publicly, the Lab can work with interested users and photographers to identify potential limitations and problem areas in the approach, and so hopefully address them.

In commenting on the project, Oz Linden acknowledged that there are HUD systems available which have been inspirational, and that much of the driver behind this capability is the desire to give users a simple “point and shoot” interface.

There is no indication yet of the limitations which might be placed on the system, such as image resolution – hence, again, why the capability will first appear in a test viewer rather than a project viewer. Nor is the Lab committing to any time scales for the work, other than that the test viewer is liable to appear reasonably soon; there is no estimate of how long the project will take to reach release status once the test viewer does appear. The focus is on a step-by-step development of the capability.

Note: the audio clips here are extracts of salient points from the discussion on the 360 Snapshot capability. To hear the full discussion of the capability, please listen to the video of the Third Party Viewer meeting, starting at the 08:49 point.


The Illiastra Panoramic Camera: 360-degree images of Second Life

Illiastra Camera Test; Inara Pey, October 2016, on Flickr A static panoramic view of our home island produced using the Illiastra Panoramic Camera and the Hugin software.

Over the weekend I received a generous gift from Illiastra Ascendent (NVZN, aka James Reichert in the physical world), who sent me the Illiastra Panoramic Camera (MP link) to try out in Second Life.

This is a HUD-based system which can be used to produce a set of images of an in-world scene, which can then be stitched together using suitable software to create a static 360-degree view. These can in turn be uploaded to Facebook or websites such as VRchive and YouTube, as scrollable, 360-degree views of a location.

The system comprises two camera HUDs, “basic” and Pro, together with a photosphere for viewing captured images in-world. The difference between the two is that the “basic” model uses 8 images to create a 360-degree panorama, while the Pro version takes a total of 26 (including directly above and below you), producing either a panoramic view from 24 images or a spherical view using all 26.
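For those curious about the geometry, the Pro camera’s 26-shot pattern can be sketched as three rings of eight headings plus the two polar shots. The exact pitch angles the HUD uses aren’t documented, so the 45-degree figure below is an assumption for illustration only:

```python
# Sketch of the Pro camera's 26-shot pattern: three rings of 8 shots
# (level, angled up, angled down) plus one straight up and one straight down.
# The 45-degree pitch for the angled rings is an assumption, not a value
# documented by the HUD.

def pro_shot_headings(shots_per_ring=8, ring_pitches=(0, 45, -45)):
    """Return (yaw, pitch) pairs, in degrees, one per shot."""
    step = 360 / shots_per_ring  # 45 degrees between shots in a ring
    headings = [(yaw * step, pitch)
                for pitch in ring_pitches
                for yaw in range(shots_per_ring)]
    headings.append((0, 90))    # overhead sky shot
    headings.append((0, -90))   # ground shot
    return headings

shots = pro_shot_headings()
print(len(shots))  # 26 shots in total
```

The “basic” camera is simply the first ring on its own: `pro_shot_headings(ring_pitches=(0,))` minus the two polar shots.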

Producing your static panoramic image is a 2-step process:

  • Capturing your in-world shots using the camera
  • Stitching them into a panoramic mosaic using a suitable software application.

Once this is done, you can proceed to prepare them for 360-degree viewing on Facebook, VRchive, etc. Illiastra provides a comprehensive set of videos covering producing your panoramic shots, stitching them together, and uploading them to Facebook, which I highly recommend.

For the rest of this article, I’ll take you through producing a panoramic shot, uploading it to VRchive, and converting it to a 360-degree video for YouTube.
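It may help to know that Facebook, VRchive and YouTube all treat the stitched result as an equirectangular image, in which yaw and pitch map linearly onto pixel position. This is the standard projection for 360-degree media, not anything specific to the Illiastra HUD; a minimal sketch of the mapping:

```python
def equirect_pixel(yaw, pitch, width, height):
    """Map a viewing direction (in degrees) to an equirectangular pixel.

    yaw:   -180..180, with 0 at the centre of the image
    pitch:  -90..90,  with 0 on the horizon
    Equirectangular images are twice as wide as they are tall, so a full
    sphere fits exactly (360 degrees across, 180 degrees down).
    """
    x = (yaw + 180) / 360 * width
    y = (90 - pitch) / 180 * height
    return round(x), round(y)

# Straight ahead lands dead centre of a 4096 x 2048 panorama:
print(equirect_pixel(0, 0, 4096, 2048))  # (2048, 1024)
```

This 2:1 aspect ratio is why the stitched output from Hugin needs to cover the full 360 x 180 degrees before the hosting sites will display it as a scrollable sphere.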

Taking the Shots

There are some basic steps to follow when preparing to take shots using the system:

  • Position yourself at the centre of the location you want to capture in a 360-degree image. Be careful where you select – get too close to buildings or trees, etc., and they could end up dominating part of the view.
  • Set your preferred windlight and daytime settings.
  • Make sure you freeze the clouds – you’ll be taking up to 26 images which will need to be stitched together, and moving clouds could make that a bit of a bugger to do. Use Menu > World > Environment Editor > Sky Presets > Edit Presets or PhotoTools > Clouds and check the scroll lock check boxes.
  • Make sure the viewer’s camera is set to the default view angle, FOV and focal length.
  • Hide yourself from view – use the supplied alpha mask after removing all attachments, or use something like a Vanish gesture. Otherwise, the top of your head will be in every shot.
  • Tap ESC on your keyboard to free your camera from any other influences acting upon it.
Basic Camera HUD: closed (l) and in use (r)

Once you’re set, click the camera HUD and your camera will rotate and position itself for the first shot. Use the snapshot shortcut CTRL-` (the tilde key) to save the image – you’ll be prompted for a file name and location on your computer for the very first shot after the HUD is attached.

The Pro version of the camera produces 24 shots using the left / right keys (+ CTRL-` for image capture), the chevrons denoting the progress through upper / lower sets of 8 images apiece. The up and down buttons position the camera for taking sky / ground shots respectively, which can be used to create spherical views.

When you’ve saved the shot – effectively the first frame of your panoramic image – click the right arrow on the HUD to advance the camera to the next point (indicated in green on the HUD), and take another snapshot (CTRL-`). You won’t be prompted for a file name for this and the remaining frames – simply progress around the HUD, capturing a snapshot at each of the highlighted views in turn.

If you are using the “Basic” camera, you’ll be taking a total of 8 shots – once around the HUD. If you are using the Pro camera, you will be taking 24 shots around you – that’s 3 times around the HUD clicking the right button, giving you 8 horizontal shots, 8 angled upwards, and 8 angled downwards – just follow the prompts on the HUD. When you’ve taken all 24, click the UP arrow on the HUD to capture an overhead view, and the DOWN arrow to capture a shot of the ground under your feet. Again – remember to press CTRL-` to save each image.

Note that after prompting you for a file location / name for the very first image ever captured using it, the HUD will automatically save any subsequent shots to the last location on your hard drive you used to save images via the snapshot floater.

Producing your Panoramic Image via Hugin

Once you have taken your shots, you’ll have either 8 (“basic” camera) or 26 (Pro camera) images of your location. These now need to be stitched together. GIMP or Photoshop can be used for this by those proficient with them; otherwise, Illiastra recommends the Hugin Panorama Stitcher, available through Sourceforge.net, which is what I opted to use.

With Hugin installed and launched, proceed as follows:

  • In the Assistant tab, click on Load Images…
    • If you have been using the “Basic” camera, select all 8 of your shots
    • If you have been using the Pro camera, select the first 24 shots – do not include the final overhead sky shot or ground shot; these can be added later, if required.
  • A dialogue box will appear. Enter a value of 90 in the Horizontal Field of View (HFOV) box.
  • Click OK to load your images into Hugin – things will initially look a mess – don’t worry!
  • Click on the Align button to align your shots initially – this may take a while to process, depending on your system, the image resolution, etc., and the result may end up upside down. Again, don’t worry!
Loading and aligning your images in Hugin
  • When Align has completed, click on the Move/Drag tab and click Straighten. If your shots are upside down, enter 180 in the Roll text box and click Apply. Your images will further align and flip the right way up.
Straighten and correct inverted image, if required
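Incidentally, the 90 entered for the HFOV follows from the shot spacing: eight shots around a full circle sit 45 degrees apart, so a 90-degree field of view gives each frame roughly 50% overlap with its neighbours – overlap Hugin needs in order to find control points between frames. A quick check of that arithmetic (the overlap rule of thumb is general stitching practice, not something stated in the Illiastra videos):

```python
def overlap_fraction(hfov_deg, shots_per_ring):
    """Fraction of each frame shared with the next, for evenly spaced shots."""
    step = 360 / shots_per_ring          # angular spacing between shots
    return (hfov_deg - step) / hfov_deg  # shared portion of the field of view

print(overlap_fraction(90, 8))  # 0.5 -> 50% overlap between adjacent frames
```

By the same arithmetic, a narrower field of view would need more shots per ring to keep enough overlap for the stitcher.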


ReShade: post-processing Second Life in real time

Reshade: injecting shader effects into Second Life (or any game) in real time
ReShade: overlaying your SL world view with shader effects. In this image, I’m using the ReShade split screen option to show a real-time view of Oyster Bay, with the original windlight-based view on the left, and a preview of effects overlays (deliberately exaggerated for effect) on the right.

ReShade is an application which has been generating a bit of buzz around Second Life for the last couple of weeks. When installed on a Windows PC (7, 8 or 10), it allows you to overlay your Second Life world view with a wide range of shader-based effects, which can be used in screen captures for images, or when recording machinima to offer real-time visual effects. As it is an overlay system, it also works with OpenSim environments.

I first got to hear about ReShade from Whirly Fizzle at the start of August (she in turn learned of it through Caetlynn Resident), and I have been playing with the beta since then. Just how practical it might be is a matter of personal choice / want / ability with more traditional post-processing tools, etc. However, as version 1.0 launched on August 10th with some much-needed clean-up, I thought I’d offer a write-up on it, together with a few thoughts.

Remember, ReShade is a third-party application; LL and TPVs cannot provide assistance in using it – and nor can I. If you need help with it, please refer to the ReShade forums. As relatively new software, it can be a little buggy, and it doesn’t always run with the viewer when installed – again, if you have problems getting it going, neither viewer support teams nor I can really help.

A quick and dirty demo video showing how ReShade effects can be used in real-time machinima capture in Second Life

Set-up

Please ensure you’re logged-out of Second Life when setting-up ReShade.

  • Download the ReShade Framework ZIP file from the ReShade website.
  • Unzip the contents of the downloaded file to a location of your choice.
  • Navigate to the unzipped folder location, right-click on ReShade Mediator, and select Run As Administrator.
  • The Mediator will launch to display the configuration tab (shown below). This is the UI element used to apply and adjust effects.
  • You now need to create a profile for Framework to work with your viewer.
Your first step is to configure the Framework Mediator to recognise your viewer
  • Under the Profile section on the left of the Mediator, click Add. A file picker will open. Use it to navigate to your viewer’s installation folder.
  • Locate the viewer’s .EXE file in the installation folder, click it once to highlight it, and then click the Open button in the picker.
  • You will be returned to the Mediator panel, and the viewer name or “Second Life” should be displayed in the profile drop-down (below) – note that some TPVs may display their own name while others may display “Second Life”; it makes no difference.
  • Make sure OpenGL has been correctly identified. Click on the Confirm button to create a profile for your viewer.
When adding a viewer to ReShade Framework, note it may display as “Second Life” rather than the viewer’s name – this doesn’t prevent things from working
  • When Mediator has finished creating the profile, click Apply at the top right of the panel.

The set-up process is now complete. However:

  • Note that this has created two files in your viewer’s installation folder: reshade.fx and opengl32.dll. These must be deleted if you decide to remove ReShade from your PC.
  • Also, as I’ve found ReShade to be slightly flaky, before going any further, copy the opengl32.dll and save the copy in another location – I’ll explain why later.


Filtering my view of Second Life

In my explorations of Second Life, I’ve tended to rely purely on the tools available within the viewer for snapshots.

My main reason for doing this is because I’m no graphics artist; frankly, and as I’ve said before, Photoshop gives me a case of the heebie-jeebies within five minutes of starting it. I’ve no idea why, but I’ve come to accept that somewhere in the depths of my mind, I’ve created some kind of barrier which prevents me getting my head around it. I’ve fared better with GIMP, which I’ve used to create textures, alpha layers, my custom tiger tattoo and, more recently, normal maps (even if my PC doesn’t seem to particularly like the Windows 7 32-bit version of the normal map plug-in).

There’s also the fact that the viewer itself contains a wealth of options to enhance both photography and machinima. These aren’t always easy to find, given they can be buried in debug settings, etc. But a number of TPVs offer ways and means of accessing them, making the happy snapper’s life (i.e. mine), a lot easier.

Phototools, in particular, has been a huge boon in this, as it brings together so many of the photo-related options buried within the viewer under one floater for Firestorm users (and can, with some tweaking, be offered-up through other viewers, if you know what you’re doing). With vignetting now added to it, it is a very powerful tool. Dolphin, Exodus and Niran’s viewer (to name the three that I’ve also used, snapshot-wise) also pool various options for photographers under manageable menus.

Items such as Vincent Nacon’s optional cloud layers (again, an option in Firestorm, and which can be added to other viewers), also do an incredible amount to enhance in-world shots without the need for post-processing.

Keisei – “watercolour on canvas”

But one cannot swim in the shallows forever. So I’ve been playing around with GIMP’s in-built filters and a couple of plug-ins. I’m not about to claim I’m an expert or doing anything particularly clever – and I’m certainly not up to the standards of many who have mastered the subtle art of post-processing; but I thought I’d post a couple of the results of my fiddlings here.

I’ve also added the images to the appropriate sets in my Flickr pages.

Forgotten City – “pen and ink”
