Press "Enter" to skip to content

Through the Looking Glass – holographic display hardware is great, but it’s not enough

Review Four years ago in a feature for The Register, I wrote about the latest technologies for three-dimensional photography and videography. At the time, the tech required an array of tens to hundreds of cameras, all pointed inward at a subject, gathering reams of two-dimensional data that was immediately uploaded to the cloud for hours of post-processing, image recognition, feature extraction, and assembly into three- or four-dimensional media.

Today, I can fire up an app on my whizzy new iPhone 13 Pro, point its onboard LiDAR sensor at a subject, and record – in four dimensions – and in real time. That’s enormous progress – a real revolution in sensors that gives our devices the capacity to capture depth. But, as I noted in the closing paragraphs of that feature, capturing depth does not mean that you can display it. All of our screens live in Flatland – everything projected onto a surface of zero depth. Even the lively four-dimensional worlds of computer gaming still squash themselves against the screen. There may be depth within those virtual worlds, but it’s not presented that way to our eyes.

Three-dimensional displays have been a bit of a holy grail of computing. Virtual- and augmented-reality systems use stereo pairs, projecting a slightly different image into each eye, but they’re still too big and clumsy to be widely adopted, even for professional uses. Far better to use something that looks like a screen – but with depth. 3DTV had a go at that a decade ago, but fell into the chicken-and-egg abyss of little content meaning slow adoption, meaning even less content, meaning … extinction.

There has to be a better way

On a tour of a friend’s lab at PARC a few years ago (yes, although a shadow of its former self, PARC is still going), I caught a glimpse of something that looked a bit like the marriage of a screen and a small fishtank. In it swam a three-dimensional simulation of a fish – presented in 3D! That was my first encounter with one of the brand-new 3D displays from Brooklyn-based Looking Glass Factory.

The core of the technology behind Looking Glass Factory’s displays dates back a century, when innovations in lenticular printing placed a series of lens ridges above a printed surface, refracting different portions of the print to the viewer as they changed their viewing position relative to the lenses. Lenticular printing provides an illusion of depth – leading to a century of artwork in questionable taste.

Revived for the 21st century, lenticular displays place an LCD panel behind the lenses, then mathematically distort the image on the panel, using depth information, so that the right bits of it are mapped to the right angles as the panel's light refracts through the lenses. (As I understand it, this mapping is part of Looking Glass Factory's heavily patented ‘secret sauce’.) The result is something that looks very much like a photograph – until you realise that you can ‘look around’ within it.
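Looking Glass keeps its exact mapping to itself, but the general idea can be sketched. Given one photograph and its depth map, a naive depth-image-based-rendering pass can synthesise a fan of slightly offset viewpoints by shifting each pixel horizontally in proportion to its depth; the display then interleaves those views under the lenses according to its calibration. The sketch below is illustrative only: the function name, view count and shift amounts are my own assumptions, not the company's method.

```python
import numpy as np

def synthesise_views(rgb, depth, n_views=8, max_shift_px=12):
    """Naive depth-image-based rendering (illustrative, not Looking Glass's method).

    rgb:   (H, W, 3) uint8 image
    depth: (H, W) floats in [0, 1], where 1.0 means "near the camera"
    Returns n_views warped copies of the image, swept left to right.
    """
    h, w, _ = rgb.shape
    cols = np.arange(w)
    views = []
    for i in range(n_views):
        t = 2.0 * i / (n_views - 1) - 1.0          # virtual camera position, -1 .. +1
        out = np.zeros_like(rgb)
        for y in range(h):
            # Nearer pixels shift further than distant ones, creating parallax.
            shift = (t * max_shift_px * depth[y]).astype(int)
            out[y, np.clip(cols + shift, 0, w - 1)] = rgb[y]
        views.append(out)                           # holes from disocclusion stay black
    return views
```

A real renderer would also fill the disoccluded holes and interleave the views per-subpixel using the panel's measured lens geometry, which is presumably where the patented part lives.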

In January, Looking Glass Factory launched a Kickstarter campaign for its newest and most affordable device, called “Portrait” – a 20cm-diagonal lenticular display – garnering over US$2.5 million in pre-orders.

I borrowed one from a friend and have played with it for a month.

While it’s pleasing to see so much progress being made on depth displays, it’s disappointing to have such a promising device so frequently hamstrung by weak software support.

Performance issues

Although it does have an embedded Raspberry Pi 4 and can run standalone – working its way through a playlist of pre-loaded animations – the Portrait really shines when connected to a computer via its HDMI and USB ports. The computer recognises it as an auxiliary display, but the lenticular elements make it look blurry and not very useful as a display extender. The USB connection lets the computer carry on a data conversation with the Portrait, mediated by a “Holoplay Service” that must be running in the background.

Because of the calculations required to distort images so they display correctly on the Portrait, it really helps if the computer driving the display has a beefy GPU. Where my GeForce RTX 2070 Super-powered desktop purred right along, my 2015 laptop struggled – and gave me the impression that the software itself had locked up. (It hadn’t, but it was running at 1/100th the speed of my desktop machine.)

Once connected, it’s a matter of using the Holoplay Studio app to get media onto the device. It’s easy enough to create still depth images – both iOS and Android smartphones offer a “portrait” mode in their onboard camera apps, and photos snapped in that mode contain the usual image data plus a “depth map” providing three-dimensional metadata. Load one of those photos into the app, et voilà!

(Just remember to be careful while uploading that photo from your smartphone, as many tools will strip out this ‘extraneous’ metadata. If that happens, you’ll be left with a flat photo that Holoplay Studio will refuse to load.)

Video is somewhat more complicated. Unless you have a depth-map-equipped smartphone (on iOS, that’s anything with either Face ID or a LiDAR sensor), shooting four-dimensional video will require a depth camera such as a Kinect, Azure Kinect, or Intel RealSense. Once captured, bringing it into Holoplay Studio should be as easy as drag-and-drop. But these four-dimensional video images are incredibly compute-intensive, so it’s best to save this work for a machine with serious GPU acceleration.
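For the RealSense route, grabbing synchronised colour and depth frames is the straightforward part; what you do with the RGB-D stream afterwards depends on the capture tool you pair with Holoplay Studio. A minimal sketch using Intel's pyrealsense2 binding (the resolutions, frame count and the hand-off at the end are placeholders):

```python
import numpy as np
import pyrealsense2 as rs  # Intel's official Python binding for RealSense cameras

pipeline = rs.pipeline()
config = rs.config()
# Ask for matched 640x480 depth and colour streams at 30 fps.
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
pipeline.start(config)

# Re-project depth pixels into the colour camera's frame so the two images line up.
align = rs.align(rs.stream.color)

try:
    for _ in range(300):  # roughly ten seconds of frames at 30 fps
        frames = align.process(pipeline.wait_for_frames())
        depth = np.asanyarray(frames.get_depth_frame().get_data())  # uint16 depth values
        color = np.asanyarray(frames.get_color_frame().get_data())  # uint8 BGR pixels
        # ... hand the RGB-D pair to whatever encodes it for the display ...
finally:
    pipeline.stop()
```

The alignment step matters because the depth and colour sensors sit a couple of centimetres apart; skip it and the depth map produces visible fringing once the footage is re-projected in 3D.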

So close, yet so far away

Given my own background in web-based real-time 3D, I really wanted to be able to create a web-based 3D animation that I could view in … well … 3D. Here I hit a series of roadblocks: code that had once worked but had since been abandoned, and closed-beta repos that hadn’t been touched in years, offering code samples that promised much yet delivered little.

Such real-time web-based four-dimensional visualisation looks like it should be possible with the Looking Glass Portrait – given the software interfaces the company does provide for Blender, Unity and Unreal – but doesn’t appear to be a priority for the firm. This means that the web – with its endless applications and creativity – is still a world apart.
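The rendering side isn't the mystery: displays in this family typically consume a "quilt", a single texture tiled with dozens of renders of the same scene taken from camera positions swept along a horizontal baseline. Something like the sketch below (written in Python for brevity; the 8x6 layout, view size and baseline are placeholder values, not the Portrait's actual calibration) is roughly what a web renderer would need to produce every frame and, presumably, hand off via the Holoplay Service:

```python
import numpy as np

def make_quilt(render_view, view_w=384, view_h=512, cols=8, rows=6, baseline=0.3):
    """Tile cols*rows renders into one quilt image (layout values are placeholders).

    render_view(cam_x) -> (view_h, view_w, 3) uint8 array, rendered from a camera
    translated to x = cam_x and aimed at the scene's convergence plane.
    """
    quilt = np.zeros((rows * view_h, cols * view_w, 3), dtype=np.uint8)
    n = cols * rows
    for i in range(n):
        cam_x = (i / (n - 1) - 0.5) * baseline      # sweep from -baseline/2 to +baseline/2
        r, c = divmod(i, cols)
        quilt[r * view_h:(r + 1) * view_h,
              c * view_w:(c + 1) * view_w] = render_view(cam_x)
    return quilt
```

Doing that in WebGL at 30 or 60 quilts per second, with the display's real calibration applied, is exactly the bridge the web is currently missing.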

That’s quite an own goal. Google’s recent Project Starline – using its own version of the display technologies within the Looking Glass Portrait – shows us how much we long for communication that goes beyond the flat screen. Two years confined as heads-in-boxes on endless Zoom calls makes us ache for something more substantial – something with a bit more depth. It’s easy to imagine a combination of smartphone plus Portrait plus web revolutionising video calls, but without a lot of developer software support it’s very difficult to make that happen.

Looking Glass Portrait brings the experience of depth to the desktop. Right now, that’s little more than a curiosity – barely useful, and rarely used. As we’ve learned over and over in the history of computing, new and non-standard hardware needs to be supported with excellent developer resources, so that some enterprising individual can build a killer app.

The hardware is ready. Although it should be the easy bit, the software is woefully lacking – and until that gets fixed I can’t see how the Portrait ever achieves its intriguing potential. ®

source: The Register