reckless glitch design – benjamin kiesewetter

XXHL, Giga Tours et Mega Ponts

Tour Montparnasse AR


XXHL, giga tours et méga ponts

The temporary exhibition “XXHL, giga tours et méga ponts” ran at the Cité des Sciences et de l’Industrie in Paris until March 2021. Through various immersive and interactive exhibits, it took a look behind the scenes of these radical giants. Besides consulting and supporting other exhibits, my main focus lay on developing an Augmented Reality application for the Parisian Tour Montparnasse.



Cité des Sciences et de l’Industrie – Universcience


Temporary exhibition consisting of

multiple interactive exhibits

My Tasks
  • Unity 3d development
  • Augmented Reality consulting
  • asset production supervision
  • Shader development with Shader Graph
AR running on a tablet, augmenting the physical model of Tour Montparnasse in Paris

Tour Montparnasse AR

The Montparnasse Tower, a skyscraper built in Paris in the early 1970s, is a dark and gloomy building that is planned to be renovated into a light and friendly landmark of Paris.
The exhibit’s purpose was to explain the renovation plans to visitors of the exhibition. Multiple iPad tablets each filmed a physical model of Tour Montparnasse. The tablets would then track the position of that model and augment the video they took with a virtual 3d overlay showing the construction of the tower in the past and the planned redesign. The iPads were of course movable and controlled by a touch, tap and swipe user interface.
I took the role of one of the two software developers, so I had to be the geeky guy who was told by motion designers what to do and kept complaining about feasibility.
Besides that, I also had to develop a usability concept that goes beyond the usual click user interface.
AR app on a tablet, augmenting the physical model of the Montparnasse Tower in Paris

How to build up an Augmented Reality in Unity

Layering in Unity: (1) reality (video), (2) virtual background, (3) virtual 3d augmentation, (4) 3d info augmentation, (5) 2d top-level interface

hardware and software

The AR software we developed with Unity runs on multiple iPads arranged around a physical model of Montparnasse Tower and its environment. The tablets were hanging from the ceiling but freely movable, so viewers can focus on different points of interest.

AR running on a tablet, augmenting the physical model of Tour Montparnasse in Paris

reality - video layer (1)

We used the iPad cameras to track the physical model and determine the position of the user’s iPad relative to the real model by matching the video against a point cloud we had captured of the model. We then used that live video as the first layer of our AR application.

montparnasse AR unity layers - video (iPad tilted lower)
montparnasse AR unity layers - video

virtual background layer (2)

For tracking, the model needed high contrast, but when using the app we wanted to dull the background a bit to lead the eye to the important foreground. So we rendered a virtual 3d copy of the same model live, as a semitransparent, glowy, bluish layer on top of the video – a virtual backdrop and, so to say, the point where the magic begins.
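The idea behind this semitransparent backdrop is classic “over” alpha compositing. Here is a minimal, language-agnostic Python sketch of it; the bluish tint and the alpha value are illustrative picks, not taken from the actual shader:

```python
# Alpha-blend a bluish virtual backdrop over the camera video, per channel.
# Colors are (r, g, b) floats in 0..1; alpha controls how much the video
# is dulled. Values below are illustrative, not from the real app.

def blend_over(video: tuple, overlay: tuple, alpha: float) -> tuple:
    """Standard 'over' compositing: overlay drawn on top of video."""
    return tuple(alpha * o + (1.0 - alpha) * v for v, o in zip(video, overlay))

video_pixel = (0.8, 0.7, 0.6)   # bright, high-contrast camera image
blue_tint = (0.3, 0.5, 1.0)     # glowy bluish backdrop color
dulled = blend_over(video_pixel, blue_tint, alpha=0.4)
```

The same math runs per pixel in a shader; the effect is that the video stays visible but recedes, so the live-rendered foreground stands out.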

montparnasse AR unity layers - backdrop layer
layers on display: video and backdrop

virtual 3d augmentation layer (3)

The focus of the app was showing why the Montparnasse tower was a technological masterpiece when it was constructed in 1973. Yet its current state is a dark and dull tower, and it is planned to be overhauled into a more pleasant sight in the future. Our main augmentation layer was built up from live-rendered virtual 3d models showing the construction in the 1970s, the present state and the planned reconstruction of the tower. These highly complex, animated models were rendered in real time on top of the background – or rather into the background, for performance.

montparnasse AR unity layers - renovation layer
layers on display: video and current state
montparnasse AR unity layers - construction layer
montparnasse AR unity layers - foundation layer

3d info augmentation layer (4)

To give detailed information on what users are shown, interface elements hover in the virtual 3d space close to the sections of the tower they relate to, so users can pick the information that interests them. These interface elements appeared depending on which state or section of the tower was currently shown, so as not to overwhelm the user.

montparnasse AR unity layers - 3d ui layer
layers on display: video, renovation and 3d ui

2d top level interface layer (5)

On top of everything else there was a 2d interface that stayed fixed. It contained the most important (main category) interaction elements, used to toggle between the different states and sections of the tower: construction underground, construction of the tower, current state and planned renovation. This way we wanted to show users what is important and what they are currently looking at.
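The main-category logic above boils down to a tiny state machine: one fixed set of buttons, each switching the whole scene to one tower state. A sketch in Python (the real app was built in Unity; category names here are paraphrased from the text, not the actual identifiers):

```python
# Top-level category controller: one active tower state at a time,
# switched by the fixed 2d interface buttons. Names are illustrative.

CATEGORIES = ["foundation", "construction", "current_state", "renovation"]

class TowerStateController:
    def __init__(self):
        self.active = "current_state"  # what visitors see first

    def select(self, category: str) -> str:
        """Switch the whole scene to the chosen tower state."""
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self.active = category
        return self.active
```

In the real app, selecting a category would also drive which 3d models and which 3d info elements of layer (4) are shown.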

montparnasse AR unity layers - ui main categories layer
layers on display: video, renovation, 3d ui and 2d ui main categories
AR app - rendered in Unity - highly complex model of the internal structure of the Montparnasse tower

augmented reality and performance

0.03 seconds ... is long!

When it comes to Virtual Reality, a major problem is that a single computer needs to render 3d objects at 240 fps (frames per second) – 120 fps for each eye – in high resolution. This high frame rate is necessary to prevent motion sickness.

Augmented Reality apps can easily run at 30 fps, but they often need to render on the teeny tiny internal graphics chip of a tablet or phone. 

In comparison, some triple-A video games run at 30 fps for a single screen, on a huge PC graphics card.
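The numbers above translate directly into per-frame time budgets; the title of this section, 0.03 seconds, is simply the 30 fps budget:

```python
# Convert the frame rates discussed above into per-frame time budgets.

def frame_budget_ms(fps: float) -> float:
    """Time available to render one frame, in milliseconds."""
    return 1000.0 / fps

vr_budget = frame_budget_ms(120)  # per eye in VR: ~8.3 ms
ar_budget = frame_budget_ms(30)   # AR on a tablet: ~33.3 ms, i.e. ~0.03 s

print(f"VR per-eye budget: {vr_budget:.1f} ms")
print(f"AR tablet budget:  {ar_budget:.1f} ms")
```

So an AR tablet has roughly four times the per-frame time of a VR headset eye, but on a far weaker GPU.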

mobile GPU vs the render farm

Today’s motion design departments use so-called render farms – multiple high-end graphics cards working together – which is not applicable to single-device applications like AR or VR. And even then, rendering a single frame (image) may sometimes take minutes, because the frames are rendered into videos, so time is not much of an issue.

And motion designers think in Cinema 4D’s complex rendering options, in multiple layers with blend modes in After Effects, or even in pixel-perfect text fields in Illustrator.
Unreal Engine or Unity 3d, however, requires you to think in polygons inside a moving 3d space, with effects achieved by clever shader programming – a slightly different language that demands much higher precision in the preparation of 3d models and 2d assets.

I know Cinema 4D and all of Adobe‘s main products, so one of my main tasks when developing a VR or AR application in cooperation with motion designers is to translate their usually high expectations into doable Unity tasks and to (often repeatedly) ask for precise 2d and 3d assets, to lower the gigantic resource overhead for Unity 3d or Unreal.

highly complex 3d models of the internal structure of Montparnasse tower

every bolt and nut - technical 3d models

One of the biggest issues when showing architectural or technical models in VR or AR is their source. They often come from engineering offices, and so they tend to include every bolt, nut and doorknob.

In our case we received a 3d model containing every door, window, fire wall and wide-flange beam. And none of those were prepared for rendering in a game engine:
The meshes were high resolution but contained neither UV maps nor consistent normals. Autodesk CAD software or Cinema 4D can handle that, but game engines do a lot of tricks to trade detail for performance, like using textures instead of modeled screws. I will spare you the details; let’s just say it is like trying to show everything Tolkien ever wrote in a 20-minute short movie – impossible.
I also remember how happy the creative director was to have found a beautifully modeled 3d model of a 1970s Parisian Metro wagon. I had to tell him that even holding the iPad close to the model, the metro trains in our AR app would not appear larger than a millimeter or two. On the 11-inch iPad screen that is maybe 10–20 pixels. We ended up using textures of a Metro wagon on a simple box shape (see the lower right corner of the image).
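That 10–20 pixel estimate is simple arithmetic. Assuming a pixel density of 264 ppi for the 11-inch iPad Pro panel (Apple’s published spec for that model), an object that appears one to two millimeters tall on screen covers:

```python
# Back-of-the-envelope check for the metro wagon example above.
# Assumption: 264 ppi (pixels per inch) for the 11-inch iPad Pro display.

MM_PER_INCH = 25.4

def on_screen_pixels(size_mm: float, ppi: float = 264.0) -> float:
    """Pixels covered by an object that appears size_mm tall on screen."""
    return size_mm * ppi / MM_PER_INCH

print(on_screen_pixels(1.0))  # ~10.4 px for a 1 mm wagon
print(on_screen_pixels(2.0))  # ~20.8 px for a 2 mm wagon
```

At that size, no amount of modeling detail survives; a textured box looks identical and costs almost nothing to render.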
In the end we had to eliminate as much geometry as possible and simplify and clean up what was left, before using it in Unity and finally loading it onto the tablet’s graphics chip.
Since this took several iterations between the motion design department and the programmers, I developed a special editor script for Unity just to import the next iteration of the model and connect it to the game mechanics … and also to secretly hide some more parts of the model on import, like whole staircases that would have been impossible to see anyway, but that the motion design department insisted on keeping.
AR app on a tablet, augmenting the physical model of the Montparnasse Tower in Paris

augmented reality and user interaction

Motion designers are usually great at dramaturgy, keeping their audience hooked through a single presentation of any kind.
But this linear, presentation-focused way of thinking does not work well in multi-dimensional interactive exhibits.
When interacting with even a simple presentation app like this one, users may jump from topic to topic at any time and in a chaotic pattern. At best, this “channel zapping” should be caught by the app in a way that feels like an immediate response to the user’s input, in every possible situation.
Furthermore, it is impossible to predict the viewing angle during a presentation when the augmented model is viewed from multiple, moving points of view.

the small details

One example: we used four tablets, each looking at the scene from a different corner. While they were movable, their movement range was limited. The virtual scene was lit by a virtual sun, which meant that from at least one iPad you would always see only the dark side of the tower. We had several options:

Ignore it; put the sun at noon on the day of a solstice – 90 degrees down – which would look dull; or put the sun into a different ‘sweet spot’ for each of the four tablets, depending on its quadrant.

Finally we agreed on having the sun rotate, which makes the scene more interesting and illustrates a big problem of the Montparnasse tower: it takes the sun away from a vast area around it.

layers on display: video, renovation, 3d ui and 2d ui main categories

the crosshair

The interface concept I was given looked great in stills, but lacked intuitive interaction concepts based on user-centric design – all the more important since exhibition visitors are expected to immediately control exhibits they are seeing for the first time.
I tried to implement some of these concepts, and I needed to develop smooth transitions between all animation states.
At first glance, having 2d and 3d UI elements overlap may appear to be a problem, but it is not: users constantly move the iPad, bringing their element of interest to the empty center of the screen.
While most interactions ended up working smoothly, one major flaw made it into the final exhibition: all UI elements on the display, 2d and 3d, can be tapped, but the 3d elements can also be activated by focusing on them with the iPad. While moving the iPad across the scene – which may also happen unconsciously when following movement – users can accidentally activate other elements, which is meant as a feature but just appears buggy.
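One common way to tame this kind of accidental focus activation is a dwell timer: a 3d element only fires after the crosshair has rested on it for a minimum time, so sweeping the iPad across the scene no longer triggers elements in passing. A hypothetical Python sketch of the idea (the threshold value and names are illustrative, not from the shipped app):

```python
# Dwell-based activation: an element fires only after being focused
# continuously for `threshold` seconds. Sweeping across elements resets
# the timer, so brief accidental focus does nothing.

DWELL_SECONDS = 0.8  # illustrative threshold

class DwellSelector:
    def __init__(self, threshold: float = DWELL_SECONDS):
        self.threshold = threshold
        self.target = None
        self.elapsed = 0.0

    def update(self, focused, dt: float):
        """Call once per frame with the currently focused element (or None).
        Returns the element to activate this frame, or None."""
        if focused != self.target:
            self.target, self.elapsed = focused, 0.0  # new target: restart
            return None
        if focused is None:
            return None
        self.elapsed += dt
        if self.elapsed >= self.threshold:
            self.elapsed = 0.0  # fire once, then require a fresh dwell
            return focused
        return None
```

The trade-off is a slight delay before intentional focus activation, which is usually far less annoying than spurious triggers.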
AR app on a tablet, augmenting the physical model of the Montparnasse Tower in Paris