Katabasis

On the occasion of their retrospective exhibition at Casino Luxembourg, Raphael Siboni and Fabien Giraud (The Unmanned) wanted to create a 360° video controlled by an “artificial intelligence”.

The idea was to train a model to recognize objects from the artists’ previous films, then use it to detect those objects in a 360° video filmed inside the exhibition. A simple program would then choose one of the detected objects, track it for a while, choose another one, etc…

We ended up manually labelling objects (with a single category) and training a YOLOv5 model on them, which yielded pretty good results.
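Detection itself takes only a few lines with YOLOv5’s PyTorch Hub interface. A minimal sketch (the “best.pt” checkpoint and frame path are placeholders, not the actual project files):

import torch

# Load the custom-trained YOLOv5 weights through PyTorch Hub.
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")

# Run detection on a single frame of the 360° video.
results = model("frame_0001.jpg")

# Each row: x1, y1, x2, y2, confidence, class index.
print(results.xyxy[0])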

Initial tests using Unity

Story time!

Something that proved unexpectedly challenging was smoothly animating the camera between different orientations: playback had to be done through Python + libvlc, which expects Euler angles (yaw, pitch and roll) as input for the camera orientation.

The “numpy-quaternion” library seemed like a good candidate: it had everything I needed, including the slerp feature I was looking for (to animate between two rotations). However, the library’s author has pretty strong opinions on Euler angles:


Open Pandora’s Box. If somebody is trying to make you use Euler angles, tell them no, and walk away, and go and tell your mum. You don’t want to use Euler angles. They are awful. Stay away. It’s one thing to convert from Euler angles to quaternions; at least you’re moving in the right direction. But to go the other way?! It’s just not right.

Assumes the Euler angles correspond to the quaternion R via

R = exp(alpha*z/2) * exp(beta*y/2) * exp(gamma*z/2)

The angles are naturally in radians.

NOTE: Before opening an issue reporting something “wrong” with this function, be sure to read all of the following page, especially the very last section about opening issues or pull requests: https://github.com/moble/quaternion/wiki/Euler-angles-are-horrible

Mike Boyle, documentation for the “convert to euler angles” function in his quaternion library.

I needed a specific kind of Euler angles, not the one he provides, so I had to look elsewhere.

So I turned to scipy’s spatial.transform.Rotation module. This library is pretty overkill for us, but at least now I have all the Euler conversion functions I can dream of.
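For reference, this is the kind of conversion scipy makes trivial. A minimal sketch (the “ZYX” axis order here stands in for whatever convention libvlc actually expects):

from scipy.spatial.transform import Rotation

# Build a rotation from a quaternion (scipy uses x, y, z, w order)...
rot = Rotation.from_quat([0.0, 0.0, 0.7071, 0.7071])

# ...and read it back as intrinsic yaw/pitch/roll Euler angles, in degrees.
yaw, pitch, roll = rot.as_euler("ZYX", degrees=True)
print(yaw, pitch, roll)  # ≈ 90.0, 0.0, 0.0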

When trying to use scipy’s Slerp, I had trouble figuring out the correct syntax, and the documentation was not very helpful, so when I found a function called “geometric_slerp”, I tried it and it seemed to work.
It turns out “geometric_slerp” and “Slerp” are completely different methods that behave quite similarly on simple examples (the kind you use while figuring out what’s going on), but quite differently in other situations.

In the end, the Slerp function was the correct one, and I found a (rather dodgy) way of using it:

from scipy.spatial.transform import Rotation, Slerp

def make_slerp(current_rotation, target_rotation):
    # Create a single Rotation object holding both the current and the
    # target rotation, by converting them to quaternions and back:
    rots = Rotation.from_quat([
        current_rotation.as_quat(),
        target_rotation.as_quat(),
    ])

    # Create a Slerp instance, using the combined Rotation object from above
    return Slerp([0, 1], rots)
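Calling the resulting object with values between 0 and 1 gives the in-between rotations, which can then be converted back to Euler angles for libvlc (make_slerp is the helper above; the “ZYX” order is again an assumption):

slerp = make_slerp(
    Rotation.from_euler("ZYX", [0, 0, 0], degrees=True),
    Rotation.from_euler("ZYX", [90, 30, 0], degrees=True),
)

# Sample the interpolation and convert each step back to Euler angles:
for t in (0.0, 0.25, 0.5, 0.75, 1.0):
    yaw, pitch, roll = slerp([t])[0].as_euler("ZYX", degrees=True)
    print(yaw, pitch, roll)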

The end!

Interesting stuff I found during research

_field

A journey through the history of photography

The Musée de l’Elysée was looking for a new way to present part of its collection. In collaboration with Lab212, we created this interactive installation, where you move through a field of photographs by riding a swing.

The project was initially developed and displayed at the Musée de l’Elysée (Lausanne), and has since been adapted and shown in multiple museums and exhibitions:

Musée de l’Elysée – Lausanne

Art Genève

KIKK festival

The installation was shown at the 2021 KIKK festival in Namur.

Unfortunately, due to COVID restrictions, we couldn’t travel to Namur, so we created an “all-in-one” kit for an easy on-site installation.

Related projects

A similar concept (the interactive swing) has been adapted into two installations we created for Hermès.

  • Bastien Girschig: Development, UX
  • Beatrice Lartigue: Project management, concept, curation
  • Manuel Sigrist: Curation
  • Cyril Diagne: Original concept (interactive swing)

Living Archive

An AI performance experiment

Living Archive is a collaboration between Google Arts and Culture and Wayne McGregor, exploring the use of machine learning in a creative process.

This project has three main components:

  • A creative tool, created for Wayne McGregor and his dancers
  • A public experiment, showcasing Wayne’s archives in a new, engaging way
  • A film by Ben Cullen Williams, with visuals created from predicted data

Getting the data

Wayne McGregor provided us with his whole archive: footage from live performances, music videos, interviews, behind the scenes, etc…

Running pose detection on the archive revealed that a lot of it was not suitable for training a predictive model: low resolution, dark and stylised footage, edited footage, fast-moving cameras, etc…

Thankfully, Studio Wayne McGregor later provided us with good-quality footage and, more importantly, footage labelled by individual dancer.

All the footage (the initial archive and the labelled videos) was then run through pose-detection software (which detects people and the position of their arms, feet, head, etc… in an image), cleaned up and “packaged” as a dataset, for use in a machine learning model.
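The packaging step is conceptually simple: each detected pose becomes one flat vector of keypoint coordinates. A toy sketch (the 17-keypoint format is an assumption borrowed from the common COCO convention, not necessarily what we used):

import numpy as np

# Stand-in for real detections: one (17, 2) array of keypoints per frame.
detected_keypoints = [np.random.rand(17, 2) for _ in range(100)]

def pack_pose(keypoints):
    # Flatten the (x, y) keypoints into a single 34-value vector.
    return np.asarray(keypoints, dtype=np.float32).reshape(-1)

# The dataset is then just one vector per frame, stacked together.
dataset = np.stack([pack_pose(kp) for kp in detected_keypoints])
print(dataset.shape)  # (100, 34)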

Numbers

  • 103 archive videos
  • 30 single-dancer videos
  • 57 hours of video
  • 386 000 poses in final dataset

Making predictions

With that data, we needed to train an algorithm to create ‘new’ dance movements.

We settled on three methods:

  • LSTM: a long short-term memory model (the “real” machine learning method; see the sketch below this list)
  • t-SNE: plot the poses from the dataset in a 2D space, draw a line through that space, select the poses close to the line, and interpolate between them
  • Graph: the same idea as t-SNE, but with more dimensions
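To give a flavour of the LSTM approach: the model sees a window of past poses and learns to predict the next one; feeding its predictions back in then “hallucinates” new movement. A minimal sketch (layer sizes and dimensions are illustrative, not the production model):

import numpy as np
import tensorflow as tf

POSE_DIM, SEQ_LEN = 34, 30  # flattened keypoints, frames of context

# Given SEQ_LEN past poses, predict the next pose.
model = tf.keras.Sequential([
    tf.keras.layers.Input((SEQ_LEN, POSE_DIM)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(POSE_DIM),
])
model.compile(optimizer="adam", loss="mse")

# Random stand-ins for real (input window, next pose) training pairs.
x = np.random.rand(256, SEQ_LEN, POSE_DIM).astype("float32")
y = np.random.rand(256, POSE_DIM).astype("float32")
model.fit(x, y, epochs=1)

# Generation: repeatedly append the prediction and drop the oldest frame.
seq = x[:1]
for _ in range(60):
    nxt = model.predict(seq, verbose=0)
    seq = np.concatenate([seq[:, 1:], nxt[:, None]], axis=1)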

Choreography tool

We built a tool for Wayne McGregor and his dancers to easily request and use predictions:

A dancer records a movement using the webcam. Their pose is detected on each frame and analysed, and the result is fed to each prediction model (one per dancer).

The results are displayed in the tool and can be combined to create more complex outputs

Each figure starts with the same input movement, then predicts the rest according to each dancer’s “style”.

Seven days

Seven days is the name of the film created by Ben Cullen Williams as a backdrop for the live stage show’s scenography

The concept of the film was to showcase a spectrum of visuals, from the most abstract (binary numbers) to the most concrete (actual filmed dancers)

The creative developers at the lab created the raw visuals for the film, based on predictions generated by the choreography tool.

A spectrum of abstraction

Sharing the love

We wanted the general public to be able to explore Wayne’s archive as we did, in a new, engaging way

This experiment allows users to explore, curate and share the poses from Wayne’s archives, mapped using the t-SNE method (the more similar two poses are, the closer they are on the map).

Metadata is available for each pose, enabling users to learn more about its origin: videos of the performance (when we’re allowed to show them), behind-the-scenes material, details on the performance, etc…

Users can also use their webcam to find a pose by making one themselves

Press & awards

Living Archive in Frame Magazine

The project has been featured and covered in a variety of places, including:

The use of predictive analytics doesn’t have to result in predictive work. In fact, algorithmically generated outcomes can present an aesthetic that is unexpected and off the beaten track

HOW PREDICTIVE ANALYTICS CAN CHANGE OUR NOTION OF BEAUTY – Frame Magazine

Gallery

Early pose map experiment
Pose labelling tool
Early tool interface

People

The project was a collaboration between Studio Wayne McGregor and the Google Arts and Culture Lab:

  • Bastien Girschig: Machine learning model, development, project lead
  • Gael Hugo: Early UI experiments, Pix2Pix pose rendering
  • Damien Henry: Project management, ML mentor
  • Simon Doury & Romain Cazier: Early tool UI
  • Cyril Diagne: Technical help (Kubernetes and internal Google tools)
  • Mario Klingemann: Graph prediction model
  • Everyone at the lab: Visuals for the Seven days film

_Field for Hermès NYC

In collaboration with Lab212, we created an interactive installation for Hermès to highlight the brand’s saddle-making craftsmanship.

The installation is a variation on the _field installation: a video-interactive swing. Here, the swing is one of the saddles made by Hermès.

It was shown during Heureka!, the opening event of the new Hermès store on Madison Avenue, New York.

Other Projects

The event featured multiple other projects.

Behind the scenes

  • Nicolas Guichard / Lab212: Artistic direction, project management
  • Bastien Girschig: Development, UX

The [safe] Holy Bible

An AI experiment about AI, censorship and the Bible

Machine learning is often presented as a magical solution to any and all problems. In this experiment, I explore the potential of this technology to help moderate what people write on the internet.

I used Perspective API, a model developed by Google’s Counter Abuse Technology team and Jigsaw, which aims to “help ensure healthy dialogue online”.

Perspective API can help mitigate toxicity and ensure healthy dialogue online

Perspective API website

But instead of feeding it the usual text (user comments and contributions), I gave it every sentence in the Bible, and used the information returned by the tool to hide potentially offensive verses.
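Concretely, each verse goes through the API’s analyze endpoint and comes back with a toxicity score; anything above a threshold gets hidden. A sketch of the call (the 0.5 threshold is my illustration here, not necessarily what the project uses):

import requests

API_KEY = "..."  # a key from the Perspective API console
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(verse):
    body = {
        "comment": {"text": verse},
        "requestedAttributes": {"TOXICITY": {}},
    }
    scores = requests.post(URL, json=body).json()
    return scores["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Hide any verse the model flags as potentially offensive:
if toxicity("Thou shalt not kill.") > 0.5:
    print("[verse hidden]")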

About translation

While very powerful, the current tools don’t work very well with ancient Hebrew, so a choice had to be made about which translation to work with.

The New International Version (NIV) was a logical first choice, as it is the most popular one, but its licensing terms don’t allow reproducing the text in full. Instead, I decided to use the World English Bible, a public-domain, modern-English Bible.

To compare translations while reading, the user can click on a verse number. That verse will be opened on biblehub.com, where they can compare it with 27 different translations, including the NIV, the King James Version, etc…

Starfield — Feima Edition

A journey through an imaginary city between Paris and Beijing, in which objects take over. This dreamlike triptych lights up as the participant passes by, revealing another side of itself.

This installation is a collaboration with Lab212, based on _field for Hermès. It was made for Hermès and shown in Beijing in 2021.

Behind the scenes

The artistic direction called for multiple complex features: portals, mixed Bézier curves, dynamic animation paths, selective post-processing filters, edge detection, etc… Making them all work seamlessly together was quite challenging.

Thanks to this project, I also learned why SketchUp 3D models are generally frowned upon by game developers: while they look good from the outside, a lot of them are not optimised (pretty much the opposite, in fact: it would take some work to produce such an unoptimised model with other software)… My very beefy GPU was struggling to hold 30 FPS with just a handful of models.

Seeing the bright side of SketchUp: this gave me a chance to get familiar with Blender’s addon development, by creating a custom cleanup and model-optimisation tool. So, that’s something…
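The skeleton of such a cleanup tool is short. A simplified sketch of the idea, not the actual addon (the merge threshold is arbitrary):

import bpy

class MESH_OT_sketchup_cleanup(bpy.types.Operator):
    """Merge duplicate vertices and dissolve needless geometry."""
    bl_idname = "mesh.sketchup_cleanup"
    bl_label = "Clean up SketchUp mesh"

    def execute(self, context):
        bpy.ops.object.mode_set(mode="EDIT")
        bpy.ops.mesh.select_all(action="SELECT")
        bpy.ops.mesh.remove_doubles(threshold=0.001)  # merge by distance
        bpy.ops.mesh.dissolve_limited()  # collapse coplanar clutter
        bpy.ops.object.mode_set(mode="OBJECT")
        return {"FINISHED"}

def register():
    bpy.utils.register_class(MESH_OT_sketchup_cleanup)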

Also, thanks to COVID, the whole setup had to be done remotely, using a combination of Chinese chat apps, screen sharing and CCTV feeds. This was fun!

Hangman machine

Overall, I would rate the global COVID pandemic one star at best, but in some small ways it was actually a good time.

One of those rare cases happened during the first lockdown. The government had issued strict orders: stay inside, don’t meet other people, etc…

One day, we found a note hanging from a string in front of our window. It was our upstairs neighbours wishing us a happy Easter. We replied with another note, attached to the same string.

From then on, we exchanged messages, drawings, presents and mini games with them and their daughter through this new communication channel

One such mini game was the classic hangman. However, because I’m a nerd and my adversary couldn’t see what I was doing while waiting for my answer, I found a word list and started working on a hangman solver.

Yes, that’s cheating, but it’s much more fun to develop an algorithm to solve this problem than to just guess letters…

The code is available on GitHub.
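The core idea fits in a few lines: keep only the words compatible with the revealed pattern and the failed guesses, then suggest the most common remaining letter. A simplified sketch, not the actual repo code:

from collections import Counter

def best_guess(pattern, excluded, words):
    # pattern: e.g. "_a___an", with "_" for unknown letters.
    def matches(word):
        if len(word) != len(pattern):
            return False
        for w, p in zip(word, pattern):
            if p == "_":
                # Hidden slots can't hold failed guesses, nor a letter
                # that is already revealed elsewhere in the pattern.
                if w in excluded or w in pattern:
                    return False
            elif w != p:
                return False
        return True

    candidates = [w for w in words if matches(w)]
    counts = Counter(c for w in candidates for c in set(w) if c not in pattern)
    return counts.most_common(1)[0][0] if counts else None

words = ["hangman", "hangout", "mailman"]
print(best_guess("_a___an", "o", words))  # -> a letter worth trying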

Dites Côme moi

Dites Côme moi is a simple teleprompter app, initially created for my brother Côme (hence the name).

Google Arts and Culture VR

A Virtual Reality app for Google’s VR headsets (Google Cardboard, then Google Daydream).

The app lets you explore the Google Arts and Culture collection, zoom in to see brush strokes, view artworks at their real size, and listen to audio guides by expert museum curators.

Textured artworks

For this project, I also worked on a technique to recover the tiny bumps on paintings, to give them a more realistic texture.


How it works

The technique is based on differential lighting: take a picture with light coming from the top, another with light coming from the bottom, subtract the two, add some filtering (to get rid of the lighting gradient across the painting), and you have your horizontal bumps.

Repeat with left/right lighting and you get your vertical “bumps”. Combine those with the horizontal bumps, add some processing (e.g. normalisation), and you get your normal map.
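In numpy terms, the whole pipeline is only a few lines. A simplified sketch, assuming four aligned grayscale photos (the blur size and Z strength are arbitrary):

import numpy as np
from scipy.ndimage import gaussian_filter

def normal_map(top, bottom, left, right):
    # Differential lighting: each subtraction isolates bumps along one axis.
    dy = top.astype(np.float32) - bottom.astype(np.float32)
    dx = left.astype(np.float32) - right.astype(np.float32)

    # High-pass filter: subtract a heavy blur to remove the broad
    # lighting gradient across the painting.
    dy -= gaussian_filter(dy, sigma=50)
    dx -= gaussian_filter(dx, sigma=50)

    # Pack the two gradients plus a constant Z into unit-length normals.
    n = np.stack([dx, dy, np.full_like(dx, 64.0)], axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)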

Generalizing

Now, all we need to do is apply the technique to all the artworks in the Google Arts and Culture collection… That would be challenging, so instead we decided to get the help of a machine learning model.

We used the examples we did have (like the city painting used in our tests) to train a computer to “go from one to the other”. Here is an example of what it learned to do:

  • Antoine Guerchais: Unity development, prototypes
  • Jonathan Tanant: Unity development, prototypes
  • Bastien Girschig: Early prototypes, Unity development, painting texture detection
  • Google Arts and Culture, Google VR: The project involved multiple talented people from the Google Arts and Culture team, as well as the Google VR team

Peau de chagrin

Peau de chagrin is a small educational YouTube channel focused on environmental topics. I produced, edited and moderated comments for some of its videos.

A response to French climate skeptic François Gervais
  • Côme Girschig: Research, writing, acting
  • Bastien Girschig: Production, editing, moderation

Draw to Art

Draw to Art is an experiment where you use your drawings to find and discover artworks. The experiment is available to visitors of the Google Arts and Culture Lab.

Due to concerns around moderation of offensive drawings, the public experiment is a variation on that idea, where the user draws with predefined shapes instead.

  • Bastien Girschig: Frontend, project management
  • Romain Cazier: Frontend

Virtual orchestra

Virtual orchestra is an experimental tool for practising orchestral conducting: using a VR controller as a baton, you conduct a full orchestra through any musical piece you want.

The sound may not be as pleasant as a real-life orchestra, but it’s a hell of a lot cheaper to train with (a not-insignificant factor for music students).

The prototype still has many issues, but it already provides a full orchestra simulation, with the baton controlling the tempo, nuances (loudness) and articulation (legato/staccato) in real time.
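How do you get a tempo out of a baton? One simple approach is to treat each local minimum of the baton’s height as a beat (the “ictus”) and derive the tempo from the intervals between beats. A toy sketch of that idea, not the actual implementation:

def detect_beats(heights, timestamps):
    # A beat is roughly the moment the baton stops moving down
    # and starts moving up again: a local minimum in height.
    beats = [
        timestamps[i]
        for i in range(1, len(heights) - 1)
        if heights[i] < heights[i - 1] and heights[i] < heights[i + 1]
    ]
    intervals = [b - a for a, b in zip(beats, beats[1:])]
    if not intervals:
        return None
    return 60.0 / (sum(intervals) / len(intervals))  # tempo in BPM

# Two beats, one second apart -> 60 BPM:
print(detect_beats([1.0, 0.5, 1.0, 0.5, 1.0], [0.0, 0.5, 1.0, 1.5, 2.0]))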

Elisabeth Callot with a computer-controlled piano, and yours truly embarrassing himself

You may be able to detect some slight issues with the tempo here and there… Keep in mind this is a prototype.

  • Elisabeth Callot: Original concept
  • Bastien Girschig: Development, UX
  • Jonathan Tanant: Development, UX

Capécure

A journey through Capécure, an industrial hub in my hometown

VersaillesVR: the Palace is Yours

I was very lucky to be part of the Versailles VR project, a high quality virtual reality tour of the Palace of Versailles.

The “scan” was made using photogrammetry: from thousands of pictures of every little detail in the rooms, software is able to reconstruct the 3D geometry of the original.

I was only part of the photography team, and was not involved in the development of the final application itself.

Wimbi

Wimbi is a mixed-reality game, played with an iPad and a table. It was created as my Media and Interaction Design diploma project at ECAL.

Process

The obvious technical challenge here was detecting the position of taps on the table. The rest (linking up the detection and the game, creating the game itself, etc…) were secondary problems in my mind.

Step 1 – Piezoelectric elements

These things are cheap, and they work quite well for detecting small vibrations, so I decided to use them. The theory was that, by detecting when the vibration was received at each sensor, then computing the time difference between them, I could approximate the relative distance between the “tap” and each sensor.
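On paper, the maths is trivial: with a known propagation speed in the tabletop, each arrival-time difference translates into a distance difference. A toy sketch (the propagation speed is made up):

SPEED = 1000.0  # m/s, an invented figure for vibration in a wooden tabletop

def distance_difference(t_a, t_b):
    # How much farther the tap is from sensor B than from sensor A.
    return SPEED * (t_b - t_a)

# e.g. a 0.2 ms gap between two sensors:
print(distance_difference(0.0, 0.0002))  # 0.2 m farther from sensor B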

In practice, this turned out to be quite tricky.

I ran a small experiment, connecting a few piezo elements to an oscilloscope, to confirm that there was a detectable time difference at the scale and with the materials I was working with. There was. I was happy.

The next step was simply to use an Arduino to do the same thing as the oscilloscope (note to self: an Arduino is not an oscilloscope).

Then the first issue came: the signal was too low. Not a problem, I’ll just create a pretty little PCB with an amplifier for each sensor:

And the result was:

… disappointing. This would never work in time. On to the next solution!

Step 2 – Ultrasonic proximity sensors

A bit less “low-tech” than the previous ones, but now time is running out and I need a solution. These will do just fine.

I connect a bunch of them to an Arduino, write a bit of triangulation code, and… it actually works!
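The triangulation itself boils down to intersecting two circles: each sensor reports a distance, and the tap sits where the distances agree. A toy sketch (the 0.5 m sensor spacing is made up):

import math

def locate(d1, d2, spacing=0.5):
    # Two sensors sit `spacing` metres apart on one edge of the table,
    # each reporting its distance to the tap. Intersect the two circles:
    x = (d1**2 - d2**2 + spacing**2) / (2 * spacing)
    y_squared = d1**2 - x**2
    if y_squared < 0:
        return None  # inconsistent readings
    return x, math.sqrt(y_squared)

print(locate(0.30, 0.35))  # position relative to the first sensor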

The tape rectangle on the table represents the theoretical position of the iPad. Tapping on a corner of the tape rectangle should result in a wave being created in that corner of the iPad. And it does!

But there are still some issues: the setup is very brittle (the sensors can’t be moved relative to each other), there is some interference between the sensors, you have to tap the table quite hard (there is still one piezo listening for vibrations) but your hand can’t be in the way of the sensors, etc…

This is not good enough. Let’s go deeper (shallower?) in the levels of abstraction.

Step 3 – Why not a Kinect?

I know, right?

Yes, those existed back then; no, I did not think of using one until that point (three weeks before the deadline). So I spent a few nights writing code to map the Kinect’s depth-sensor data into a touch interface, and voilà!
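The mapping is essentially background subtraction: record the depth image of the empty table once, then flag any pixel that gets within a few millimetres of that surface as a touch. A toy sketch (the resolution and threshold are illustrative):

import numpy as np

def find_touches(depth, background, threshold_mm=8):
    # Anything closer to the camera than the bare table, but only by
    # a few millimetres, is treated as a fingertip on the surface.
    delta = background - depth
    touching = (delta > 0) & (delta < threshold_mm)
    ys, xs = np.nonzero(touching)
    return list(zip(xs, ys))  # pixel coordinates of touch points

background = np.full((424, 512), 1200.0)  # empty-table depth, in mm
depth = background.copy()
depth[200, 300] = 1195.0  # a fingertip touching the table
print(find_touches(depth, background))  # [(300, 200)]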

With that out of the way, I need to create a game

Step 4 – Creating the game

The concept of the game is there; what I need is levels. Creating levels in code may be fun, but it’s also very inefficient, so I created this cute little editor:

And there we are! The game was well received by the people there, and went on to not be a viral success (maybe the fact that it requires a Kinect, an iPad and a laptop had something to do with it…)

digital edition cover

Ten of September two thousand and eight

This is a digital edition project, made at ECAL. It showcases two divergent viewpoints, using the portrait orientation for one, and landscape for the other.

The story revolves around the creation of the Large Hadron Collider, and the conspiracy theories it generated (earth-gobbling black holes, a gateway for the literal Satan, etc…)

The vertical side focuses on the scientific point of view: what is the LHC, what are we looking for with it, what have we discovered? It includes a (written) interview with theoretical physicist Lawrence Krauss.

The horizontal side focuses on the… less scientific point of view: how the LHC will open a gate in the Van Allen belt for Satan, and why secret societies might be looking forward to that.

  • Joana Castro: Graphic design
  • Margrét Gyða Jóhannsdóttir: Photography
  • Bastien Girschig: Motion design, development, interviews

Screen break

Breaking a screen can happen to anybody. Especially me

This is what happened to me during a project at ECAL: carrying a pile of three iPads in each hand (for the Totem project), I tripped and slammed them together. Since they weren’t mine, I had them fixed, but it got me wondering: why not keep a broken but still working screen?

Many brands focus on making your phone “personal” and “unique”. Yet, what’s more unique than a broken screen’s cracks?

A keyboard made specially for you!
If customized screens get trendy, we might see some adverse side-effects

Redoit

Redoit is a computer program that gives precise instructions to a human operator to build a wooden object. The instructions are given in a straightforward, step-by-step fashion, so as to not overwhelm the operator.

The operator should not try to understand the final object: the machine decides what it wants, for its own internal reasons; the human executes it.

Final objects

SmoothRide

While riding my bike to and from school, I noticed I had become really good at avoiding the potholes, manhole covers and other bumps on the road (for comfort, and to preserve my tires).

This gave me the idea for a game that scores how smooth your ride is. The app does not provide a definitive score; it is up to the player(s) to decide what they want to compete on.

A game, on a phone, on a bike, on the road… That seems like a great recipe for disaster. I did not die while using this, but that doesn’t make it a good idea.

Tamagotree

This tree has chosen you as its caretaker. You will have to tend to all its needs: water, nutrients, fend off attackers, help it grow its leaves and roots, etc…

Along the way, there will be challenges: diseases, humans, shadow from other trees, volcano clouds, and droughts

But in the end, you’ll have a beautiful, unique tree that you can share with friends, and hopefully some new knowledge about the secret life of trees

Below are some images from an early prototype of the game, created during a one-week “hackathon” at the Google Arts and Culture Lab.

Totem

Interactive installation with six iPads, created as a student project at ECAL.

Early tests
  • Guillaume Cerdeira: Graphic concepts and development
  • Bastien Girschig: Graphic concepts and development
  • Gael Hugo: Teacher

Chocolate Obsolescence

Chocolate is not the most reliable material for building electrical appliances. But I did it anyway…