All posts by Jack Evans

The Butterfly Effect Part 1 – An Interactive Projection of Lepidoptera with Accession Data Input… easy right?

In July 2022 we launched our exhibition ‘Think Global: Act Bristol’ at M Shed. The exhibition informs the public about climate change as a global issue, while showing how Bristol can act, and is already acting, to fight it. It’s an important topic that reaches through many aspects of society, including nature.

This interactive was conceived for the ‘Nature’ section of the exhibition. Its purpose? To allow the public to help accession our collection of Lepidoptera. Visitors do this by transcribing the data shown in photographs of our specimens, each of which is photographed with its original handwritten accession label in shot. The data is entered through a web form on a computer set up in the gallery, which is accompanied by an interactive projection wall.

The interactive wall gives people a fun experience in the gallery: the projected Lepidoptera respond to visitors’ movement in front of the wall. The wall also plays an animation after each accession entry is submitted, chosen according to the data entered by the member of the public. There are three animations, one for each classification of our Lepidoptera: butterflies, moths and extinct species.

How it Works

The interactive uses a keyboard, mouse, screen, projector and camera. These carry out its two functions: accession data entry and the interactive wall. The form lets people transcribe accession data from photos of our Lepidoptera, which include their paper accession labels. An example of one of these images is shown below.

An image of a ‘Celastrina argiolus’ with its accession data.

The form has the necessary fields, with validation where needed to ensure the data entered is usable. The fields are as follows:

  1. ID Letters
  2. ID Number
  3. Species Name
  4. Collector’s Name
  5. Sighting Day
  6. Sighting Month
  7. Sighting Year
  8. Location
  9. Other Number
Data entry page with data entry points listed and a photo for transcription

All of these fields have validation that restricts what can be entered, and some of them (Species Name, Collector’s Name, Location) have an autocomplete feature. This kicks in after four consecutive characters that exactly match one of the possible entries for that field. It helps the public get the spelling right and speeds up data entry. Requiring four correct characters before suggestions appear also deters spam entries, as a member of the public can only submit an entry once it passes all four required validation points.

Screenshot of a data entry point showing an autofill suggestion for a species that could be entered.
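For the curious, the autocomplete rule is easy to sketch in a few lines of Python. This is just an illustration of the logic rather than the form’s actual code, and the species list below is a placeholder:

    # Sketch of the autocomplete rule: suggestions only appear once at least
    # four typed characters exactly match the start of a known value.
    KNOWN_SPECIES = ["Celastrina argiolus", "Vanessa atalanta", "Pieris brassicae"]  # placeholder list

    def suggest(typed: str, known_values: list[str], min_chars: int = 4) -> list[str]:
        """Return possible completions once enough characters match exactly."""
        if len(typed) < min_chars:
            return []  # too few characters: no suggestions yet, which also deters spam
        return [value for value in known_values if value.startswith(typed)]

    print(suggest("Cel", KNOWN_SPECIES))   # [] - only three characters typed
    print(suggest("Cela", KNOWN_SPECIES))  # ['Celastrina argiolus']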

Once the data is entered correctly and Submit is pressed, a loading screen appears. It stays on until an animation corresponding to the type of Lepidoptera is shown on the interactive wall.

The interactive wall uses an ultra-short-throw projector to front-project Lepidoptera onto a wall in the gallery. Because the projector is mounted very close to the wall, it is hard for people to cast shadows on the projection. As we were not able to rear-project, this was the next best setup, and it still achieves an image over three and a half metres wide, which gives a good area for interaction.

An Azure Kinect mounted away from the wall captures a depth image of everything in shot. This depth image is used to detect motion in front of the wall, which in turn affects the Lepidoptera in the area around the movement. More Lepidoptera build up on the projection with every entry made that day.

How it Works: The Nerd Version

The interactive runs on two systems, with one system referencing the other. The data entry system is a Python Flask server, which runs on Apache and can be run on a Windows PC or a Linux server. I have yet to run the Linux server version in the gallery, though, due to some outstanding compatibility improvements and not being able to sort out terms and conditions for this exhibition as of typing.

The server serves the client the data entry form with a randomly chosen image for transcription alongside it, and the data input for each entry is saved to a timestamped JSON file. This file contains all the data fields as well as the filename of the image, meaning all the data can be linked and sorted through afterwards in order to upload it to our database. The server also updates a file recording the latest species entered, which the interactive wall’s system uses to trigger animations.
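As a rough sketch of that flow (route names, field names and file paths here are illustrative, not the exhibit’s actual code), the server side boils down to something like this:

    # Illustrative Flask sketch: serve the form with a random specimen photo,
    # save each submission to its own timestamped JSON file, and record the
    # latest species for the interactive wall to pick up.
    import json
    import random
    import time
    from pathlib import Path

    from flask import Flask, render_template, request

    app = Flask(__name__)
    IMAGE_DIR = Path("static/specimens")      # photos of Lepidoptera with their labels
    ENTRY_DIR = Path("entries")               # one JSON file per public submission
    LATEST_FILE = Path("latest_species.txt")  # read by the interactive wall

    @app.route("/")
    def entry_form():
        # Pick a random photo for the visitor to transcribe.
        image = random.choice(list(IMAGE_DIR.glob("*.jpg")))
        return render_template("form.html", image=image.name)

    @app.route("/submit", methods=["POST"])
    def submit():
        data = request.form.to_dict()  # all the transcribed fields, plus the image filename
        stamp = time.strftime("%Y%m%d-%H%M%S")
        ENTRY_DIR.mkdir(exist_ok=True)
        # Keeping the image filename with the fields means each transcription can be
        # linked back to its specimen photo when uploading to the database.
        (ENTRY_DIR / f"entry-{stamp}.json").write_text(json.dumps(data, indent=2))
        LATEST_FILE.write_text(data.get("species_name", ""))  # signal for the wall
        return render_template("loading.html")

One JSON file per submission keeps entries independent, so a junk submission can be discarded later without touching the rest.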

The interactive wall runs on a Touchdesigner project that I created, which uses an Azure Kinect to see people and work out where to apply movement to the Lepidoptera in the projection. Touchdesigner is a real-time visual development platform for building interactive installations; it’s a node-based programming environment that lets interactives like this be created in good time. The project uses a particle system (particleGPU) fed by three videos: one each for butterflies, moths and extinct species. These videos are mapped onto 2D planes that move and rotate in 3D space; these planes are the ‘particles’. The particles are affected by optical flow, which Touchdesigner generates by analysing motion in the depth image. Areas where it detects motion are then applied to the particleGPU output to move the particles in those areas.
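Inside Touchdesigner that is all nodes rather than code, but the underlying idea can be sketched in plain Python with OpenCV, assuming you already have consecutive depth frames converted to 8-bit images and particle positions in an array:

    # Illustration of the optical-flow idea, not the Touchdesigner network itself:
    # estimate motion between consecutive depth frames, then nudge any particles
    # sitting in areas where motion was detected.
    import cv2
    import numpy as np

    def motion_field(prev_depth: np.ndarray, curr_depth: np.ndarray) -> np.ndarray:
        """Per-pixel motion vectors (H, W, 2) from two single-channel 8-bit depth frames."""
        return cv2.calcOpticalFlowFarneback(prev_depth, curr_depth, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)

    def push_particles(particles: np.ndarray, flow: np.ndarray, strength: float = 0.5) -> np.ndarray:
        """Move each particle (x, y) by the flow vector sampled at its position."""
        xs = particles[:, 0].astype(int).clip(0, flow.shape[1] - 1)
        ys = particles[:, 1].astype(int).clip(0, flow.shape[0] - 1)
        return particles + strength * flow[ys, xs]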



For the entry animations that play when a member of the public submits an entry, there are three more videos, again one each for butterflies, moths and extinct species. Touchdesigner overlays these onto the particleGPU output when the Flask server signals that it has had a new entry, then checks which animation should be played so that it corresponds with the relevant Lepidoptera. This process works, but it is not instantaneous, and it’s one of the elements of this interactive I wish to improve for future use.
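Because the hand-off goes through that ‘latest species’ file, the wall side effectively has to notice the file changing and map the species to one of the three animations, roughly as sketched below (the file name, species lists and polling interval are all placeholders):

    # Illustrative sketch of the wall side noticing a new entry and choosing an
    # animation. The species examples are invented, not our catalogue.
    import time
    from pathlib import Path

    LATEST_FILE = Path("latest_species.txt")
    ANIMATIONS = {"butterfly": "overlays/butterfly.mp4",
                  "moth": "overlays/moth.mp4",
                  "extinct": "overlays/extinct.mp4"}

    def classify(species: str) -> str:
        # In practice the classification would come from the collection data.
        butterflies = {"Celastrina argiolus"}  # hypothetical example
        extinct = {"Lycaena dispar dispar"}    # hypothetical example
        if species in extinct:
            return "extinct"
        return "butterfly" if species in butterflies else "moth"

    def watch(poll_seconds: float = 1.0) -> None:
        last_seen = ""
        while True:
            current = LATEST_FILE.read_text().strip() if LATEST_FILE.exists() else ""
            if current and current != last_seen:
                last_seen = current
                print("New entry - play", ANIMATIONS[classify(current)])
            time.sleep(poll_seconds)  # polling like this adds latency, hence the delay mentioned above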

What’s next?

As of typing, the exhibition has yet to finish, and I am hoping to add some improvements to the interactive before it’s derigged, as having it in the gallery is a good test bench for making solid changes. These changes include:

  • Reworked CSS to improve compatibility on smartphones
  • Have the Linux version up and running on our server so the public can enter data on their own devices
  • Decrease the latency between both systems by taking a different approach for their communication
  • Add analytics to the Touchdesigner project so we can gather data

As of typing we have over 1,500 entries from the public, which should enable us to catalogue hundreds of these Lepidoptera; fantastic news for us! I think this interactive has big potential for other museums, and I’m hoping I can provide versions of it to other sites in future.

The current plan is for this interactive to return as a permanent installation, so I intend to make these changes for that. I will post a second blog on labs once I’ve done some upgrades and analysed the data we have gathered from this exhibition.

Special thanks to Bristol Museums Development Trust and the ‘Think Global: Act Bristol’ exhibition for making this all possible.

M Shed’s Lodekka Bus Interactive

So quite a few things have happened since my last blog post here… Notably, on our end, the M Shed Bus Interactive!

Over the Covid period, using part of our grant from the Arts Council’s Culture Recovery Fund, we decided to make our iconic Lodekka bus an interactive installation aimed at sending kids around the museum to find objects in our collection.

The goal was to create a new way of revitalising M Shed with Covid-safe interactivity, specifically using the Lodekka bus. The bus is a significant part of our collection and was accessible on both floors before Covid; due to the pandemic, however, it has now remained closed for nearly two years as of typing.

How it works

We wanted to give the bus some life back in these times, and to do so in a way that, if it reopens, would not restrict access but add to the experience. A project was therefore commissioned to project interactive characters into the windows. These characters (specifically in the bottom three windows on the bus’s left side) can be waved at and will respond with a story about an object in our collection.

As shown below, the interactive projects onto nine of the windows on the entrance side of the bus, and has a conductor character on the TV next to the Lodekka signposting people to the interactive. Each of the three interactive windows has a hand icon that fills up based on how close it is to being activated by waving.

This video shows the functionality of the interactive windows.

How it works (The Nerd Version)

The system uses three Azure Kinects, each hooked up to its own computer equipped with an 8-core i7 processor and an RTX graphics card. The three PCs are hooked up to four projectors (one machine handles two projectors), giving each machine one Azure Kinect covering one of the three interactive windows on the bottom floor of the bus. All the PCs run the same Touchdesigner project and talk to each other to coordinate what the characters are doing in the bus windows, depending on which sensor is triggered.
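The coordination itself happens inside the shared Touchdesigner project, but as an illustration of the idea, machines on the same network could tell each other which sensor has fired with something as simple as a UDP broadcast (the port and message format here are made up):

    # Illustration only: a simple UDP broadcast so each machine knows which
    # window's sensor was triggered. The real project handles this within
    # Touchdesigner, and this port and message format are invented.
    import socket

    PORT = 9000  # hypothetical port

    def broadcast_trigger(window_id: int) -> None:
        """Announce that this machine's sensor detected a wave."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(f"wave:{window_id}".encode(), ("255.255.255.255", PORT))

    def listen_for_triggers() -> None:
        """Each machine listens so it can keep its characters in sync with the others."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", PORT))
        while True:
            message, _ = sock.recvfrom(1024)
            _, window_id = message.decode().split(":")
            print(f"Sensor at window {window_id} triggered")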

The characters are pre-made animations, with each video looping back to the same start and end frame so that videos can change over seamlessly. Each projector covers two windows, so there are two characters per projector. The bus windows are covered in Contravision, which lets you see the projected image while still being able to see into the bus, and out of it from the inside.

Touchdesigner also allows us to projection-map the videos onto the windows, making them work perfectly in situ. The wave detection can tell when a hand is both raised and moving, and a threshold is set for the amount of motion required. Visual feedback is given via a hand icon that shows how close the motion is to the threshold. Once the threshold is passed, a wave has been detected and the video content changes: the character then tells you about an object in the museum. Because the system works by swapping videos over, characters can be replaced with new ones whenever we want them created.
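The detection itself comes from the Azure Kinect’s skeletal tracking inside Touchdesigner, but the rule described above can be sketched as follows, assuming you already have per-frame hand and elbow positions from the tracker (the threshold and decay values are invented):

    # Sketch of the wave rule: the hand must be raised and moving, and the
    # accumulated motion must pass a threshold before a wave counts.
    class WaveDetector:
        def __init__(self, threshold: float = 1.5, decay: float = 0.9):
            self.threshold = threshold  # how much motion counts as a deliberate wave
            self.decay = decay          # the level fades if the visitor stops moving
            self.level = 0.0
            self.prev_hand_x = None

        def update(self, hand_x: float, hand_y: float, elbow_y: float) -> bool:
            hand_raised = hand_y > elbow_y  # assuming y increases upwards in this sketch
            if hand_raised and self.prev_hand_x is not None:
                self.level += abs(hand_x - self.prev_hand_x)  # side-to-side movement
            self.level *= self.decay
            self.prev_hand_x = hand_x
            # self.level / self.threshold could drive the hand icon's fill level.
            return self.level >= self.threshold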

Side of the Lodekka bus, six windows on show, all with characters projected into them: a young man with a beard top left, a woman with shopping and a headdress top centre, an old man top right, an old woman with a newspaper bottom left, a teenager with a phone bottom centre and a boy with a dinosaur bottom right.


Research/Procurement 

I was given the task of researching the technical side as a whole to make sure this would work: most notably, getting the system to recognise waving as a trigger for content, making that work with the hardware available, and finding a developer who could pull it off.

This was a leviathan of a project to pull off in the timeframe, and we made use of some fantastic developments in interactive technology to achieve it: most notably Azure Kinect sensors and Touchdesigner, a real-time visual development platform that lets you create interactive installations with less code, and whose visual programming allows for quicker development.

It’s a software package I’ve been interested in for a while, as it allows you to mock up interactives using sensors far more quickly than coding them from scratch; most of the glue code you would need to join up different devices and multimedia is built into the software. It’s also not restrictive, in that you can still use Python within Touchdesigner to add functionality where there is no native way of achieving what you want.
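As a tiny example of the kind of gap that Python layer can fill (the operator name below refers to a node you would have made in your own project, and the video paths are placeholders):

    # A small script of the sort you can run inside Touchdesigner: swap the
    # video a Movie File In TOP is playing depending on the species type.
    # 'moviefilein1' is an operator you would have created yourself.
    def play_species_video(species_type: str) -> None:
        videos = {"butterfly": "videos/butterfly.mp4",
                  "moth": "videos/moth.mp4",
                  "extinct": "videos/extinct.mp4"}
        movie = op('moviefilein1')  # Touchdesigner's op() looks up a node by name
        movie.par.file = videos.get(species_type, videos["moth"])
        movie.par.play = True       # make sure the clip is playing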

The timeframe to get this project going was set at the peak of Covid and was restrictive for numerous reasons: electronics supply chains were suffering, sensors could not be tested on more than one person, and restricted access to site affected testing and brainstorming of the project’s concept and logistics.

In particular, this made it difficult to research whether a sufficiently robust sensor for detecting a wave was readily available to us, along with a developer who could work with it and hardware powerful enough to run the detection software. After getting hold of some sensors, we decided the most robust option was the Azure Kinect, which has solid skeletal tracking; that is what we use to detect waving.

Due to how niche this project was, finding a developer able to pull it off was difficult. Freelancers were definitely the way to go, as few companies are willing to take this on without doing the entire project (content as well as hardware), let alone without charging an astronomical fee (tens if not hundreds of thousands of pounds). Getting all of this to fit together and work was probably the hardest turnaround I’ve done here thus far.

We also had issues procuring computers with powerful enough graphics cards to run the Azure Kinect sensors (a key reminder that ordering by request does not guarantee you that product at the end, even after payment). Thankfully we had a computer delivered before install week, months after putting in the order. It all pulled together in the end, and we got a fantastic developer, Louis d’Aboville, who has done numerous projects with Touchdesigner and did a fantastic job on this one.

Development/Installation 

Once the project was green-lit and the purchases made, Louis began the software development; his use of Touchdesigner has given us a changeable, upgradable and robust system that achieved this complex project. Once the software was finished, we began installing the hardware in July, when the bulk of the install work was done. Alongside this, the content development was given to Kilogramme, who did a stellar job working within the constraints needed for the content to work with the system: in particular, making the content the right length so that triggering the interactive feels quick, keeping continuity by using the same start and end frames, all whilst making the animation look convincing.

Because of the state of the pandemic at the time, planning a timeline that would fit around staff’s other obligations was nigh impossible, so nailing down the install date took a while. Remobilisation work to get our sites reopened and fully running had to take precedence, and exhibitions such as Vanguard Street Art and Bristol Photo Festival also drained our capacity. So I would again like to thank both Louis and Kilogramme for the work they did with an ever-changing set of dates for key work to be completed.

And in October 2021 we launched the interactive to the public!

Where do we plan to go from here?

We don’t plan to stop after the initial launch. As the system was designed with flexibility in mind, we want to use analytics integrated by the developer to figure out how to improve it over time, optimising the gesture recognition by walking the line between robustness and sensitivity to the various ways humans wave. We can also use signage in the interactive area to give visual cues on how best to interact with the system, and add themed content for festive periods: Christmas with snow, Halloween with pumpkins, Easter with eggs, and so on. On top of this there is still more we could do with the system in time.

I believe this system shows the capability of Touchdesigner in museums. It can cover most types of interactive that museums would make, whilst being a piece of development software that I think most technicians could pick up themselves over time. It has numerous uses beyond sensors, cameras and video content: it can manipulate content, projection-map and render real-time 3D content, and all of these elements can be linked to each other in one project, in real time. A good video showing its use in museums can be seen here.

I have been learning the software myself and have been able to pull off some basic interactivity using the Azure Kinect and other sensors. In time I aim to build on this knowledge and apply it in the museum where and when possible, to pioneer new interactivity across our sites.

Bus Window with sensor in front. Window has a projected character, a little boy with a dinosaur toy.

A Special Thanks to Bristol Museums Development Trust and Arts Council England for making this possible.

Arts Council England Logo
Bristol Museum Development Trust Logo

My Experience as a new Digital Assistant for Bristol Culture

My name is Jack Evans and I’m one of the new Digital Assistants at Bristol Culture. I am currently based at Bristol Museum & Art Gallery and I help the Museum by maintaining the technology we have in the galleries.

I am from Dursley, Gloucestershire, and have lived in the South West for most of my life. After secondary school, I stayed on to do A-Levels in Computing, ICT and Art, and then went on to do a Foundation Diploma in Art and Design at SGS College. After that I went to university, and as of this summer I have finished my degree in Fine Art at Falmouth University in Cornwall, where I specialised in video art, photography and installations. I did a lot of my work there using AV, projection in particular: I put on a video art exhibition with other artists at a gallery in Falmouth, and throughout my degree I collaborated on many AV-based art pieces.

I have always been very “techy” and have been building and fixing tech since my early teenage years. After doing my degree I still wanted to be connected to art and culture, but I also wanted to use my technical side, so I am incredibly happy to be part of the Digital Team here at Bristol Culture and able to contribute to the work we do. So far my colleague Steffan, who is also a new Digital Assistant, and I have been experimenting with ways of auditing and managing all the tech across the museums. This will let us know exactly what technology we have in the galleries, what is available to replace older exhibits, and where we can start creating new and improved interactives over the next year.

I have been maintaining and fixing the interactives we have at Bristol Museum & Art Gallery. Yesterday I helped Zahid, our Content Designer, fix a screen in the Magic exhibition gallery, which required altering the exhibition structure and threading cables to the screen from above. We are starting to have fewer issues with interactives at Bristol Museum & Art Gallery: now that I’m here identifying and solving issues, we as a team have more time to come up with new ideas and improvements and spend less time on maintenance.

I have also been cataloguing the interactives we have in the galleries and collecting content from old interactives so we can begin to refresh them. I have also helped the Curatorial team figure out what technology to purchase or rent for an upcoming exhibition next year; exhibition problem-solving is something I’ve always wanted to do in my career, so I was very happy to be part of that process. My experience over the past few weeks has been great. I have loved helping out and keeping the tech running for visitors, I look forward to more projects in the future, and I am very proud to work here.