
Mshed’s Lodekka Bus Interactive

So quite a few things have happened since my last blog post here… most notably, on our end, the Mshed Bus Interactive!

Over the covid period, using part of our grant from the Arts Council's Culture Recovery Fund, we decided to turn our iconic Lodekka bus into an interactive installation aimed at sending kids around the museum to find objects in our collection.

The goal was to create a new way of revitalising Mshed with covid-safe interactivity, specifically using the Lodekka bus. The bus is a significant part of our collection and was accessible on both floors before covid; however, due to the pandemic, it has now been closed for nearly two years as of writing.

How it works

We wanted to give the bus some life back, and to do it in a way that, if the bus is opened again, would not restrict access but add to the experience. A project was therefore commissioned to project interactive characters into the windows. The characters in the bottom three windows of the bus's left side can be waved at, and will respond with a story about an object in our collection.

The interactive, as shown below, projects onto nine of the windows on the entrance side of the bus, with a conductor character on the TV next to the Lodekka signposting people to it. Each of the three interactive windows has a hand icon that fills up based on how close it is to being activated by waving.

This video shows the functionality of the interactive windows.

How it works (The Nerd Version)

The system uses three Azure Kinects, each hooked up to its own computer equipped with an 8-core i7 processor and an RTX graphics card. The three PCs are hooked up to four projectors (one machine handles two projectors), which gives each machine one Azure Kinect covering one of the three interactive windows on the bottom floor of the bus. All the PCs run the same Touchdesigner project and talk to each other to coordinate what the characters are doing in the bus windows depending on which sensor is triggered.
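
The machine-to-machine coordination is handled inside Touchdesigner itself, but the idea is simple enough to sketch outside it. Below is a minimal Python illustration (not the production setup) of how one render machine might tell its peers which window was triggered, using the python-osc library – the IP addresses and the "/window/N/trigger" address pattern are assumptions for illustration only.

```python
# Minimal sketch of cross-machine coordination over OSC (not the real project).
# Peer IPs and the OSC address pattern are made up for illustration.
from pythonosc.udp_client import SimpleUDPClient

# Hypothetical peers: the other two render machines on the local network.
PEERS = [("192.168.0.11", 7000), ("192.168.0.12", 7000)]
clients = [SimpleUDPClient(ip, port) for ip, port in PEERS]

def broadcast_trigger(window_id: int, character: str) -> None:
    """Tell the other machines which window was triggered and which character
    video to switch to, so all projectors stay in sync."""
    for client in clients:
        client.send_message(f"/window/{window_id}/trigger", character)

if __name__ == "__main__":
    # e.g. the Kinect on this machine has just detected a wave at window 2
    broadcast_trigger(2, "boy_with_dinosaur")
```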

The characters are pre-made animations, with each video returning to the same start and end frame so that videos can change over seamlessly. Each projector covers two windows, so there are two characters per projector. The bus windows are covered in Contravision, which lets you see the projected image while still being able to see into the bus from outside and out of the bus from the inside.

Touchdesigner also allows us to projection map the videos onto the windows, making them fit perfectly in situ. The wave detection can tell when a hand is both raised and moving, and a threshold is set on the amount of that motion. Visual feedback is given via a hand icon, which fills up to show how close the motion is to the threshold. Once the threshold is passed, a wave has been detected and the video content changes: the character then tells you about an object in the museum. Because the system works by switching videos over, the characters can be swapped for new ones whenever we have them created.
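
The actual detection lives inside Louis's Touchdesigner project, but the logic as described above can be sketched in a few lines of plain Python. This is a rough, self-contained illustration only – the joint names, threshold, decay rate and simulated frames are all assumptions, not values from the real install.

```python
# Rough sketch of the wave-detection logic described above, outside Touchdesigner.
# Joint values, thresholds and the decay rate are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Joints:
    hand_y: float   # height of the hand joint from skeletal tracking
    elbow_y: float  # height of the elbow joint
    hand_x: float   # sideways position of the hand

class WaveDetector:
    def __init__(self, threshold: float = 1.0, decay: float = 0.05):
        self.threshold = threshold  # how much motion counts as a wave
        self.decay = decay          # fill level drains slowly if you stop waving
        self.level = 0.0            # accumulated motion, drives the hand icon
        self._last_x = None

    def update(self, joints: Joints) -> bool:
        """Feed one frame of tracking data; returns True when a wave is detected."""
        raised = joints.hand_y > joints.elbow_y          # hand must be up
        if raised and self._last_x is not None:
            self.level += abs(joints.hand_x - self._last_x)  # side-to-side motion
        self.level = max(0.0, self.level - self.decay)
        self._last_x = joints.hand_x

        if self.level >= self.threshold:
            self.level = 0.0
            return True   # trigger the character's story video
        return False

    @property
    def icon_fill(self) -> float:
        """Fraction (0..1) used to fill the on-screen hand icon."""
        return min(1.0, self.level / self.threshold)

if __name__ == "__main__":
    det = WaveDetector()
    # Simulated frames of someone waving a raised hand from side to side
    for x in [0.0, 0.1, -0.1, 0.15, -0.15, 0.2, -0.2, 0.2, -0.2]:
        if det.update(Joints(hand_y=1.5, elbow_y=1.2, hand_x=x)):
            print("Wave detected – switch video!")
        else:
            print(f"icon fill: {det.icon_fill:.2f}")
```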

Side of Lodekka bus, 6 windows on show all with characters projected into them: a young man with beard top left, a woman with shopping and headdress top centre, old man top right, old woman with newspaper bottom left, teenager with phone bottom centre and boy with dinosaur bottom right.


Research/Procurement 

I was given the task of researching the technical side as a whole to make sure this would work: most notably, getting the system to recognise waving as a trigger for content, making it work with the hardware available, and finding a developer who could pull it off.

This was a leviathan of a project to pull off in the timeframe, and we made use of some fantastic developments in interactive technology to achieve it: most notably Azure Kinect sensors and Touchdesigner, a real-time visual development platform that lets you create interactive installations with less code, using visual programming for quicker development.

It's a software package I've been interested in for a while, as it allows you to mock up interactives using sensors much more quickly than coding them: most of the code you would otherwise need to join up different devices and multimedia is built into the software. It's also not restrictive, in that you can still use Python within Touchdesigner to add functionality where there is no native way of achieving what you want.

The timeframe to get this project going was set at the peak of covid and was restrictive for numerous reasons: electronics supply chains were suffering, sensors couldn't be tested on more than one person, and restricted access to site affected testing and brainstorming of the project concept and logistics.

In particular, this made it hard to research whether a sensor robust enough to detect a wave was readily available to us, along with a developer who could work with it and hardware powerful enough to run the detection software. After getting hold of sensors, we decided the most robust option was to use Azure Kinects, which have solid skeletal tracking – which is what we use to detect waving.

Due to how niche this project was, finding a developer able to pull it off was difficult. Freelancers were definitely the way to go, as few companies are willing to take this on without doing the entire project (content as well as hardware), let alone without charging an astronomical fee (tens if not hundreds of thousands of pounds). This was probably the hardest turnaround I've done here so far, getting all of this to fit together and work.

We also had issues procuring computers with graphics cards powerful enough to run the Azure Kinect sensors (a key reminder that ordering a product does not guarantee you that product at the end, even after payment). Thankfully a computer was delivered before install week, months after putting in the order. It all pulled together in the end, and we got a fantastic developer, Louis d'Aboville, who has done numerous projects with Touchdesigner and did a fantastic job on this one.

Development/Installation 

Once the project was green-lit and the purchases made, Louis began the software development; his use of Touchdesigner has given us a changeable, upgradable and robust system for this complex project. Once development of the software was finished, we began installing the hardware in July, when the bulk of the install work was done. Alongside this, in July the content development was handed to Kilogramme, who did a stellar job working within the constraints the content had to meet to work with the system – particularly making the content the right length so that triggering the interactive feels quick, keeping continuity by using the same start and end frames, all while making the animation look convincing.

Because of the state of the pandemic at the time, planning a timeline that would fit around staff's other obligations was nigh impossible, so nailing down the install date took a while. Remobilisation work to get our sites reopened and fully running had to take precedence, and exhibitions such as Vanguard Street Art and Bristol Photo Festival also drained our capacity. So I would again like to thank both Louis and Kilogramme for the work done against an ever-changing set of dates for key work to be completed.

And as of October 2021 we launched the interactive to the public! 

Where do we plan to go from here? 

We don't plan to stop after the initial launch. As the system was designed with flexibility in mind, we want to use the analytics the developer integrated to figure out how to improve it: for instance, optimising the gesture recognition by walking the line between robustness and sensitivity to the various ways people wave. We can also use signage in the interactive area to give visual cues on how best to interact with the system, and add seasonal themes to the content – snow at Christmas, pumpkins at Halloween, eggs at Easter, and so on. On top of this, there is still more we could do with the system over time.

I believe this system shows what Touchdesigner is capable of in museums. It can cover most types of interactives that museums would make, while being a piece of development software that I think most technicians could pick up themselves over time. It has numerous uses beyond sensors, cameras and video content: it can manipulate content, projection map, and do real-time 3D, and all of these elements can be linked to each other in one project, in real time. A good video showing its use in museums can be seen here.

I have been learning the software myself and have been able to pull off some basic interactivity using the Azure Kinect and other sensors. In time, I aim to build on this knowledge and apply it in the museum where and when possible, to pioneer new interactivity on our sites.

Bus Window with sensor in front. Window has a projected character, a little boy with a dinosaur toy.

A Special Thanks to Bristol Museums Development Trust and Arts Council England for making this possible.

Arts Council England Logo
Bristol Museum Development Trust Logo

Google Arts & Culture: an overview…also, what is it?

I have been working on the development of the Bristol Museums partner page with Google Arts & Culture for close to two years, and in October it finally went live!

Screenshot of the Bristol Museums Google Arts & Culture partner page. Header image is a painting of the Clifton Suspension Bridge and highlighted are the Online Exhibits.

Some background info about my involvement

I started working on this as a trainee on the Museum Futures programme in January 2020; it was actually one of the first projects I participated in. Originally designed as a partnership with South West Museum Development, the idea was that we would develop a page for Bristol Museums and then bring this (and the process guides) to smaller museums as a way to support getting their collections online. However, it was mutually decided that this process was more convoluted than anyone first assumed, and that didn't end up happening.

As of April 2021, I have continued to work on this in my current role as Digital Collections Content Coordinator – a position funded by the Art Fund – as part of a larger project to make our collections accessible online. Thanks Art Fund!

This project has not necessarily gone to plan. We originally aimed to launch at some point in summer 2020. We were then offered the chance to be part of the Google Arts & Culture Black History Month 2020 campaign if we were ready to launch by that October. While we initially worked towards meeting the deadline, we ultimately decided against going ahead with this plan, as it would have meant rushing, and we felt that these stories deserved much longer preparation time than we could give them at that stage. We also felt that we didn't need to be part of the campaign in order to tell these stories.

What is Google Arts & Culture?

Google Arts & Culture is still fairly new and unknown territory, and there seem to be a number of (understandable) misconceptions about what its purpose is. Is it social media? Is it an alternative to Collections Online? Is it a blog? Can we signpost to events and the shop?

No, sort of but not really, no and no. 

This doesn’t really sound appealing, does it?

The best comparison we can make is to a Collections Online service, but less extensive. And it’s shared by lots of other organisations. And also other organisations can use our images. (Yikes! But bear with me.)

It is described as an online platform through which the public can view high resolution images of objects in museums and galleries. This is accurate, does what it says on the tin. 

You might know Google Arts & Culture from the Art Selfies trend (which I would recommend checking out if you’re not easily offended, as the comparisons are usually NOT KIND) or the chance to zoom in reeeeeally close to Rembrandt’s The Night Watch. These are two of the platform’s jazzy features that haven’t really been seen anywhere before, at least not in the same way. 

Why do we want to use it?

Google uses incredibly sophisticated software to automatically attach these features to uploaded content, which is good for us because it means we don't have to do anything special to get them to work for our objects. By using the highest-quality TIFFs we have for the objects we've selected, we can zoom in to brushstroke level on these works and use attention-grabbing features like an interactive timeline.

Image of the interactive timeline on the Bristol Museums Google Arts & Culture page. Date range starting at 500 AD and ending at 1910

I mentioned before that other people can use our images. This sounds like a big no-no, but bear with me (again). 

When creating an exhibition or a story you can use content that you've previously uploaded, but you also have the opportunity to use images shared by other organisations. This is often used when an organisation is creating a story about a specific subject and doesn't have enough content or images of its own to contextualise it: they can use images that have been uploaded to the platform previously. As all images already have clear rights acknowledgements and link back to the partner page they belong to, this doesn't breach anything nasty.

The benefit of this is that the reach one image could potentially have is boundless, and thus, the reach of our page also has the potential to be boundless.

What do we do if they kill it?

Well, it wouldn't be ideal. We wouldn't lose much content, and we wouldn't lose any data, as this all came from our CMS anyway. We don't rely on this to attract the bulk of our audiences and we've approached it as a bit of an experiment. It would be a shame to lose it, but it's so new that I honestly can't say how much of an impact it would have, so I suppose we'll just have to wait and see.

What has the process been to make it a thing here?

LONG. This process has been full of learning curves and a lot of troubleshooting. There is much to be said for data consistency and quality at internal database level when working on projects such as this. Arguably, one of the longest processes is assessing groups of content to ensure that what you’re including meets data requirements. But it has been fun to experiment and uncover a process that is now…somewhat…streamlined – which looks a bit like this:

  1. Find cool things on the database
  2. Export cool things using a self-formatting report that you've spent weeks developing in Visual Basic (groan)
  3. Find images of cool things and group those
  4. Export images of cool things using another self-formatting report that you've spent weeks developing in Visual Basic (more groaning)
  5. Stitch together image metadata and object metadata (see the sketch after this list)
  6. Add in descriptions and dimensions data manually, because of data quality issues and duplicates that you have to assess on a case-by-case basis
  7. Upload the fully formatted and cleaned dataset to Google Drive as a Google Sheet
  8. Add the rows from the new dataset into the Google Sheet that you've been provided with, because instead of uploading individual CSVs (which it says you can do, but this option does not work) you have to use one spreadsheet and refresh it every time you make additions from the Cultural Institute (the Google A&C back end)
  9. Upload images to the Google Bucket
  10. Refresh the spreadsheet from the Cultural Institute
  11. Fix all of the errors that it comes up with, because it's a buggy system
  12. Refresh again
  13. Repeat steps 11 and 12 as needed
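
For anyone curious what step 5 looks like in practice, it is essentially a join on a shared identifier. Here's a rough Python/pandas sketch of the idea – the file names and column names ("object_number", "filename") are placeholders, not the actual fields our Visual Basic reports export.

```python
# Rough illustration of step 5 – joining image metadata onto object metadata.
# File and column names are placeholders, not our real report fields.
import pandas as pd

objects = pd.read_csv("object_metadata.csv")  # one row per object
images = pd.read_csv("image_metadata.csv")    # one row per image file

# Left-join so every selected object keeps its row even if an image is missing,
# which flags gaps to fix before upload.
combined = objects.merge(images, on="object_number", how="left",
                         suffixes=("", "_image"))

missing = combined[combined["filename"].isna()]
if not missing.empty:
    print("Objects still needing images:", missing["object_number"].tolist())

combined.to_csv("gac_upload_sheet.csv", index=False)
```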

So…not exactly streamlined but in fairness, I have ironed out all of the kinks that I am capable of ironing out. The systems designed by Google are more archaic in practice than I was anticipating (sorry Google, no shade) and the small yet very irritating tech issues were real roadblocks at times. And yet, we persevere.

There will always be a level of manual work involved in this process, as there should be when it comes to choosing images and reviewing content, but I think that this does highlight areas where we could do with giving our database some TLC – as if that’s an easy and quick solution that doesn’t require time, money and other resources…

We aren’t sure what the future of the Bristol Museums partner page looks like just yet, especially with a few projects in the works that might help us bridge some of the gap that Google Arts & Culture is helping to fill. At the very least, I’ve learned a fair bit about data movement and adaptability.

Do have a look! This was a labour of love and stubbornness. Maybe let us know what you think?

This work was made possible by a Respond and Reimagine grant from The Art Fund

Improving our Users' Experience and Data-driven Decisions: Digital User Research at Bristol Culture

Introduction


I'm George, a new member of the team on a one-year fixed-term contract, coming from a background in website UX and other technical IT experience. My former role had a strong focus on web standards and accessibility, with a lot of ownership over what we publish while following guidance. I've taken a keen interest in how we meet user needs with our digital and online web content, and why. One of the easiest ways to really know what our users want is to measure our data and analyse its successes.

I was very grateful, when I joined the team, to have the opportunity to be part of Culture24's Let's Get Real project (supported by Arts Council England), and was gladly thrown into a positive working community of other museums and professionals who assisted me and complemented my existing knowledge. In this blog I'll cover some of the things I aimed to achieve during the project, how I achieved them, and how they are now being used in our daily activities.

The project and my experiments led to a discovery of which data is important to our organisation, and paved the way for thinking about how we can use this data to consistently review and improve our digital output.

To begin, I created a dashboard using Google Data Studio to measure all our web stats and experiments. At first I wasn't sure it was easy to read at a glance, so I kept improving on my first version, getting feedback and testing from the business and my team.

Web stats dashboard: Screenshot of the current version of our data dashboard

Now with a way to measure my experiments, I started on my first idea: adding another static header button for newsletter sign-ups (and measuring its effectiveness at converting site users into newsletter sign-ups).

Website header CTA buttons: addition of a newsletter option, which costs the user nothing – sign up and get notified of everything we do: events, shop products and venue reopening times.

During this experiment I felt I could do more – I wanted something more tailored to the user and the page they were viewing – so I looked at using page-targeted pop-ups for my second experiment.

Building a dashboard to measure success

Before and during the experiments, we needed to make sure we could visualise the data from our results.
We could of course use Google Analytics directly, but we found this was not time-efficient and involved many different routes to find our information. Developing one Google dashboard allowed us to build on and expand to the needs of the organisation through data-driven decisions and evidence.

From a whole domain and its page views, to where users come from before landing on the site (referrals), a timeline graph of noticeable hits, and a map showing which regions, countries and cities are viewing which pages the most and for how long – a bespoke dashboard could cover all of this. And it shouldn't require you to be an analytics genius.
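
The dashboard itself is built in Google Data Studio, which connects straight to Google Analytics with no code at all. For the curious, though, here is a rough sketch of the same kind of query pulled programmatically with the (Universal) Analytics Reporting API v4 in Python – the view ID and key file path are placeholders, and this is an alternative route rather than how our dashboard actually works.

```python
# Rough sketch only – the real dashboard is Data Studio with a direct GA
# connection. VIEW_ID and the service-account key file are placeholders.
from google.oauth2 import service_account
from googleapiclient.discovery import build

KEY_FILE = "service-account.json"   # placeholder path
VIEW_ID = "123456789"               # placeholder Google Analytics view ID

creds = service_account.Credentials.from_service_account_file(
    KEY_FILE, scopes=["https://www.googleapis.com/auth/analytics.readonly"])
analytics = build("analyticsreporting", "v4", credentials=creds)

# Pageviews and average time on page, broken down by page and country.
response = analytics.reports().batchGet(body={
    "reportRequests": [{
        "viewId": VIEW_ID,
        "dateRanges": [{"startDate": "2019-09-01", "endDate": "today"}],
        "metrics": [{"expression": "ga:pageviews"},
                    {"expression": "ga:avgTimeOnPage"}],
        "dimensions": [{"name": "ga:pagePath"}, {"name": "ga:country"}],
    }]
}).execute()

for row in response["reports"][0]["data"].get("rows", []):
    page, country = row["dimensions"]
    pageviews, avg_time = row["metrics"][0]["values"]
    print(country, page, pageviews, avg_time)
```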

I tried several different designs initially, but aimed to keep it simple, user-friendly and readable at a glance. Once the design elements and graphs were in place, it was easy enough to create more of these and filter them to a specific organisational need. One notable website I added has exceptionally interesting data for our organisation: discoveringbristol.org.uk.

As the dashboard below shows, it has users across the world, a great average reading time and approximately 5 million page views. However, we feel it could use an update to reflect more modern times and tech.

Web stats dashboard: 5million hits and worldwide users on discoveringbristol.org.uk

But what more can we do with this data? The pageviews show a clear need to provide more historical content, not just to local or national viewers of our site but to a worldwide web audience.

Understanding our users and what they want

The timeline graph below shows the most viewed events web pages since September 2019.
Do note the almost flatline number of page views from March 2020 – when the COVID lockdown began.

Timeline graph of pageviews: A noticeable impact due to covid on our events pages

As a result of our venues closing, we could predict that visitor donations, online ticket sales, in-venue shop sales, and the number of visitors who relied on our noticeboards or banner promotions would decrease.
 
The opposite effect is that there has been more demand for a stronger web presence: offering online events and exhibitions, keeping our audiences up to date with what we're doing, and so on.

It's this thinking that gave me the idea, as part of Let's Get Real, to experiment with converting users into donations, shop sales or newsletter sign-ups, as this could help offset the impact that COVID has had on our organisation.

Experiments

Experiment one: Adding ‘Newsletter’ button alongside 2 other header buttons

Problem: The newsletter underperforms as an 'outbound event click' in Google Analytics compared to the Shop/Donate buttons
Solution: Add an extra button alongside the static buttons in the website header
Expected outcome: Increase newsletter clicks and grow the mailing list for future promotions
Considerations: May decrease clicks on the other two buttons, as there are now three options
Result: Newsletter clicks are performing better than donation clicks

Summary:

We had two CTA buttons, for Shop and Donate. I suggested adding a 'Newsletter' button link, as shown:

Header buttons: Shop/Donate/Newsletter – We needed to offer a 0 cost option to our users

We implemented Google Tag Manager tracking of all these event clicks from July 2nd.
At the same time, we added the Newsletter button to our live site, as shown above.
The first two buttons involve a cost – buying something or donating money – and I felt we could be more inclusive.

For users of certain demographics – for example those who are younger, disadvantaged or on lower incomes – the very least we could offer is a way to be notified about our work via email newsletter. This button should offer just that: a way to interact and stay involved with us.

 Web stats dashboard: Header Buttons: Shop/Donate/Newsletter – Area and Pie (page) chart

The above screenshot shows that our users prefer to go to our shop first, then click our newsletter link, and lastly donate. The table shows which pages had the most clicks on which buttons.

Here is our second page of the same report, with different data charts to help visualise the results:

Web stats dashboard: Header Buttons: Shop/Donate/Newsletter – table and timeline chart
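
Data Studio does this aggregation for us, but the same comparison could be reproduced on a raw export of the click events. A quick pandas sketch is below – the CSV and its column names are assumptions based on a typical event export, not our actual report.

```python
# Sketch of the same comparison done outside Data Studio, on a hypothetical
# CSV export of the header-button click events (column names are assumptions).
import pandas as pd

events = pd.read_csv("header_button_clicks.csv")
# Assumed columns: event_label (Shop/Donate/Newsletter), page_path, total_events

# Total clicks per button – which CTA performs best overall?
print(events.groupby("event_label")["total_events"].sum()
      .sort_values(ascending=False))

# Which pages drive the most clicks on each button? (top 5 per button)
per_page = (events.groupby(["event_label", "page_path"])["total_events"]
            .sum()
            .sort_values(ascending=False)
            .groupby(level=0)
            .head(5))
print(per_page)
```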

Conclusion:

We have seen a major increase in newsletter sign ups since I joined in March and made these changes:

Mailchimp newsletter: Audience growth 30-11-20 to 30-11-21


Given the year of lockdowns and restrictions on visiting public venues, combined with data such as large numbers of page views of our venue opening times before announcements that restrictions would be dropped, sending users direct updates to their mailbox seemed like a better way to notify attendees.

I feel this was better than having them navigate to our site to check updates, as those updates have been changing frequently and could be delayed, disrupted or out of date.

These buttons sit within the header area – the first thing seen before reading content – and are present across all pages of the site, so the opportunity to sign up is there for the user whenever they want, regardless of the content they are viewing.

Since adding the button, we have seen an increase in clicks through to the newsletter sign-up, which could be a user choosing, after reading a page, to intentionally sign up and stay notified of and updated on the museums via email newsletter.

This is a positive safeguard to prevent them missing out on further information and relevant news, which could influence whether they attend the museum in future.
As this was a static button, it gave me the idea for my next experiment: page-targeted pop-ups.

Experiment two: Page targeted pop ups for The Colston WHN exhibition/survey

Problem: No ability to target users with a specific campaign goal or CTA that prompts them while they're reading – static buttons are already in place for our more generic targets
Solution: Find and install a flexible page pop-up tool and launch it, testing alongside a relevant and suitable campaign/topic – in this case the Colston exhibition and Black History web pages
Expected outcome: Users see a pop-up related to the topic of the page and click on it because it's relevant to their interests
Considerations: Pop-ups are generally considered an annoyance/disruptive/a negative impact
Result: Effective – readers get suggested an action related to the content they have chosen to view.
This was successful because the target pages had content relevant to the action suggested in the pop-up

Page targeted popups: CTA prompt used for TCS: What Next survey, exhibition and campaign

Summary:

We needed a way to drive existing readers of our Black History content to the Colston statue survey.
I researched different ways to prompt users and looked at a plugin called Boxzilla pop-ups.
It seemed clear that users reading our Black History content would take the survey and offer good, valuable feedback – if it was suggested to them.

But how would users who land on our history pages know about the Colston survey or exhibition if they were just on the article or story they had chosen to read? The answer: a box that prompts them during their reading, with a well-phrased call-to-action appearing halfway down the page.

We also needed to measure this, which the plugin could offer. It tied in with the Google dashboard, where we can see how many pop-ups were shown, interacted with and dismissed.

Popup dashboard: Measuring success on showed/interaction/dismissed data

First we needed a list of pages to target; we then set the box pop-up to be shown on all of those page URLs when the user scrolls past 55% of the page. By default the box appears at the bottom right of the screen. We can also set an expiry for the pop-up if necessary, which is useful if it is running alongside a campaign – for example a donations campaign or a shop sale.

Popup settings: Where and how we can set the popup prompts

These parameters are all pretty self-explanatory and offer the option to make the pop-up more prominent in the user's viewing experience if necessary. However, to avoid intruding on the user and creating a bad experience, it typically seems better to take a 'less is more' approach.

Popup dashboard: Timeline total of actions taken on the popup for Colston survey campaign

Conclusion

Generally speaking, as long as the pop-up call-to-action is directly relevant to the page content, the user is more likely not to ignore or close the prompt. The longer the exhibition went on, the more the pop-up activity increased too.

In the last part of the campaign we saw a massive increase, which is likely linked to general public awareness of the survey, alongside the organisation's other comms and promotion plans around it. The 'last chance' message was likely more encouraging too.

Popup dashboard: Total interactions on the Colston Survey, July 14 to Oct 2
Survey page data: Pageviews and referral data

Compare this to the more general, static newsletter button CTA, which sits at the top of the page and is then scrolled past (and likely missed). A pop-up that prompts for something generic like the newsletter is more likely to be ignored: it can feel intrusive because it has no direct relevance to the content being read, and the user has already seen it as a static button. Also, pop-ups do not return unless the page is refreshed.

Popup dashboard: Total actions taken on the popup of Newsletter – non-specific campaign

I think a newsletter pop-up can be useful, but if the same users see it repeatedly it could become intrusive and an annoyance. I believe that, with more testing, it can be set not to show pop-ups to those already signed up to the newsletter (via Mailchimp integration).

Popup: Used for Black History Month to drive users to events/feedback and our decol statement

For now, alongside other campaigns ongoing within the organisation, this has great potential to assist them by directing users already on our website to other areas, and by converting readers into actions.