QR codes! And labels! And ongoing research into on-site audience participation! (…Oh my)

If you didn’t know already, Bristol Museum & Art Gallery is home to a permanent gallery exploring the history of Egypt. This gallery presents information about the objects on touchscreen-enabled kiosks, which were the height of in-gallery audience interaction when they were installed.

As we re-opened in September 2020 after lockdown, the use of touchscreens had obviously been axed. The problem was that they actually hosted most of the information in that gallery, so it was necessary to find an alternative. 

The fabulous in-house Digital Assistant team were able to develop a site from scratch using WordPress, and we were able to collate the original content and shove it onto this new platform. Pages were organised by kiosk, and were available via NFC and QR code access points printed on stickers and stuck onto the disabled kiosk screens. Long story short – and this is very much a long and important story cut short and not explained very well – an entirely digital alternative was created and the problem was solved.

This was a huge achievement, but not really sustainable if we face a similar conundrum in future without the required time and resources – which is exactly what happened with the Bristol Photo Festival temporary exhibitions.

We suddenly needed to provide digital labels, accessible in the gallery, containing biographies for each artist/photographer. Unfortunately, we had less than half the time and resources that we did with the Egypt gallery. And this was for a temporary exhibition rather than a permanent display – very different circumstances.

Enter: Exhibitions Online.

We have a dedicated WordPress site that we do indeed use for Exhibitions Online. It runs on preset templates that we can tinker with to an extent; there’s not a whole lot of creative freedom, but it does the job it was designed for.

We’ve used this site in a gallery setting once before; the online exhibition for Pliosaurus was available as an interactive a few years ago.

After doing some more tinkering myself, I came to the conclusion that we could take the templates out of their original context and create something that would work for these new digital labels in a fraction of the time, and without having to build or buy something new. Win/win.

By creating individual pages without linking them to each other or to a parent page (like a landing page), we could have a number of standalone pages (65, to be precise) that are technically hosted on a website that, from an audience perspective, doesn’t really exist.

By doing this we could assign a QR code to each page that could be printed on a label and installed in the gallery. These pages aren’t available anywhere else (unless you look really, really hard for them) and are intended solely for mobile use while visiting the exhibitions. It turned out to be a really simple solution to something that was originally a bit daunting.

The other fundamental thing that we needed was a bunch of QR codes and a way of keeping on top of them. Jack Evans, Digital Assistant, developed a system that would both generate QR codes and give us more flexibility and control over the now-abundant number of them in our galleries – but he can explain this better than I:

“We realised that the demand for QR codes in the gallery was going to increase and that they would be in place for at least a year, if not permanently. We know that QR codes can be generated for free, but we needed a system where QR codes could be modified after printing.

I could not find a sustainable, cost-effective system, and we were opposed to basing a permanent fixture of our galleries on a system we don’t have full control over. Therefore, I created a system based on Python scripting and a bit of JavaScript that allows us to create QR codes whose destination can be changed after printing, and uses Google Analytics to see how in demand particular content – and the system as a whole – is.”
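The exact implementation isn’t published here, but the key idea – a printed QR code whose destination can change – is an indirection: the code encodes a stable URL on a server we control, and the server issues a redirect to the current destination. A minimal sketch of that pattern, with entirely hypothetical slugs and URLs (the real system’s naming and analytics wiring will differ):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical mapping of printed QR slugs to current destinations.
# Editing this mapping changes where a printed code leads - no reprint needed.
REDIRECTS = {
    "egypt-kiosk-1": "https://exhibitions.example.org/egypt/kiosk-1",
    "photo-festival-label-42": "https://exhibitions.example.org/photo/label-42",
}

def resolve(slug, redirects=REDIRECTS):
    """Return the current destination for a printed slug, or None if unknown."""
    return redirects.get(slug.strip("/"))

class RedirectHandler(BaseHTTPRequestHandler):
    """Each printed QR code encodes a stable URL like /qr/<slug>."""
    def do_GET(self):
        target = resolve(self.path.removeprefix("/qr/"))
        if target:
            self.send_response(302)  # temporary redirect to the live destination
            self.send_header("Location", target)
        else:
            self.send_response(404)
        self.end_headers()

# To serve: HTTPServer(("", 8080), RedirectHandler).serve_forever()
```

Because every scan passes through the redirect endpoint, each hit can also be logged or tagged for analytics before the visitor lands on the content page – which is how usage of individual labels can be measured.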

This has been a helpful tool not only for this project, but also with the other projects where we have needed to implement QR codes since. The ability to both assess use and amend links after printing gives us a whole new range of possibilities when it comes to improving audience in-gallery experience.

This gallery opened alongside the rest of the museum on the 18th of May, so we’ve had a fair amount of time to collate data that tells us how our audience have been using these digital labels and what their experience has been. This data has informed us that our audiences…have barely used them. Oh.

Of the 174 people who have answered the question “Did you use the QR codes in the labels next to the photos on display?” on our ongoing Audience Finder survey, only 14% (25 people) said yes (as of writing).

Not exactly the result that we were hoping for. Although, not sure how much of a surprise this is. Back in 2018 our User Researcher Fay posted a blog about how we use QR codes, which points out that QR codes are only really used when facilitated. This more recent evidence shows that they still aren’t really being used without facilitation, even in a post-Covid (but-still-in-Covid?) world overrun with them! Hmm…

Bonus round!

Another instance of using this platform for a QR code triggered in-gallery experience is the additional content that we provided as part of the Netsuke: Miniature masterpieces from Japan exhibition. Netsuke are small and very intricately carved figures, originally used as toggles for pouches so that they could be attached to clothing. In collaboration with the Centre for Fine Print Research, UWE Bristol we were able to showcase two of the netsuke as 3D models, hosted on Sketchfab and embedded into the Online Exhibition.

In the before times, we wanted to have 3D-printed models as a sort of handling collection so that our visitors, and especially our younger visitors, could further explore the objects on display – which obviously couldn’t happen in a Covid-familiar world. Instead, we made the page featuring the 3D models available in-gallery via QR code.

One of the 3D models available via the Online Exhibition and in the gallery.

This work was made possible thanks to support from the Art Fund

Michael Simpson, Digital Intern- Bristol Museums

About Me

I am an MA student studying Museum Cultures at Birkbeck, University of London. I have an academic interest in Black history and telling the stories of underrepresented groups and communities. Part of the course requires me to undertake a work placement (remotely, in this case), and there was an opportunity to be a Digital Intern for Bristol Museums.

The project I am involved in – Discovering Bristol

The main focus of my work placement has been finding ways to update the content of the Discovering Bristol website. The first thing I did was to conduct an audit of the site using Google Sheets, to create summaries of the pages in the Slavery Routes section of the website. This first section of the site is approximately 85 pages of content. I also made note of the embedded images on the site to check for image quality and copyright.

What have I found?

The text: content-heavy, repetitive and outdated language.

For example, the ‘development of the plantation system’ page: https://www.discoveringbristol.org.uk/slavery/routes/places-involved/west-indies/plantation-system/

This is an excerpt:

Why is it problematic?

It does not explicitly state that these were the views of Europeans at the time and not the narrator’s. More importantly, it treats Africans as passive victims rather than highlighting the explicit intention to capture and enslave Africans based on a growing racialised system of labour.

The images – lack of historical context

Why is it problematic?

Images such as this are part of a British art historical style known as the tropical picturesque. The image presents an idealised version of plantation life. It also implies the enslaved accepted their position and diminishes the need for abolition if the brutality of plantation life is concealed.

My approach to updating content

I watched the V&A’s talk on the online platform Culture Geek to find out how they maximise their user experience, focussing on the ‘Explore their Collections’ function. The main findings were:

  • They changed their approach because in 2009, when it was first created, they realised it was too self-contained and not connected enough to the main V&A website.
  • They used interaction modes ‘Understand’, ‘Explore’, ‘Develop’ and ‘Research’ to understand how different users have different user needs. The user modes ‘Explore’ and ‘Develop’ were seen as priorities because of Covid-19.

What did they change?

They added a ‘You may also like’ function, recommending similar objects. The reasons behind this:

  • They wanted to connect ‘objects with objects’
  • E-commerce influence – the ‘recommended products’ function
  • Influence of BLM – heightened awareness of offensive language; ‘content warning’ filters were added to their search engine

I have also looked into Liverpool Museums who have created online stories that have a similar structure in terms of the places and events mentioned. I think it is possible to streamline content in this way compared to the existing site.

Format of the old site

  • The old site consists of clicking through a series of pages linked in chronological order; however, it often repeats information, which makes it difficult to navigate quickly.
  • The new site would use long-form scrolling, like this WordPress blog, which should provide easier navigation, though it would need an awareness of the order of information. The structure would also reduce excess content.

What it could look like (overview of structure)

Summary – What’s next?

Exploring engagement in a digital museum: interning at Bristol Culture

When Colston was toppled on the 7th of June 2020, the Bristol Museum website received a record number of pageviews. In contrast to the lower engagement during prior months of the pandemic, this was pretty striking. Understandably, not many people were checking the museum Opening Times or What’s On pages as museum doors were closed in March 2020 because of coronavirus. However, the digital museum was still ‘open’ – the museum websites are packed with content from Online Collections, Stories, Blog posts and now online events. 

My User Research Internship with Bristol Culture was born from a need to analyse how people were engaging with the museum websites throughout the pandemic. Having recently finished my Geography degree at the University of Bristol, where I wrote my dissertation on the inclusivity of gender-neutral toilets in university and art spaces in Bristol, the timing of this internship was perfect. I have always wanted to work in the culture and creative industry, so I was keen to lead this research project. 

Google Analytics breaks down the user journey into Acquisition, Behaviour and Conversion. I was interested in understanding more about how people found the museum’s web pages, so Acquisition seemed a good place to start the research. Where were audiences coming from: organic searches? Social media? Newsletters? This varied across different sections of the website – have a look here.

It was difficult to look at Acquisition alone without making assumptions or querying the Behaviours of these users. How long were they spending on these pages and were there any themes linking the content they were accessing? Next, I explored the Behaviour data collected by Google Analytics.

Although users flooded to the website to explore Bristol’s Black British History and involvement in the Transatlantic Slave Trade, a deeper investigation suggested that as Blogs (as opposed to Stories), this content is popular throughout the year. 

Online exhibitions showed Japanese exhibits were very popular, but users were dropping off before entering one of the three exhibits. Further testing of the online exhibition platform is needed to analyse the significance of the order the exhibits appear on the homepage. For example, a revolving showreel could be useful here.

The final stage of my research journey focussed on the relationship between online events and individual donations made. There was another spike in June 2020 with the donations data – this could be related to Colston, online events or a combination, as there were 8 online events in the final 2 weeks of June, some of which sold out. January 2021 was another record high month for donations.  

Notwithstanding everything the pandemic has thrown at the Arts, Bristol Museums have delivered a hugely ambitious online programme of week-long festivals, workshops, lectures and events. In total, there have been 120 events in the past 12 months. A highlight of mine was the panel discussion on the history of gender segregation in sport during LGBTQ+ History Month, because the voices of trans athletes are so often ignored. This blended data suggests that there is a relationship between online event dates and donations.

Leaving Bristol Culture will be sad as I feel like I’ve only just scratched the surface with user research and the possibilities of Google Data Studio. As with much of coding, I had a love-hate relationship with Google Data Studio as it was often challenging to communicate how I wanted to display data using this tool.

I am coming away from the internship with a constant curiosity about the data collected by all other websites I visit and how my user journey is informing their analytics. From my short 3 months at Bristol Museums (working from home 2 days a week), I’ve improved my quantitative data manipulation and analysis skills, learnt the importance of feedback forms, and how to engage a range of audiences with the museum collections and events, during a pandemic. 

CV19 – Digital Battle Plans


Bristol Culture receives an average of 2.5 million yearly visits to its websites (not including social media). Additionally, we have different demographics specific to each social media channel, which reflect the nature of the content and how users interact with the platform features offered.

Since March 13th, visits to bristolmuseums.org.uk have fallen off sharply, from a baseline of 4,000/day to under 1,000/day as of 6th April. This unprecedented change in website visitors reflects a large-scale change in user behaviour which we need to understand – presumably people are no longer searching to find out about visiting the museum in person, due to enforced social distancing measures. It remains to be seen how patterns of online behaviour will change in the coming weeks; however, it appears we have a new baseline which more closely matches our other websites, which are more about museum objects and subject matter than physical exhibitions and events.

You can explore this graph interactively using the following link:


Before CV struck

The top 10 most visited pages in January on bristolmuseums.org.uk feature our venue homepages, specific exhibitions and our events listings

online stats January 2020

During Lockdown

From March-April we are seeing visits to our blog pages, our online stories and our collections pages feature in the top 10 most visited.

online stats March 16th-April 9th

Digital Content Strategy

Internally, we have been developing a digital content strategy to help us develop and publish content in a more systematic way. The effect of CV-19 has meant we have had to fast track this process to deal with a large demand for publishing new online content. The challenge we are faced with is how to remain true to our longer-term digital aims, whilst tackling the expectations to do more digitally. In practice, we have had to rapidly transform to a new way of working with colleagues, collaborating remotely, and develop a new fast track system of developing and signing off digital content. This has required the team to work in different ways both internally, distributing tasks between us, but also externally across departments so that our content development workflow is more transparent.

Pre-quarantine online audiences

Online we follow our social media principles: http://www.labs.bristolmuseums.org.uk/social-media-principles/

A key principle of our audience development plan is to understand and improve relationships with our audiences (physical and digital). This involves avoiding the idea that everything is for ‘everyone’ and instead recognising that different activities suit different audiences. We seek to use data from a range of sources (rather than assumptions) to underpin decisions about how to meet the needs and wants of our audiences.

Quarantine online audiences

Since the implementation of strict quarantine measures by the Government on Tuesday 24th March – audiences’ needs have changed.  

  • Families at home with school-age children (4 – 18) who are now home-schooling during term-time.
  • Retired people with access to computers/smart-phones who may be isolated and exploring online content for the first time.
  • People of all ages in high-risk groups advised not to leave their homes for at least the next 12 weeks.
  • People quarantining who may be lonely/anxious/angry/bored/curious or looking for opportunities to self-educate. 
  • Possible new international audiences under quarantine restrictions.

See this list created anonymously by digital/museum folk: https://docs.google.com/document/d/1MwE3OsljV8noouDopXJ2B3MFXZZvrVSZR8jSrDomf5M/edit

What should our online offer provide?


Whilst our plummeting online visitor numbers overall tell us one story, we now have data telling us there is a baseline of people who visit our web pages regularly, and this audience needs consideration: potentially a new audience with new needs, but also a core group of digitally engaged visitors who are seeking content in one form or another.

Some things we need to be thinking about when it comes to our digital content:

  • What audiences are we trying to reach and what platforms are they using? 
  • What reach are we aiming for and what are other museums doing – we don’t necessarily want to publish content that is already out there. What’s our USP? 
  • What can we realistically do, and do well with limited resources?
  • What format will any resources take and where will they ‘live’? 
  • What’s our content schedule – will we be able to keep producing this stuff if we’ve grown an audience for it once we’re open again? When will we review this content and retire if/when it’s out of date?
  • We need to be thinking about doing things well, or not doing them at all – social media platforms have ways of working out what good content is, and will penalise us if we keep posting things that get low engagement. A vicious cycle.
  • We want to engage with a relevant conversation, rather than simply broadcast or repurpose what we have (though in practice we may only have resource to repurpose content)

Submitting ideas/requests for digital content during Quarantine period

We are already familiar with using Trello to manage business processes, so we quickly created a new board for content suggestions. This Trello-ised what had been developing organically for some time, but mainly in the minds of the digital and marketing teams.

Content development process in Trello

STEP 1: An idea for a new piece of digital output is suggested, written up and emailed to the digital team, and then added to the Digital Content Requests Trello.

STEP 2: The suggestion is then broken down and augmented with the information detailed below, added as fields to the Trello card.

STEP 3: This list of suggestions is circulated amongst staff on the sign off panel, for comments.

STEP 4: The card is either progressed into the To Do list, or moved back to the “more info needed / see comments” list.

The following information is required in order to move a digital content suggestion forward:

Description: Top level description about what the proposal is

Content: What form does the content take? Do we already have the digital assets required or do we need to develop or repurpose and create new content? What guidelines are available around the formats needed?

Resource: What staff are required to develop the content, who has access to upload and publish it?

Audiences: Which online audiences is this for and what is their user need?

Primary platform: Where will the content live, and for how long? 

Amplification: How will it be shared?

Success: What is the desired impact / behaviour / outcome?



New and emerging content types: The lockdown period could be an opportunity to try a range of different approaches without worrying too much about their place in the long term strategy.

Online events programme

Now that we can only do digital-or-nothing, we need to look at opportunities for live-streaming events. With no physical audience, how do we build a digital audience big enough to know about and be interested in this if we did go down that route? Related to the above: online family/adult workshops – a lot of this is happening now, but are they working, and how long will people be interested?

Collaborating with Bristol Cultural organisations

With other cultural organisations in Bristol facing similar situations, we’ll be looking to collaborate on exploring:

  • What is the online cultural landscape of Bristol?
  • Collaborative cultural response to Corona
  • A curated, city wide approach
  • Working with digital producers on user research questions
  • Similar to the Culture ‘Flash Sale’
  • Scheduled content in May

Arts Council England business plan

These projects are at risk of not being delivered – can digital offer a way to do them differently?

Service / Museum topical issues

How can we create an online audience to move forward our decolonisation and climate change discussions?

Family digital engagement  

We’ll be working with the public programming team to develop content for a family audience

Examples of museum services with online content responding well to quarantine situation

a) they have a clear message about the Corona virus situation

b) they have adjusted their landing pages to point visitors to online content.

Examples of museums with good online content generally

Recent Guardian article by Adrian Searle lists museums for digital visits https://www.theguardian.com/artanddesign/2020/mar/25/the-best-online-art-galleries-adrian-searle


The Development Team typically manages around £12,800 in donations per month through ‘individual giving’ which goes to our charity, Bristol Museums Development Trust. This is from a variety of income streams including donation boxes, contactless kiosks, Welcome Desks and donations on exhibition tickets. Closure of our venues means this valuable income stream is lost. To mitigate this, we need to integrate fundraising ‘asks’ into our online offers. For example, when we promote our online exhibitions, ask for a donation and link back to our online donation page. 

The Development Team will work with the Digital and Marketing teams to understand plans and opportunities for digital content and scope out where and how to place fundraising messages across our platforms. We will work together to weave fundraising messages into the promotion of our online offers, across social media, as well as embed ‘asks’ within our website. 

Next Steps:

Clearly, there will be long-lasting effects from the pandemic, and they’ll sweep through our statistics and data dashboards for some time. However, working collaboratively across teams, responding to change and using data to improve online services are our digital raison d’être – we’ll use the opportunity as a new channel for 2020 onwards instead of just a temporary fix.

Snapshot of digital stats before the pandemic

My Experience as a Digital Assistant with Bristol Museums

My name is Steffan Le Prince. I am a Digital Assistant, primarily based at M Shed, and I started the role two months ago. My main focus is fixing the digital interactives in the museum; I also support colleagues with technical problems and help set up and roll out new digital tech and exhibitions here. Lately I’ve been working on new digital signage using the Signagelive content management system.

Having lived in Bristol since I was six, it’s spiritually where I’m from. I have a varied educational and work background: I studied Computer Games Technology at university and also completed a higher education course in Music Production. I have worked as everything from a delivery driver to a go-kart centre marshal, a provider of mobile arcade games for events, and a remote technical support agent. This role allows me to draw on all these experiences (fixing the interactives can be surprisingly similar to fixing a game!).

I was thrown in at the deep end on my first day, when the WiFi in the shop and box office of M Shed went down and I helped troubleshoot it halfway through my induction conversation. It was straight away so interesting and hands-on – and quite a challenge for me and co-Digital Assistant Jack Evans at Bristol Museum and Art Gallery, where the same WiFi issue happened at the same time.

I love working for Bristol Culture in the Digital Team. The role combines the troubleshooting I have enjoyed in previous work with being part of the team at a local museum, and I find fixing technical issues really rewarding. M Shed is a good fit for me, as Bristol life, music, people, places and street art have all had a big impact on my life.

My Experience as a new Digital Assistant for Bristol Culture

My name is Jack Evans and I’m one of the new Digital Assistants at Bristol Culture. I am currently based at Bristol Museum & Art Gallery and I help the Museum by maintaining the technology we have in the galleries.

I am from Dursley, Gloucestershire and have lived in the South West for most of my life. After secondary school, I stayed on to do A-Levels in Computing, ICT and Art, then went on to do a Foundation Diploma in Art and Design at SGS College. After that I went to university, and as of this summer I have finished my degree in Fine Art at Falmouth University in Cornwall, where I specialised in video art, photography and installations. I did a lot of my work there using AV, projections in particular; I put on a video art exhibition with other artists at a gallery in Falmouth and collaborated on many AV-based art pieces throughout my degree.

I have always been very “techy” and have been building and fixing tech since my early teenage years. After doing my degree I still wanted to be connected to art and culture, but I also wanted to utilise my technical side. So I am incredibly happy to be part of the Digital Team here at Bristol Culture and able to contribute to the work we do. So far my colleague Steffan, who is also a new Digital Assistant, and I have been experimenting with ways of auditing and managing all the tech across the museums. This will let us know exactly what technology we have in the galleries, what is available to replace older exhibits, and where we can start creating new and improved interactives over the next year.

I have been maintaining and fixing the interactives we have at Bristol Museum & Art Gallery. Yesterday I helped Zahid, our Content Designer, fix a screen in the Magic exhibition gallery, which required altering the exhibition structure and threading cables to the screen from above. We are starting to have fewer issues with interactives at Bristol Museum and Art Gallery: now that I’m here, identifying and solving issues, the team has more time to come up with new ideas and improvements and spends less time on maintenance.

I have also been cataloguing the interactives we have in the galleries and collecting content from old interactives so we can begin to refresh them. I also helped the Curatorial team figure out what technology to purchase or rent for an exhibit opening next year; exhibition problem-solving is something I’ve always wanted to do in my career, so I was very happy to be part of that process. My experience over the past few weeks has been great. I have loved helping out and keeping the tech running for visitors, I look forward to more projects in the future, and I am very proud to work here.

My Digital Apprenticeship with Bristol Museums so far

My name is Caroline James and I am currently in my fourth week of my Digital Marketing Apprenticeship with Bristol Museums.

I am originally from Luton and moved to the South West in 2013 when I was 18 years old to do my degree in Diagnostic Radiography, at the University of Exeter. I loved the South West so much I didn’t want to leave! So once I finished my degree and became a qualified radiographer, I moved to Bristol in 2016 and worked at Southmead Hospital. Although I absolutely loved going to university and had an interesting experience working for the NHS, after being a healthcare worker for three years, I realised it was no longer for me and wanted to have a career change. I wanted to do something more creative and have been interested in digital marketing for a long time.

I thought an apprenticeship was a good route for me as I wanted to learn new skills and use them in a real life setting. So I went on the government website and found this apprenticeship at the museum, and thought it looked great! 

I feel extremely privileged to have got this apprenticeship and I am already learning so much. I loved visiting Bristol Museum and Art Gallery and M Shed even before I moved to Bristol, so it is incredibly fulfilling to be doing digital marketing for institutions I really care about. 

So far I have helped with the launch of a project entitled “Uncomfortable Truths”. This is where a group of BAME students and alumni came together to create podcasts where they discussed their interpretation of certain objects within the museum that have an uncomfortable and controversial side to them – this includes how they were collected and what they represent.

I helped create a webpage presenting the project, the podcasts and their creators using WordPress. I helped upload the podcasts to SoundCloud, then took the embed code generated for each podcast and added it to the webpage. I also assisted with the design of an information leaflet for the launch, using a website called Canva.

The launch itself went incredibly well and it was very interesting. I hope more podcasts discussing the complex cultural and colonial histories behind objects within the museum are created.

Additionally, I’ve been helping with the social media campaigns for the museum shop products using Hootsuite. I look forward to updating the blogs on the museum website and producing email newsletters in the near future. 

Furthermore, I get to help with the creation and the promotion of the “Stories” on the Bristol Museums website, which go in depth about black history in Bristol. 

I expect there will be many more projects and assignments that I will get to be a part of as a member of the Digital Team that will assist with my understanding of digital marketing. Furthermore, I am incredibly excited about the qualification I will be gaining from this apprenticeship and look forward to learning about the fundamentals of digital marketing such as Google Analytics and SEO. It has only been a few weeks but I am already realising what an amazing place it is to work with many teams of incredibly skilled people working together. There are so many opportunities to learn and I cannot wait to gain more skills over the next two years.




One of our digital team objectives for this year is to do more with data: to collect, share and use it in order to better understand our audiences, their behaviour and their needs. Online, Google Analytics provides us with a huge amount of information on our website visitors, and we are only just beginning to scratch the surface of this powerful tool. But for physical visitors, once they come through our doors their behaviour in our buildings largely remains a mystery. We have automatic people counters that tell us the volume of physical visits, but we don’t know how many of these visitors make their way up to the top floor, how long they stay, or how they spend their time. On a basic level, we would like to know which of our temporary exhibitions on the upper floors drive most traffic, but what further insight could we get from more data?

We provide self-complete visitor surveys via iPads in the front hall of our museums, and we can manually watch and record behaviour – but are there opportunities for automated processing and sensors to start collecting this information in a way which we can use, without infringing on people’s privacy? What are the variables that we could monitor?

Hack Time!

We like to collaborate, and welcome the opportunity to work with technical people to try things out, so we jumped at the invitation to join the yearly “Lockdown” hack day at Zengenti – a two-day event where staff form teams to work on non-work-related problems. This gave us a good chance to try out some potential solutions for in-gallery sensors. Armed with Raspberry Pis, webcams, an array of open source tech (and the obligatory beer), the challenge was to come up with a system that could glean useful data about museum visitors at low cost, using fairly standard infrastructure.


Atti Munir – Zengenti 

Dan Badham – Zengenti

Joe Collins – Zengenti 

Ant Doyle – Zengenti 

Nic Kilby – Zengenti

Kyle Roberts – Zengenti

Mark Pajak – Bristol Museum


  • Can we build a prototype sensor that can give us useful data on visitor behaviour in our galleries?
  • What are the variables that we would like to know?
  • Can AI automate the processing of data to provide us with useful insights?
  • Given GDPR, what are the privacy considerations?
  • Is it possible to build a compliant and secure system that provides us with useful data without breaching privacy rights of our visitors?

Face API

The Microsoft Azure Face API is an online “cognitive service” that is capable of detecting and comparing human faces, and returning an image analysis containing data on age, gender, facial features and emotion. This could potentially give us a “happy-o-meter” for an exhibition or something that told us the distribution of ages over time or across different spaces. This sort of information would be useful for evaluating exhibition displays, or when improving how we use internal spaces for the public.

Face detection: finding faces within an image.

Face verification: providing a likelihood that the same face appears in two images.
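
To make the detection call concrete, here is a minimal sketch in Python, assuming the standard shape of the Azure Face API REST endpoint and using a placeholder region and key (both would come from the Azure portal). The sample response is trimmed to just the fields discussed below:

```python
# Sketch of a Face API "detect" call. ENDPOINT and API_KEY are placeholders,
# not real credentials; the request/response shapes follow the documented
# detect operation.
import json
import urllib.request

ENDPOINT = "https://YOUR_REGION.api.cognitive.microsoft.com/face/v1.0/detect"
API_KEY = "YOUR_API_KEY"  # placeholder

def build_request(image_bytes: bytes) -> urllib.request.Request:
    """Build the POST request: binary image in, JSON analysis out."""
    url = ENDPOINT + "?returnFaceAttributes=age,gender,emotion"
    return urllib.request.Request(
        url,
        data=image_bytes,
        headers={
            "Ocp-Apim-Subscription-Key": API_KEY,
            "Content-Type": "application/octet-stream",
        },
        method="POST",
    )

def summarise(response_json: str) -> list:
    """Reduce the per-face analysis to the fields we care about."""
    faces = json.loads(response_json)
    return [
        {
            "age": f["faceAttributes"]["age"],
            "gender": f["faceAttributes"]["gender"],
            "happiness": f["faceAttributes"]["emotion"]["happiness"],
        }
        for f in faces
    ]

# A trimmed example of the JSON the detect operation returns:
sample = ('[{"faceId": "abc123", "faceAttributes": {"age": 34.0, '
          '"gender": "male", "emotion": {"happiness": 0.92, "neutral": 0.08}}}]')
print(summarise(sample))
```

In a live setup, `build_request` would be fed each webcam snapshot and `summarise` would run over the HTTP response body.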

Clearly, there are positive and negative ramifications of this technology, as highlighted by Facebook’s use of facial recognition to automatically tag photos, which has raised privacy concerns. The automated one-to-many ‘matching’ of real-time images of people with a curated ‘watchlist’ of facial images is possible with the same technology, but this is not what we are trying to do – we just want anonymised information that cannot be related back to any specific person. Whilst hack days are about experimentation and the scope for building a rough prototype is fairly open, we should spend time reviewing how regulations such as GDPR affect this technology, because by its nature it is a risky area even for research purposes.

How are museums currently using facial recognition?

  • Cooper Hewitt Smithsonian Design Museum have used it to create artistic installations using computer analysis of the emotional state of visitors to an exhibit.

GDPR and the collecting and processing of personal data

The General Data Protection Regulation (GDPR) focuses on the collection of personal data and how it is stored or processed. It defines the various players as data controllers, data processors and data subjects, giving subjects more rights over how their personal data is used. The concerns and risks around protecting personal data mean more stringent measures need to be taken when storing or processing it, with some categories of data, including biometric data, considered sensitive and so subject to extra scrutiny.

Personal data is any data that could be used to uniquely identify a person, including name, email address, location, IP address etc., but also photographs containing identifiable faces – and therefore video.

Following GDPR guidelines we have already reviewed how we obtain consent when taking photographs of visitors, either individually or as part of an event. Potentially any system that records or photographs people via webcams will be subject to the same policy – meaning we’d need to get consent – this could cause practical problems for deploying such a system, but the subtleties of precisely how we collect, store and process images are important, particularly when we might be calling upon cloud based services for the image analysis.

In our hypothesised solution, we will be hooking up a webcam to take snapshots of exhibition visitors which will then be presented to the image analysis engine. Since images are considered personal data, we would be classed as data controllers, and anything we do with those images as data processing, even if we are not storing the images locally or in the cloud.

Furthermore, the returned analysis of the images would be classed as biometric data under GDPR, and as such we would need explicit consent from visitors for the processing of their images for this specific purpose – non-consented biometric processing is not allowed.

We therefore need to be particularly careful in anything we do that might involve images of faces, even if we are only converting them to anonymised demographic data without any possibility of tracing the data back to an individual. The problem also occurs if we want to track the same person across several places – we need to be able to identify the same face in two images.

This means that whilst our project may identify the potential of currently available technology to give us useful data – we can’t deploy it in a live environment without consent. Still – we could run an experimental area in the museum where we ask for consent for visitors to be filmed for research purposes, as part of an exhibition. We’d need to assess whether the benefits of the research outweigh the effort of gaining consent.

This raises the question of where security cameras fall under this jurisdiction… time for a quick diversion:

CCTV Cameras

As CCTV involves storing images that can be used to identify people, it comes under GDPR’s definition of personal data, and as such we are required to have signage in place to inform people that we are using it and why – the images can only be captured for this limited and specific purpose (before we start thinking we can hack into the CCTV system for some test data).

Live streaming and photography at events

When we take photographs at events we put up signs saying that we are taking photographs; however, whilst UK law allows you to take photos in a public place, passive consent may not be acceptable under GDPR when collecting data via image recognition technology.

Gallery interactive displays

Some of our exhibition installations involve live streaming – we installed a CCTV camera in front of a greenscreen as part of our Early Man exhibition in order to superimpose visitors in front of a crowd of prehistoric football supporters from the film. The images are not stored but they are processed on the fly – although it is fairly obvious what the interactive exhibit is doing, should we be asking for consent before the visitor approaches the camera, or displaying a privacy notice explaining how we are processing the images?

Background image © Aardman animations


Any solution that involves hooking up webcams to a network or the internet comes with a risk. For the purposes of this hack day we are going to be using a Raspberry Pi connected to a webcam and using this to analyse the images. If this were to be implemented in the museum we’d need to assess the risk of the devices being intercepted.

Authentication and encryption:

Authentication – restricting data to authorised users via username and password (i.e. consent given).

Encryption – encoding the data stream so that even if an unauthenticated user accesses it, they can’t read it without decrypting it, e.g. using SSL/TLS.

Furthermore, if we are sending personal data for analysis by a service running online, the geographic location of where this processing takes place is important.

“For GDPR purposes, Microsoft is a data processor for the following Cognitive Services, which align to privacy commitments for other Azure services”

Minimum viable product: Connecting the camera server, the face analyser, the monitoring dashboard and the visualisation. 

Despite the above practical considerations, the team have cracked on with assembling the various parts of the solution – using a webcam linked to a Raspberry Pi to send images to the Azure Face API for analysis. Following on from that, some nifty data visualisation tools and monitoring dashboard software can help users manage a number of devices and aggregate data from them.

There are some architectural decisions to make around where the various components sit and whether image processing is done locally, on the Pi, or on a virtual server, which could be hosted locally or in the cloud. The low processing power of the Pi could limit our options for local image analysis, but sending the images for remote processing raises privacy considerations.

Step 1: Camera server

After much head scratching we had an application that could be launched on PC or Linux and accessed over HTTP to retrieve a shot from any connected webcam – the first part of the puzzle sorted.

By the second day we had a series of webcam devices – a Raspberry Pi, a Windows PC stick and various laptops – all providing pictures from their webcams via HTTP requests over wifi. So far so good – the next step was how to analyse these multiple images from multiple devices.
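
A camera server along these lines can be sketched in a few lines of Python using only the standard library. `capture_frame` is a placeholder of our own invention – on a real device it would grab a frame from the webcam:

```python
# Minimal camera-server sketch: serve the latest webcam frame as a JPEG over
# HTTP, stdlib only. capture_frame() is a stand-in; on the Pi it would grab a
# real frame from the webcam, but here it returns canned bytes so the sketch
# is self-contained.
from http.server import BaseHTTPRequestHandler, HTTPServer

def capture_frame() -> bytes:
    # Placeholder: swap in a real webcam grab on the device.
    return b"\xff\xd8\xff\xe0 fake-jpeg-bytes"

class SnapshotHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/snapshot":
            frame = capture_frame()
            self.send_response(200)
            self.send_header("Content-Type", "image/jpeg")
            self.send_header("Content-Length", str(len(frame)))
            self.end_headers()
            self.wfile.write(frame)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # keep the console quiet

def serve(port: int = 8080) -> HTTPServer:
    # Call .serve_forever() on the returned server; the monitoring box can
    # then poll http://<device>:<port>/snapshot over wifi.
    return HTTPServer(("0.0.0.0", port), SnapshotHandler)
```

The same handler runs unchanged on the Pi, the PC stick and the laptops, which is what makes polling a fleet of devices from one place practical.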

Step 2: Face analyser.

Because the Azure Face API is a chargeable service, we don’t want to waste money analysing images that don’t contain faces – so we implemented an open source script to first check for any faces. If an image passes the face test, we can then send it for analysis.
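
The gating logic itself is simple; here is a sketch with the local detector and the paid API call injected as callables, so any open-source face detector (e.g. an OpenCV Haar cascade on the Pi) could slot in. The stubs below are our own inventions to keep the sketch self-contained:

```python
# "Only pay to analyse images that contain faces" gate. Both callables are
# injected: local_face_count is the free on-device check, send_to_api is the
# chargeable cloud call. The stubs used below are illustrative only.
from typing import Callable, Optional

def analyse_if_faces(image: bytes,
                     local_face_count: Callable[[bytes], int],
                     send_to_api: Callable[[bytes], dict]) -> Optional[dict]:
    """Send the image to the chargeable API only if the free check finds faces."""
    if local_face_count(image) == 0:
        return None  # no faces -> skip the API call, no charge
    return send_to_api(image)

# Stub usage: only the second image triggers a (simulated) paid call.
api_calls = []
def fake_api(img: bytes) -> dict:
    api_calls.append(img)
    return {"faces": 1}

analyse_if_faces(b"empty-room", lambda img: 0, fake_api)  # skipped
analyse_if_faces(b"visitor", lambda img: 1, fake_api)     # analysed
print(len(api_calls))  # 1
```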

The detailed analysis that is returned in JSON format includes data on age, gender, hair colour and even emotional state of the faces in the picture.

Our first readings were pretty much on point with regard to age when we tested ourselves through our laptop webcams. And seeing the structure of the returned data gives us what we need to start thinking about the potential for visualising it.

We were intrigued by the faceId code – does this ID relate to an individual person (which would imply the creation of a GDPR-risky person database somewhere), or simply to the face within the image? And if we snapped the same people at different intervals, would they count as different people? It turns out the faceId just relates to the face in an individual image, and does not enable tracking an individual over time – so this looks good as far as GDPR is concerned, but it also limits our ability to deduce how many unique visitors we have in a space if we are taking snapshots at regular intervals.

We had originally envisaged that facial analysis of a series of images from webcams could give us metrics on headcount and dwell time. As the technology we are using requires still images captured from a webcam, we would need to take photos at regular intervals to get the figures for a day.

Taking a closer look at the “emotion” JSON data reveals a range of emotional states, which when aggregated over time could give us some interesting results and raise more questions – are visitors happier on certain days of the week? Or in some galleries? Is it possible to track the emotion of individuals, albeit anonymously, during their museum experience?

In order to answer this we’d need to save these readings in a database, with each reading recorded against a location and time of day – the number of potential variables is creeping up.
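
As a sketch of the kind of roll-up such a database would feed, here anonymised readings (day, gallery, happiness score) are aggregated into an average per gallery per day. The field names and figures are invented for illustration:

```python
# Aggregate anonymised emotion readings into "average happiness per gallery
# per day". Rows are illustrative stand-ins for database records.
from collections import defaultdict
from statistics import mean

readings = [
    {"day": "Mon", "gallery": "Egypt", "happiness": 0.9},
    {"day": "Mon", "gallery": "Egypt", "happiness": 0.7},
    {"day": "Mon", "gallery": "Photo", "happiness": 0.4},
    {"day": "Tue", "gallery": "Egypt", "happiness": 0.6},
]

def average_happiness(rows):
    buckets = defaultdict(list)
    for r in rows:
        buckets[(r["day"], r["gallery"])].append(r["happiness"])
    return {key: mean(vals) for key, vals in buckets.items()}

print(average_happiness(readings))
# e.g. ("Mon", "Egypt") -> 0.8
```

Nothing here resolves to an individual: the raw faces are discarded and only the per-bucket averages are kept.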

We would also need to do some rigorous testing to ensure the machine readings were reliable – which raises the question of how the Face API is calibrated in the first place… but as this is just an experiment, our priority is connecting the various components – fine-tuning the solution is beyond the scope of this hack.

Step 3: Data exporter 

Prometheus is the software we are using to record data over time; it provides a means to query the data and makes it available to incoming requests from a monitoring server. We identified the following variables that we would like to track – both to monitor the uptime of each unit and to give us useful metrics.


  • CPU gauge
  • Memory gauge
  • Disk Space gauge
  • Uptime
    • Uptime (seconds) counter
  • Services
    • Coeus_up (0/1) gauge
    • Exporter_up (0/1) gauge
  • Face count
    • current_faces (count) gauge
    • Face_id (id)
    • Total_faces (count) summary

Nice to have

  • Gender
    • male/female
      1. Gender (0/1) gauge
  • Age
    • Age buckets (<18, 18–65, >65) histogram
  • Dwell duration
    • Seconds
      1. Dwell_duration_seconds gauge
  • Services
    • Coeus_up (0/1) gauge
    • Exporter_up (0/1) gauge
  • Coeus
    • API queries 
      1. API_calls (count) gauge
      2. API_request_time (seconds) gauge
  • Exporter
    • Exporter_scrape_duration_seconds gauge
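
An exporter exposes these as plain text in the Prometheus exposition format. Here is a minimal sketch rendering a few of the metrics above; a real exporter would serve this string at a `/metrics` endpoint, and the help strings are our own wording:

```python
# Render a handful of the wish-list metrics in the Prometheus text exposition
# format ("# HELP" / "# TYPE" lines followed by "name value").
def render_metrics(current_faces: int, total_faces: int, coeus_up: bool) -> str:
    lines = [
        "# HELP current_faces Faces detected in the latest snapshot",
        "# TYPE current_faces gauge",
        "current_faces %d" % current_faces,
        "# HELP total_faces Running total of faces seen",
        "# TYPE total_faces counter",
        "total_faces %d" % total_faces,
        "# HELP coeus_up Whether the Coeus service is responding",
        "# TYPE coeus_up gauge",
        "coeus_up %d" % int(coeus_up),
    ]
    return "\n".join(lines) + "\n"

print(render_metrics(3, 120, True))
```

Prometheus then scrapes this endpoint on a schedule, giving us the time series for free.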

Step 4: Data dashboard

Every data point carries a timestamp and so this data can be plotted along an axis of time and displayed on a dashboard to give a real time overview of the current situation.
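
The dashboard queries themselves can then be short. A couple of hypothetical PromQL sketches, assuming the metric names from the wish-list above:

```
# Average faces per snapshot over the last hour
avg_over_time(current_faces[1h])

# Flag any unit whose exporter has stopped responding
exporter_up == 0
```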

Step 5: Data visualisation 

Using D3 we can overlay a graphic representing each face/datapoint back onto the camera feed. In our prototype mock-up each face is represented by a shape giving an indication of its position within the frame. On top of this we could add colour or icons illustrating any of the available data from the facial analysis.


Github: Everything we did is openly available on this code repository: https://github.com/blackradley/coeus

Slack: we used this for collaboration during the project – great for chat and sharing documents and links, and breakout threads for specific conversations. This became the hive of the project.

Prometheus: monitoring remote hardware

Grafana: open source dashboard software

Azure: image recognition

Codepen: a code playground

D3: visualization library

Final remarks

Our aim was to get all the bits of the solution working together into a minimum viable product – to get readings from the webcam into a dashboard. With multiple devices and operating systems there could be many different approaches to this in terms of deployment methods, network considerations and options for where to host the image processing technology. We also wanted a scalable solution that could be deployed to several webcam units.

Just getting the various pieces of the puzzle working would most likely take up the whole time as we sprinted towards our MVP. As the data started coming back, it became clear that the analysis would present its own problems – not just reliability, but how to structure the data and what the possibilities are: how to glean useful insight from the almost endless tranches of timestamped data points that the system could potentially generate, and the associated testing, configuring and calibrating that the finished solution would need.

Whilst the Azure Face API will merrily and endlessly convert webcam screenshots of museum visitors into data points, the problem we face is what to make of them. Could this system count individuals over time, and not just within a picture? It seems that to do this you need a way to identify an individual amongst several screenshots using biometric data, and so this would require a biometric database to be constructed somewhere to tell you if a face is new or a repeat visitor – not something we would really want to explore given the sensitive nature of this data.

So this leaves us with data that does not resolve to the unique number of people in a space over time, but the number of people at a single moment, which when plotted over time is something like an average – so our dashboard would feature “the average emotional state over time” or “the average gender”, as the same individual could be snapped in different emotional states.

As ever with analytical systems the learning point here is to decide exactly on what to measure and how to analyse the data before choosing the technology – which is why hackathons are so great because the end product is not business critical and our prototype has given us some food for thought.

With GDPR presenting a barrier for experimenting with the Face API, I wonder whether we might have some fun pointing it at our museum collections to analyse the emotional states of the subjects of our paintings instead?


Thanks to Zengenti for creating / hosting the event: https://www.zengenti.com/en-gb/blog


Git repo for the project: https://github.com/blackradley/coeus


M Shed shop refit ChangeLog 2019

We’ll be writing a short update each day about how M Shed shop refit is progressing. Ahead of the refit the wider team used Basecamp to discuss the new shop design and for all decision approval. Lots of time has been spent recently on the finer details including bay depth, lighting of bays, till location and opportunities for adding value in the design. Let’s go!

Pssst – you can read our last refit changelog, at Bristol Museum & Art Gallery in 2018, which has since seen sales increase 51% in 12 months.

Friday 5th July

Day 5 – voilà

Installed trim, slat wall, glass doors, locks and the first lit bay; cleaned, merchandised… opened at 16:30 and took our first sale (#236160 for £29.44, to tourists).

Thursday 4th July

End of day 4 showing bays with glass shelving, power to the till and only minor work remaining

At the beginning of the day I was quietly but confidently telling folks we’ll be done by end of Friday. Not 100% all-singing-and-dancing, and not quite a minimum viable product (MVP), but “good enough” to trade. I’m not sure anyone believed me though (chuckles), as on the surface it did look far off – though much of that impression was down to the tools and cutting areas still being out.

Our electrician did a stellar job of getting power running to the till area and completing the wiring of the bay lighting and TVs. The bay lighting isn’t hooked up yet but this will be completed early next week and doesn’t pose an issue to trade over the weekend.

The shop fitters completed the final bays and slat wall. At the same time a few helpful hands (THANKS!) added the glass shelving. We only had a few book shelves on-site, but ARJ-CRE8 have very kindly arranged for the rest to be delivered on Day 5. We started to pack down the temporary pop-up shop and transfer products to their new homes. By the end of the day we were all tidying up ready for a deep clean tomorrow morning. I was even reunited with my favourite tool, the pallet truck (long-standing joke!), which we put to good use – a pallet truck is the workhorse for moving “stuff” with ease.

Helen and I will make a call about re-opening the shop tomorrow (at the end of the day), but I can’t see why we won’t be in that position by lunchtime… the last big push is mainly going to be getting product out.

Thanks to everyone who has been giving words of support, lending a hand or making jokes!

See you at 7am to help us move products right?

Wednesday 3rd July

Day 3 was very productive!

The shop fitters were able to complete the perimeter bays as shown below. Above the bays we’ve decided to make frames to hold graphics and tighten up the brand. I originally wanted to have digital screens but the benefits didn’t outweigh the cost, and we can use some of our historic photos from around the harbour instead. Also the team didn’t like my idea, which is fine!

Day 3 photo showing the shop with all perimeter bays in place and the till point area.

In addition to some more decoration, our brilliant in-house electrician had his first day on the project. He has the task of working out how best to power the LED bay lights, the till area and the digital screens that will sit behind the till. With the welcome news that the shop fitters expect to finish early, our timeline has moved up. I asked Rob to focus first on powering our till, as I have my fingers crossed we can then trade on Friday. Once the till (shown below) has power we “can” be in business. After the till, lighting the bays is the second priority, then finally our digital screens.

Our new till area is now in place – woot!
Photo of a giant hole in the ground…well that was unexpected…

In the above photo there is a giant hole in the floor! This was under a bit of wall we removed and needed to be “made good”, so thankfully our ever-awesome operations team arranged for a steel plug to be made today to save the day. Snags are always going to crop up, and having professionals all around you makes them disappear pretty quickly. The hole needed to be filled as half of a bay unit was due to be installed at that spot. Thanks to everyone who kicked into action to resolve it today.

Our graphics have also gone to the printers today and will be installed Monday. In a project like this I’d much rather add new elements one by one than push too hard for all elements and contractors to work around each other. Being ahead of schedule gives an even greater buffer. Onwards!

Tuesday 2nd July

Day 2 has been rapid, and already enough has been done to see it coming together in real life as opposed to doodles and words! One set of contractors set about decorating and building a new room to support our back-of-house function for the museum. Our design team chose the colour scheme, which is designed to reinforce the brand and make the products shine.

Day 2 progress which included wall bays and decorating. Photo by ARJ-CRE8

The shop fitters concentrated their efforts on installing the window bays and even found time to begin on the far back wall. More fittings arrived too such as glass shelving, our new custom marble table and other book tables. We expect each table to generate £5,000+ in annual sales.

We made a design decision to install bays directly in front of our windows across most of the space instead of retaining a “view” from the outside into the shop. We initially wanted to retain the “view” into the shop, or use beautifully merchandised visual displays. However, the displays required too much space, which would reduce our shop floor area by 1m+, and keeping the windows would eliminate our ability to increase overall “shoppable” space – which is the aim of the game. We also took advice from a professional visual merchandiser, and between us the recommendation was “bays over beauty”. If it doesn’t add value, don’t do it. In the end, a sensible compromise – for budget reasons and to still retain some natural light at eye level – was to have bays at the sides and keep the centre of the windows open.

The decision to put bays in the window produced a new opportunity for us: we are now able to place graphics on the windows to promote our shop offer. Originally we planned to apply the graphics to the inside of the windows but missed the window of opportunity (bad joke), so instead we will apply them to the outside. This is actually a blessing in disguise, as it means we can change the graphics more frequently without the lengthy process of removing the bays to work. Below is the draft design we will send to the printers this week. You’ll see that two windows are to remain free and “viewable” from the outside. Natural light will still come in from the top panes.

Photo showing what the draft graphic will look like on the outer window - a giant M from our style guide

In the windows in the foyer area there is an opportunity to expand beyond promoting retail and help explain what M Shed is and what we have on offer. Tomorrow I’ll ask our designer Fi what is possible in the four bays shown below.

The photo shows four windows that now show the back of the retail bays so we need to apply graphics to the outside windows.

Monday 1st July

ARJ-CRE8 arrived on site at 07:00 and were greeted by our Retail Manager, Helen. Today’s schedule focused on stripping out the old shop fittings. The photo below shows a bare shop within just a few hours. Claudia and Helen spent the day helping where possible and getting our temporary pop-up shop ready.

Photo of the stripped-out M Shed shop

Once the old shop was stripped out, the team brought in lots of the new shop bays and fittings. At the same time, another company were moving various sensors and alarms, as we need to re-site lots of “bits” that were built into the original reception area. Fi, our 2D designer, was also finalising graphics for the walls and windows. Onwards!

Photo showing the new shop fittings neatly laid out ready for the installation.
The floor plan of what the shop and reception areas will look like

Sunday 30th June

The last day of the current shop as we know it. Once the shop closed at 5pm, a number of the team packed away all products on display. Products were either moved to the temporary “pop-up” shop area in the foyer or upstairs to our meeting room, which is acting as our transit space during the project. At 07:00 the shopfitters from ARJ-CRE8 will be on-site to begin the project proper. Let’s hope the skip arrives early, as day 1 is largely taking out the existing shop fittings and fixtures.