Can neural networks help us reinterpret history?

Background

Bristol City Council publishes many types of raw data to be transparent about the information it holds, and to encourage positive projects based on this data by any citizen or organisation.

One of the most recent datasets to be published by Bristol Museums is thousands of images from the British Empire and Commonwealth (BEC) collection. You can see a curated selection of these images online in “Empire through the Lens”.

At a hackathon hosted by Bristol’s Open Data team with support from the Jean Golding Institute, attendees were encouraged to make use of this new dataset. Our team formed around an idea of using image style transfer, a process of transforming the artistic style of one image based on another using Convolutional Neural Networks.

In layman’s terms, this method breaks down images into ‘content’ components and ‘style’ components, then recombines them.

We hypothesised there would be value in restyling images from the dataset to draw out themes of Bristol’s economic and cultural history when it comes to Empire and Commonwealth.

The team

  • Dave Rowe – Development Technical Lead for Bristol City Council and Open Data enthusiast
  • Junfan Huang – MSc Mathematics of Cybersecurity student at the University of Bristol
  • Mark Pajak – Head of Digital at Bristol City Council Culture Team & Bristol Museums
  • Rob Griffiths – Bristol resident and Artificial Intelligence Consultant for BJSS in the South West

Aim

To assess the potential of Style Transfer as a technique for bringing attention back to historical images and exploring aspects of their modern relevance.

Method

Natalie Thurlby from the Jean Golding Institute introduced us to a method of style transfer using Lucid, a set of open source tools for working with neural networks. You can view the full Colab notebook we used here.
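
If you want to try the content/style idea without the full notebook, here is a minimal sketch using TensorFlow Hub’s pre-trained arbitrary stylisation model – an alternative to the Lucid approach we actually used, with placeholder filenames:

```python
# A minimal sketch of style transfer using TensorFlow Hub's pre-trained
# "arbitrary image stylization" model - not the Lucid notebook we used,
# just the quickest way to try the content/style idea at home.
# The filenames are placeholders.
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub
from PIL import Image

def load_image(path, max_dim=512):
    """Load an image, scale pixels to [0, 1] and add a batch axis."""
    img = Image.open(path).convert("RGB")
    img.thumbnail((max_dim, max_dim))
    return np.array(img, dtype=np.float32)[np.newaxis, ...] / 255.0

content = load_image("kilindini_crane.jpg")   # the 'content' photograph
style = load_image("cranes_of_bristol.jpg")   # the 'style' artwork

model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2"
)
stylised = model(tf.constant(content), tf.constant(style))[0]

Image.fromarray(np.uint8(stylised[0].numpy() * 255)).save("stylised.jpg")
```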

To start with, we hand-selected images from the collection that we thought would be interesting to transform. We tried to pair each ‘content’ image with ‘style’ images that might draw parallels with Bristol.

Dockside Cranes


A railway steam crane lowers a train engine onto a bogie on the dockside at Kilindini harbour, Mombasa, Kenya.

When we saw this image it immediately made us think of the docks at Bristol harbourside, by M Shed.

The SS Harmonides, which transported the train to Kenya (likely from Liverpool, actually), is just visible, docked further along the harbour.

In addition to the images, the dataset has keywords and descriptions which provide a useful way to search and filter:

[‘railway’, ‘steam’, ‘crane’, ‘lower’, ‘train’, ‘engine’, ‘bogie’, ‘dockside’, ‘Kilindini’, ‘harbour’, ‘Mombasa’, ‘Kenya’]


We liked this painting by Mark Buck, The Cranes of Bristol Harbour. His online biography says he studied for a degree in illustration at Bower Ashton Art College in Bristol, not far from this spot.

The image below was created by feeding the previous two images into the style transfer engine.


We drew an obvious parallel here between these two sets of cranes in ports around the world. The Bristol cranes are from the 1950s, but the Kenya photo was taken much earlier, in the 1920s. It would be interesting to look more deeply at the cargo flows between these two ports during the 19th century.

Cliftonwood Palace


This is a view of the Victoria Memorial, Kolkata, India in 1921.

It was commissioned by Lord Curzon to commemorate the death of Queen Victoria.

We were struck by the grandeur and formality of the photo.

Key words: [‘Victoria’, ‘Memorial’, ‘Kolkata’, ‘India’, ‘1921’] – see “Topic Modelling” below


A photo of the colourful Victorian terraces of Cliftonwood from the river, which have their own sense of formality.

The architectural significance of these buildings in their locales, and the link to Queen Victoria, are small parallels.

It’s funny how the system seemingly tries to reconstruct the grand building using these houses as colourful building blocks, but it ends up making it look like a shanty town.

This image was created by machine intelligence by taking an historical photograph and applying a style gleaned from a Bristol cityscape.

Caribbean Carnival


Carnival dancers on Nevis, an island in the Caribbean Sea, in 1965.

Two men perform a carnival dance outdoors, accompanied by a musical band. Both dancers wear crowns adorned with peacock feathers and costumes made from ribbons and scarves.

Key words: [‘perform’, ‘carnival’, ‘dance’, ‘outdoors’, ‘accompany’, ‘musical’, ‘dancer’, ‘crown’, ‘adorn’, ‘peacock’, ‘feather’, ‘costume’, ‘ribbon’, ‘scarf’, ‘Nevis’]

St Pauls Carnival is an annual African-Caribbean carnival held, usually on the first Saturday of July, in St Pauls, Bristol.

We selected this picture to see how the system would handle the colourful feathers and sequined outfits.

The resulting image (below) was somewhat abstract but we agreed was transformed by the vibrant colours and patterns of movement.

Festival colours reimagine an historical photograph using machine intelligence – but is this a valid interpretation of the past or an abstract and meaningless picture?

Service Design

We set out the potential benefits of our service:

  • A hosted online service to make the process more efficient
  • Advice and tips on how to calibrate and get the best results from Style Transfer
  • Ability to process images in bulk
  • Interactive ways of browsing the dataset
  • Communication tools for publishing and sharing results
  • Interfaces for public engagement with the tool – a Twitter conversational bot

On the first day we started putting together ideas for how a web service might be used to take source images from the Open Data Platform and automate the style transfer process, as sketched below.
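
As a rough illustration of the idea – the endpoint, job store and stylise() function below are all hypothetical, not hackathon code:

```python
# A rough sketch of the web service idea: accept a content image URL from
# the Open Data Platform plus a style image URL, run the transfer, return
# a result. Everything here (endpoint, job store, stylise) is hypothetical.
import uuid

from flask import Flask, jsonify, request

app = Flask(__name__)
jobs = {}  # in-memory store; a real service would use a queue and database

def stylise(content_url: str, style_url: str) -> str:
    """Placeholder for the style transfer pipeline; returns a result URL."""
    return f"https://example.org/results/{uuid.uuid4()}.jpg"

@app.route("/transfer", methods=["POST"])
def transfer():
    payload = request.get_json()
    job_id = str(uuid.uuid4())
    # A background worker would do this, emailing the result if the
    # transform took more than a couple of minutes.
    jobs[job_id] = stylise(payload["content_url"], payload["style_url"])
    return jsonify({"job_id": job_id, "result": jobs[job_id]})

if __name__ == "__main__":
    app.run(debug=True)
```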

This caused us to think about potential users of the system and what debate might be sparked from the resulting images.

Proposition Design

A key requirement for all users would be the ability to explore and see the photographs in their original digitised form, with the available descriptions and other metadata. Those particularly interested in exploring the underlying data would appreciate having search and filter facilities that made use of fields such as location, date, and descriptions.

We would also need a simple way of choosing a set of photographs without getting in the way of being able to continue discovering other photos – a bit like an online shopping scenario where you add items to a basket.

The users could then choose a style to apply to their chosen photos. This would be a selection of Bristol artworks, or iconic scenes. For those wanting to apply their own style (artists, for example) we would give an option to upload their own artwork and images.

Depending on processing power, such an online service could struggle to apply style transforms quickly enough for people to wait. If the waiting time stretched beyond a couple of minutes, the results could be delivered by email instead.

Components

Spin-off products: Topic Modelling

We also successfully built a crucial component of our future service. The metadata surrounding the images includes both keywords and descriptive text. Junfan developed a script that analysed the metadata to provide a better understanding of the range of keywords that could be used to interrogate the images. This could potentially be used in the application to enable browsing by subject.

We wanted to generate a list of keywords from the long-form text captions that accompanied the images. This would allow us to classify pictures using their descriptions. Users could then select a topic and retrieve the pictures they want.

In topic 2, for example, our model has grouped ‘bridge’, ‘street’, ‘river’, ‘house’, ‘gardens’ and similar words together.
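
The script itself is only a few lines. Here is a minimal sketch of the approach, assuming gensim’s LDA implementation and some placeholder captions in place of the real BEC metadata:

```python
# A minimal sketch of topic modelling the image captions with gensim's
# LDA. The captions below are placeholders for the real BEC metadata.
from gensim import corpora, models

captions = [
    "railway steam crane lowers a train engine onto a bogie on the dockside",
    "view of the Victoria Memorial Kolkata India",
    "two men perform a carnival dance outdoors accompanied by a musical band",
    "suspension bridge over the river with houses and gardens on the bank",
]

# Very crude tokenising and stopword removal, enough for a demo.
stopwords = {"a", "an", "the", "of", "on", "onto", "over", "with", "by", "and"}
texts = [[w for w in c.lower().split() if w not in stopwords] for c in captions]

dictionary = corpora.Dictionary(texts)                 # word <-> id mapping
corpus = [dictionary.doc2bow(text) for text in texts]  # bag-of-words vectors

lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary, passes=10)
for topic_id, words in lda.print_topics():
    print(topic_id, words)  # each topic is a weighted group of keywords
```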

Python is the language of choice for this particular application. Topic modelling reveals patterns of keyword abundance amongst the captions, and the keywords extracted from them can help us build an interface that allows filtering on a theme.

Reflections

After generating many examples we came together to discuss some of the ethical and legal implications of this technique.

We were particularly mindful of the fact that any discussion of Empire and Commonwealth should be treated with sensitivity. For each image, it’s challenging both to appreciate fully the context and not to project novelty or inappropriate meaning onto it.

We wondered whether this form of style transfer with heritage images was an interesting technique for people who have something to say and want an eye-catching way of communicating, but not a technique that should be used lightly – particularly with this dataset.

We often found ourselves coming back to discussions of media rights and intellectual property. None of us have a legal background but we were aware that, while we wanted to acknowledge where we had borrowed other people’s work to perform this experiment, we were generating new works of art – and it was unclear where the ownership lay.

Does this have potential?

We thought, on balance, yes this was an interesting technique for both artistic historians and artists interested in history.

We imagined their needs using the following user personas:

  • Artistic Historians: ‘I want to explore the stories behind these images and bring them to life in a contemporary way for my audience.’
  • Artists interested in history: ‘I want a creative tool to provide inspiration and see what my own personal, artistic style would look like applied to heritage images’.

We spent time scoping ways we could turn our work so far into a service to support these user groups.

References & Links

  • The repo for our application: https://github.com/xihajun/Art-vs-History-Open-Data-Hackathon-Code
  • Open data platform: https://opendata.bristol.gov.uk/pages/homepage/
  • Bristol Archives (British Empire and Commonwealth Collection): https://www.bristolmuseums.org.uk/bristol-archives/whats-at/our-collections/

Acknowledgements

Thanks to Bristol Open for co-ordinating the Hackathon.

Thanks to Lucid contributors for developing the Style Transfer code.

Thanks to the following artists for source artwork:

Mark Buck: https://www.painters-online.co.uk/artist/markbuck

Ellie Pajak: https://www.etsy.com/shop/PapierBeau?section_id=21122286

Open Data

Hi, my name is Hannah Boast and I am an apprentice working in the City Innovation Team for Bristol City Council. Our aim as a team is to create a smarter digital future for Bristol. The City Innovation Team is currently working on a wide range of projects, such as driverless cars, smart homes and ultrafast broadband. One project I would like to elaborate on, which our team is also involved in, is maintaining and promoting the Open Data platform.

The objective of Bristol’s open data platform is to make data accessible and widely available to the public and to organisations. Increasing data transparency can open up opportunities for discovering new insights about the city and support our digital economy. We have recently been co-ordinating data hackathons and data jams, which gather people to code collaboratively over a short period. Attendees work on a particular project, and the idea is for teams to have the ability and freedom to work on whatever they want. These engagements run alongside contributing partners such as organisations and the data community. They help us understand what the public interested in open data want to achieve, and bring in a new generation of people who can help drive and contribute to open data in Bristol. Keep up to date on any upcoming events on our Connecting Bristol website.

Bristol Museum & Art Gallery is currently digitising its collection of artefacts to make it accessible to a wide range of people online. A great example is the Natural History Museum’s data portal, which has uploaded a great deal of the museum’s artefacts, giving the public access to more detailed information on what is held at the museum.

Get in contact with us to find out more on open data in Bristol: opendata@bristol.gov.uk

Testing museum gallery mobile interpretation

smartify logo

Over the next few weeks we are running user testing of SMARTIFY at M Shed. This app provides visitors with extra information about museum objects using image recognition to trigger the content on a mobile device.

To install the free app use this link: https://smartify.org/

If you have used the app at M Shed, please could you take a few moments to complete the following survey: https://www.surveymonkey.co.uk/r/ZVTVPW9

If you would like to help further, please get in touch with our volunteer co-ordinator: https://www.bristolmuseums.org.uk/jobs-volunteering/

Sharing our retail performance

May I introduce the Retail Performance dashboard. Since taking on retail in 2015 we’re proud to have increased sales by 60% in three years. We’ve gone from loss-making to profitable, and at the time of writing we are up 22% compared to last year. What that really means is that our retail efforts will contribute £100,000+ profit back to the service, which keeps 2 or 3 staff outside of retail in employment. I jokingly say that our sales of ‘fart whistles’ are literally keeping others in gainful employment!

I regularly tweet stats of our retail performance, so I thought I’d now take that up a notch and share a dashboard that you can use to see the data yourself. The digital team are working on some much slicker visualisations, but for now this will do.

The Retail Performance dashboard, powered by Google Data Studio.

Things to add include:

How to nail it in Team Digital by turning it off.

This post is about my recent week of reducing screen time to a minimum after seeking a fresh approach, having lost the plot deep in some troublesome code, overloaded with an email avalanche and pestered by projects going stale. In other words…have you tried turning it off? (and not on again!)

STEP 1: TURN OFF PC

Guys this is what a computer looks like when it is off

Kinda feels better already. No more spinning cogs, no more broken code, brain starting to think in more creative ways, mind generally feeling lighter. A trip to the stationery cupboard to stock up on Post-its and sticky things; on the way I speak to a colleague whom I wouldn’t usually encounter and gain an insight into the user-facing end of a project I am currently working on (I try to make a mental note of that).

STEP 2: RECAP ON AGILE METHODS

Agile Service Delivery concept
a great diagram about agile processes by Jamie Arnold

(admittedly you do need to turn the computer back on from here onwards, but you get the idea!)

The team here have just completed Scrum training and we are tasked with scratching our heads over how to translate this to our own working practices. I was particularly inspired by this diagram and blog by Jamie Arnold from GDS explaining how to run projects in an agile way. I am especially prone to wanting to see things in diagrams, and this tends to be suppressed by too much screen time 🙁

“a picture paints a thousand words.”

Also, for projects that are stalled or for whatever reason on the backburner, a recap of (or even retrospective creation of) the vision and goals can help you remember why they were once on the agenda in the first place, or whether they still should be.

STEP 3: FOCUS ON USER NEEDS

It is actually much easier to concentrate on user needs with the computers switched off. Particularly in the museum, where immediately outside the office are a tonne of visitors getting on with their lives, interacting with our products and services, for better or worse. Since several of our projects involve large-scale transformation of museum technology, mapping out how the user need is achieved from the range of possible technologies is useful. This post on mapping out the value chain explains one method.

Mapping the value chain for donation technology

Whilst the resulting spider-web can be intimidating, it certainly helped identify some key dependencies like power and wifi (often overlooked in museum projects but then causing serious headaches down the line), as well as where extra resource would be needed in developing new services and designs that don’t yet come ‘off the shelf’.

STEP 4: DISCOVERING PRODUCT DISCOVERY

There is almost always one – or more like three – of our projects in the discovery phase at any one time, and this video from Teresa Torres on product discovery explains how to take the focus away from features and think more about outcomes, but also how to join the two in a methodical way – testing many solutions at once to analyse different ways of doing things.

We are a small multidisciplinary team, by which I mean we each need to take on several disciplines at once, from user research, data analysis, coding, system admin and content editing to online shop order fulfilment (yes, you heard that right). However, it is always interesting to hear from those who can concentrate on a single line of work. With resources stretched we can waste time going down the wrong route, but we can and do collaborate with others to experiment on new solutions. Our ongoing “student as producer” projects with the University of Bristol have been a great way for us to get insights in this way at low risk whilst helping to upskill a new generation.

STEP 5: GAMIFY THE PROBLEM

Some of the hardest problems are those involving potential conflict between internal teams. These are easier to ignore than to fix, and therefore won’t get fixed by business as usual; they just linger and fester, continuing to cause frustration.

Matt Locke explained it elegantly in MCG’s Museums+Tech 2018: the collaborative museum. And this got me thinking about how to attempt to align project teams that run on totally different rhythms and technologies. Last week I probably would have tried to build something in Excel or web-based tech that visualised resources over time, but no, not this week… this week I decided to use ducks!

Shooting ducks on a pinboard turned out to be a much easier way to negotiate resources and was quicker to prototype than any amount of coffee and coding (it’s also much easier to support 😉). It was also clear that Google Sheets or project charts weren’t going to cut it for this particular combination of teams, because each had its own way of doing things.

The challenge was to see how many weeks in a year would be available after a team had been booked for known projects. The gap analysis can be done at a glance – we can now discuss the blocks of free time for potential projects and barter for ducks, which is more fun than email crossfire. The problem has become a physical puzzle where the negative space (illustrated by red dots) is much more apparent than it was by cross-referencing data squares against calendars. It’s also taken the underlying agendas across departments out of the equation and helped us all focus on the problem by playing the same game – helping to synchronise our internal rhythms.

REMARKS

It may have come as a surprise for colleagues to see their digital people switch off and reach for analogue tools, kick back with pen and paper and start sketching or shooting ducks, but to be honest it’s been one of the most productive weeks in recent times, and we have new ideas about old problems.

Yes, many bugs still linger in the code, but rather than hunting every last one to extinction, with the benefit of a wider awareness of the needs of our users and teams, maybe we just switch things off and concentrate on building what people actually want?

Digital interpretation in our galleries: Discovery kick-off

Our temporary exhibitions have around a 20% conversion rate on average. While we feel this is good (temporary exhibitions are either paid entry or ‘pay what you think’, bringing in much-needed income), flip that around and it means that around 80% of people are visiting what we call our ‘permanent galleries’ – spaces that change much less often than exhibitions. With a million visitors every year across all of our sites (but concentrated at M Shed and Bristol Museum & Art Gallery), that’s a lot of people.

A lot of our time as a digital team is taken up with temporary exhibitions at M Shed and Bristol Museum. Especially so for Zahid, our Content Designer, who looks after all of our AV and whose time is taken up with installs, derigs and AV support.

But what about all of the digital interpretation in our permanent galleries? Focusing on the two main museums mentioned above, we’ve got a wide range of interp such as info screens, QR codes triggering content, audio guides and kiosks. A lot of this is legacy stuff which we don’t actively update, either in terms of content or software/hardware. Other bits are newer – things we’ve been testing out or one-off installs.

So, how do we know what’s working? How do we know what we should be replacing digital interp with when it’s come to the end of its life – *IF* we should replace it at all? How do we know where we should focus our limited time (and money) for optimal visitor experience?

We’ve just started some discovery phases to collate all of our evidence and to gather more. We want a bigger picture of what’s successful and what isn’t. We need to be clear on how we can be as accessible as possible. We want to know what tech is worth investing in (in terms of money and time) and what isn’t. This is an important phase of work for us which will inform how we do digital interpretation in the future – backed up by user research.

Discovery phases

We’ve set out a number of six-week stints from August 2018 to January 2019 to gather data, starting with an audit of what we have, our analytics and what evidence or data we collect.

We’ll then move on to looking at specific galleries – the Egypt Gallery at Bristol Museum and most of the galleries at M Shed, which have a lot of kiosks with legacy content. (The M Shed kiosks probably need a separate post in themselves. They were installed for when the museum opened in 2011, and since then technology and user behaviours have changed drastically. There’s a lot we could reflect on around design intentions vs reality vs content…)

We’ll also be gathering evidence on any audio content across all of our sites, looking at using our exhibitions online as interp within galleries and working on the Smartify app as part of the 5G testing at M Shed.

We’re using this Trello board to manage the project, if you want to follow what we’re doing.

Auditing our digital interpretation

First off, we simply needed to know what we have in the galleries. Our apprentice Rowan kindly went around and scoured the galleries, listing every single thing she could find – from QR codes to interactive games.

We then categorised everything, coming up with the below categories. This has really helped to give an overview of what we’re working with.

Key | Level of interaction | Examples | User control
1 | Passive | Auto play / looping video, static digital label, info screens | User has no control
2 | Initiate | QR code / URL to extra content, audio guide | User triggers content, mostly on own or separate device
3 | Active | Games and puzzles, timeline | User has complete control; device in gallery

We then went through and listed what analytics we currently gather for each item, or what action we need to take to set them up. Some things, such as info screens, are ‘passive’, so we wouldn’t gather usage data for them. Other things, such as games built with Flash and DiscoveryPENs (accessible devices for audio tours), don’t have in-built analytics, so we’ll need to ask our front of house teams to gather evidence and feedback from users. We’ll also be doing a load of observations in the galleries.

Now that people have devices in their pockets more powerful than a lot of the legacy digital interpretation in our galleries, should we be moving towards a focus on creating content for use on ‘BYO devices’ instead of installing tech on-site which will inevitably be out of date in a few short years? Is this a more accessible way of doing digital interpretation?

Let us know what you think or if you have any evidence you’re happy to share with us. I’d be really interested to hear back from museums (or any visitor attractions really) of varying sizes. We’ll keep you updated with what we find out.

Fay Curtis – User Researcher

Zahid Jaffer – Content Designer

Mark Pajak – Head of Digital

My Digital Apprenticeship with Bristol Culture

Hi! My name is Cameron Hill and I am currently working as a Digital Apprentice as part of the Bristol City Council Culture Team, where I’ll mainly be based at Bristol Museum and helping out with all things digital.

Cameron Hill

Prior to joining Bristol City Council, I studied Creative Media at SGS College for two years, as well as at school for GCSE. A huge interest of mine is social media. Whilst at college I worked with a friend, a fashion student, who sold her creations and wanted to build more of a brand for herself. After she came up with the name, I created an Instagram page for the brand and started creating various types of content. Using Instagram stories was a great way to interact with followers; with features such as Q&A and polls, it was easy to see what customers liked. Something else we did with stories was show the ‘behind the scenes’ – for example, picking the fabric, making the item itself and packing it to be shipped.

As I am writing this it is my first day and so far it has been a lot to take in. One of my first tasks was to upload an image to a folder linked to the various screens around the museum. 

Digital signage not working

Although technology can be temperamental, the first issue we came across was unexpected…

I was asked to take an image on my iPhone and upload it into the folder, but without me realising it the phone camera had ‘Live Photos’ turned on, meaning every picture taken would create a small video clip. After waiting five minutes or so with the image not appearing, we realised that the image had been taken in High Efficiency Image File Format (HEIC). Not knowing what HEIC was, I did what anyone in the twenty-first century would do and took to Google.

After a little research, I came across an article in the technology magazine The Verge stating that this format, which Apple added in iOS 11, would be a problem for PC users. From reading various articles online, it is clear that a lot of people have struggled when trying to upload their files to PCs, unable to view or edit them.
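
For anyone hitting the same problem today, converting HEIC files only takes a few lines of Python (assuming the third-party pillow-heif package, which wasn’t around at the time):

```python
# Convert an iPhone HEIC photo to a JPEG any PC can open. Assumes the
# third-party pillow-heif package: pip install pillow-heif
from PIL import Image
from pillow_heif import register_heif_opener

register_heif_opener()  # teaches Pillow to open .heic files

Image.open("IMG_0001.HEIC").convert("RGB").save("IMG_0001.jpg", quality=90)
```
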
I am really looking forward to my future working here as part of the Digital Team.

Integrating Shopify with Google Sheets (magic tricks made to look easy)

In Team Digital we like to make things look easy, and in doing so we hope to make life easier for people. A recent challenge has been how to recreate the ‘top sales by product’ analysis from the Shopify web application in Google Sheets, to see how the top 10 selling products compare month by month. Creating a monthly breakdown of product sales had until now been a manual task: choosing from a date picker, exporting data, copying to Google Sheets, and so on.

Having already had some success pushing and pulling data to Google Sheets using Google Apps Script and our Culture Data platform, we decided to automate the process. The goal was to simplify the procedure of getting the sales analysis into Google Sheets, making it as easy as possible for the user – all they should need to do is select the month they wish to import.

We have developed a set of scripts for extracting data using the Shopify API, but needed to decide how to get the data into Google Sheets. Whilst there is a library for pushing data from a Node application into a worksheet, our trials found it to be slow and prone to issues where the sheet did not have enough rows, among other unforeseen circumstances. Instead, we performed our monthly analysis on the Node server and saved it to a local database. We then built an API for that database that could be queried by shop and by month.

The next step, using Google Apps Script, was to query the API and pull in a month’s worth of data, then save this to a new sheet named after the month. This could then be added as a macro so that it was accessible from the toolbar – a familiar place for the user, at their command.

As the data is required on a monthly basis, we need to schedule the server-side analysis to save a new batch of data after each month – something we can easily achieve with a cron job. The diagram below shows roughly how the prototype works from the server side and the Google Sheets side. Interestingly, the figures don’t completely match the in-application analysis by Shopify, so we have some error checking to do. However, we now have the power to enhance the default analysis with our own calculations, for example incorporating the cost of goods into the equation to work out the overall profitability of each product line.
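
Our real implementation is a Node app, but the server-side analysis amounts to something like the following Python sketch (shop domain, access token and API version are placeholders):

```python
# A Python sketch of the server-side monthly analysis (our real
# implementation is a Node app). Shop domain, access token and API
# version below are placeholders, not our actual configuration.
from collections import Counter

import requests

SHOP = "example-shop.myshopify.com"   # placeholder shop domain
TOKEN = "shpat_placeholder"           # placeholder access token
API = f"https://{SHOP}/admin/api/2023-01"

def top_products(created_at_min: str, created_at_max: str, limit: int = 10):
    """Rank products by quantity sold across one month's orders."""
    resp = requests.get(
        f"{API}/orders.json",
        headers={"X-Shopify-Access-Token": TOKEN},
        params={
            "status": "any",
            "created_at_min": created_at_min,
            "created_at_max": created_at_max,
            "limit": 250,  # Shopify's per-page maximum
        },
    )
    resp.raise_for_status()
    counts = Counter()
    for order in resp.json()["orders"]:
        for item in order["line_items"]:
            counts[item["title"]] += item["quantity"]
    # A real job would follow the Link header to page through all orders,
    # then save the month's results to the local database behind our API.
    return counts.most_common(limit)

# Run once a month from cron, e.g. for June 2018:
print(top_products("2018-06-01T00:00:00Z", "2018-07-01T00:00:00Z"))
```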

QR codes and triggered content in museum spaces – in 2018

Any other museum digital people getting an influx of requests for QR codes to put in galleries recently? No? IS IT JUST US?!

After thinking that QR codes had died a death a few years ago, over the last few months we’ve had people from lots of different teams ask for QR codes to trigger content in galleries, for a variety of uses such as:

  • Sending people to content additional to what’s in an exhibition, to be used while in the gallery e.g. an audio guide
  • Showing the same content that’s in the exhibition but ‘just in case’ people want to look at it on their phones
  • Sending people to content that is referenced in exhibitions/galleries that needs a screen but doesn’t have an interactive e.g. a map on Know Your Place

After an attempt to fend them off, we realised that we didn’t really have any evidence that people don’t use them. At least nothing recent, or since the introduction of automatic QR code scanning with iOS 11 last year (thanks for that, Apple). So we thought we’d test it out, making sure we’re tracking everything and always providing a short URL for people to type into browsers as an alternative.
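
Generating the codes themselves is the easy part. Here is a sketch using the Python qrcode package, with hypothetical short URLs so that scans can be distinguished from typed-in visits in our analytics:

```python
# Generate gallery QR codes that point at short, tagged URLs so scans can
# be told apart from typed-in visits in analytics. URLs are hypothetical.
import qrcode

exhibits = {
    "tapestry-1": "https://example.org/go/tap1",  # short URL that redirects
    "tapestry-2": "https://example.org/go/tap2",  # with a qr campaign tag
}

for slug, url in exhibits.items():
    img = qrcode.make(url)      # returns a PIL image of the code
    img.save(f"{slug}-qr.png")  # printed next to the gallery label
```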

In most cases, it’s as expected and people just aren’t using them. They’re not using the URL alternatives either, though, which maybe suggests that people don’t really want to have to go on their phones to look at content and are happy with reading the interpretation in the gallery. Controversial, I know. (Or maybe we need to provide more appealing content.)

However, then we come to our recent Grayson Perry exhibition at Bristol Museum & Art Gallery, which had audio content which was ‘extra’ to what was in the exhibition. We provided headphones but visitors used their own devices. A key difference with this one though was that our front of house team facilitated use of the QR codes, encouraging visitors to use them and showing them what to do. As such, the six audio files (there was one with each tapestry on display) had 5,520 listens altogether over the course of the exhibition (March – June), over 900 each on average.

Whilst it’s great that they were used – it threw us a bit – the flip side is that this only happened where use was being facilitated. I’m not particularly keen on using something that we have to teach visitors how to use, and where we’re trying to change users’ behaviours.

There’s also more here around the crossover between online and gallery content (should we be using one thing for both, or are they different use cases that need to be separate?), which we’re talking about and testing more and more at the moment, but that’s one for another post.

We’d be really interested to hear your thoughts on triggered content. Do people even know what QR codes are? Are ‘just because we can and they’re no/low cost’ reasons enough to use them? How do you do triggered content? Is this unique to medium-sized museums or are the big and smaller guys grappling with this too? Or is it really just us?!

Bristol Museum & Art Gallery refit ChangeLog

Photo of newly refit shop at Bristol Museum & Art Gallery

After much planning, preparation and excitement, the week of 25-29th June 2018 saw the building of our shop refit at Bristol Museum & Art Gallery. It is the first time in our history that we’ve commissioned a specialist cultural heritage shop fitting firm, ARJ CRE8. It is the end of the week and many people have worked very long hours to rip out the old shop fittings and build us a shop that we can be proud of… and, most importantly, one that will increase profit.

The shop is complete and ready for customers on Saturday. We have a small snagging list and still need to do the visual merchandising properly, but this is scheduled for early next week. For now we just need to ensure 100% of products are available and nothing is missing or left in storage.

Today is a proud moment

Thank you to everybody who encouraged us throughout the week and/or lent a hand.  A special thanks also to Bristol Museums Development Trust who agreed to significantly contribute to the cost of the project. I can’t thank Andy, Jon and the team from ARJ CRE8 enough for their professionalism, problem solving ability and relentless cheerfulness!

Now let’s go out and prove you don’t need a stockroom…..hehe

ChangeLog

29th June 2018
  • 07:20-10:00 GO! GO! GO! Moved as many products as possible from storage to the shop and our holding space. Big thank you to the staff who volunteered some time to move stuff around
  • 07:45 – 17:00 Finished up adding doors to bays, shelving, lighting adjustments and painting
  • 11:00 accessories arrive from courier to enable visual merchandising of the shop
  • 12:00-16:30 a few of our international volunteers came to the rescue and helped us prepare shelving and get products out on the shelves.
  • 15:00-17:00 move the pop-up shop fittings back into the shop and setup the tills and digital signage
  • 15:30 sold to our first customer despite being technically closed! A visitor really wanted our Millerd’s Map so I showed him our new bay and we made the sale!
  • 17:00-18:30 vacuum, clean and move out any non-critical products and accessories
  • 18:31 Shop is ready to open Saturday morning
28th June 2018
  • 07:30-10:00 move stock from deep storage
  • 10:00-13:00 move bay units into position
  • 13:00-18:30 wire and light each bay, reconnect air-handling which appears to have been out of action for years, finish cutting ceiling tiles
  • 17:00-18:30 move products to outside shop ready for restocking Friday morning
27th June 2018
  • Build bay bases and measure out precise bay locations
  • Wire perimeter
  • Ordered accessories for displaying products
  • Wire networking to shop
  • Empty final waste to skip
26th June 2018
  • 07:30 Ceiling fitter arrives onsite to fit ceiling tiles on existing tracks. Quickly discovers that all the track is obsolete and needs to replace entire track
  • 08:00 Zak tears shirt moving pallet full of ceiling tiles
  • 08:30-10:00 set up pop up shop in front hall. Shop takes £496.35 gross during day
  • 08:30-11:00 Replace obsolete circuit board
  • 08:00-21:00 Continue work to perimeter walls. Edge of ceiling complete and 50% of ceiling track fitted
25th June 2018
  • 06:30 Skip arrives… in the wrong location… 2hr wait for it to be moved
  • 08:00 Contractors arrive and unload tools
  • 08:30 Contractor begins to gut existing shop walls and ceiling
  • 10:00 Retail team begin to review products for pop-up shop which will run 26-29th June
  • 09:00 Sparks begins to review wiring and remove old…discover circuit board is ancient so we get in Carters to assess and agree to replace on 26th
  • 10:00 Waste for skip removed to front of building and loaded into waiting skip
  • 14:05 [redacted!]
  • 14:15 Building Practice team called to assess wall
  • 15:00 Large lorry of 38 shop bays arrives and is unloaded
  • 16:00 Stone masons make wall safe by carefully taking the wall pillar apart without further damage; each stone is then stored
  • 17:00 Second large van arrives to deliver central bay units and small fittings which is unloaded
  • 17:45 Remaining waste loaded into van
  • 17:45 to 18:15 Clean up of route
  • 19:30 Evening private hire event starts
24th June 2018
  • Team of 6 empty all shop products and move to holding location
  • Old fittings e.g. shelving removed to storage or for recycling

The shop just hours before the refit to rip out the stockroom, install new bays and maximise the space