All posts by Mark Pajak

How to nail it in Team Digital by turning it off.

This post is about my recent week of reducing screen time to a minimum in search of a fresh approach, having lost the plot deep in some troublesome code, been overloaded by an email avalanche and pestered by projects going stale. In other words… have you tried turning it off (and not on again)?

STEP 1: TURN OFF PC

Guys, this is what a computer looks like when it is off.

Kinda feels better already. No more spinning cogs, no more broken code; the brain starts to think in more creative ways and the mind feels lighter. A trip to the stationery cupboard to stock up on Post-its and sticky things, and on the way I speak to a colleague whom I wouldn’t usually encounter, gaining an insight into the user-facing end of a project I am currently working on (I make a mental note of that).

STEP 2: RECAP ON AGILE METHODS

Agile Service Delivery concept
a great diagram about agile processes by Jamie Arnold

(admittedly you do need to turn the computer back on from here onwards, but you get the idea!)

The team here have just completed Scrum training and are tasked with scratching our heads over how to translate it to our own working practices. I was particularly inspired by this diagram and blog post by Jamie Arnold from GDS explaining how to run projects in an agile way. I am especially prone to wanting to see things in diagrams, a habit that tends to be suppressed by too much screen time 🙁

“a picture paints a thousand words.”

Also, for projects that have stalled or are for whatever reason on the back burner, a recap on (or even retrospective creation of) the vision and goals can help you remember why they were on the agenda in the first place, and whether they still should be.

STEP 3: FOCUS ON USER NEEDS

It is actually much easier to concentrate on user needs with the computers switched off – particularly in the museum, where immediately outside the office are a tonne of visitors getting on with their lives, interacting with our products and services for better or worse. Since several of our projects involve large-scale transformation of museum technology, mapping out how the user need is achieved from the range of possible technologies is useful. This post on mapping out the value chain explains one method.

Mapping the value chain for donation technology

Whilst the resulting spider-web can be intimidating, it certainly helped identify some key dependencies like power and wifi (often overlooked in museum projects, but then causing serious headaches down the line), as well as where extra resource would be needed to develop new services and designs that don’t yet come ‘off the shelf’.

STEP 4: DISCOVERING PRODUCT DISCOVERY

There is almost always one – or more like three – of our projects in the discovery phase at any one time, and this video from Teresa Torres on product discovery explains how to take the focus away from features and think more about outcomes, but also how to join the two in a methodical way – testing many solutions at once to analyse different ways of doing things.

We are a small multidisciplinary team – by which I mean we each need to take on several disciplines at once: user research, data analysis, coding, system admin, content editing, online shop order fulfilment (yes, you heard that right) and so on. However, it is always interesting to hear from those who can concentrate on a single line of work. With resources stretched we can waste time going down the wrong route, but we can and do collaborate with others to experiment with new solutions. Our ongoing “student as producer” projects with the University of Bristol have been a great way to gain insights at low risk whilst helping to upskill a new generation.

STEP 5: GAMIFY THE PROBLEM

Some of the hardest problems are those involving potential conflict between internal teams. These are easier to ignore than fix, and therefore won’t get fixed by business as usual; they just linger and manifest, continuing to cause frustration.

Matt Locke explained it elegantly at MCG’s Museums+Tech 2018: the collaborative museum, and it got me thinking about how to align project teams that run on totally different rhythms and technologies. Last week I probably would have tried to build something in Excel or web-based tech to visualise resources over time, but no, not this week… this week I decided to use ducks!

Shooting ducks on a pinboard turned out to be a much easier way to negotiate resources and was quicker to prototype than any amount of coffee and coding (it’s also much easier to support 😉). It was also clear that Google Sheets and project charts weren’t going to cut it for this particular combination of teams, because each team had its own way of doing things.

The challenge was to see how many weeks in a year would be available after a team had been booked for known projects. The gap analysis can be done at a glance – we can now discuss the blocks of free time for potential projects and barter for ducks, which is more fun than email crossfire. The problem has become a physical puzzle where the negative space (illustrated by red dots) is much more apparent than it was when cross-referencing data squares against calendars. It’s also taken the underlying agendas across departments out of the equation and helped us all focus on the problem by playing the same game – helping to synchronise our internal rhythms.

REMARKS

It may have come as a surprise for colleagues to see their digital people switch off and reach for analogue tools, kick back with pen and paper and start sketching or shooting ducks, but to be honest it’s been one of the most productive weeks in recent times, and we have new ideas about old problems.

Yes, many bugs still linger in the code, but rather than hunting every last one to extinction, with the benefit of a wider awareness of the needs of our users and teams, maybe we just switch things off and concentrate on building what people actually want?

Integrating Shopify with Google Sheets (magic tricks made to look easy)

In Team Digital we like to make things look easy, and in doing so we hope to make life easier for people. A recent challenge has been how to recreate the “Top sales by product” analysis from the Shopify web application in Google Sheets, to see how the top ten selling products compare month by month. Until now, creating a monthly breakdown of product sales had been a manual task of choosing from a date picker, exporting data, copying it into Google Sheets, and so on.

Having already had some success pushing and pulling data to Google Sheets using Google Apps Script and our Culture Data platform, we decided to automate the process. The goal was to make getting the sales analysis into Google Sheets as easy as possible for the user – all they should need to do is select the month they wish to import.

We have developed a set of scripts for extracting data using the Shopify API, but needed to decide how to get the data into Google Sheets. Whilst there is a library for pushing data from a Node application into a worksheet, our trials found it slow and prone to issues where the sheet did not have enough rows, among other unforeseen circumstances. Instead, we performed our monthly analysis on the Node server and saved it to a local database. We then built an API for that database that could be queried by shop and by month.
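That API is only a few lines – a minimal sketch of its shape (Express and the MongoDB driver assumed; database and field names are illustrative, not our production code):

```js
// Serve pre-computed monthly sales analyses by shop and month.
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();

MongoClient.connect('mongodb://localhost:27017').then((client) => {
  const analyses = client.db('culturedata').collection('monthly_sales');

  // e.g. GET /api/sales/museum-shop/2018-06
  app.get('/api/sales/:shop/:month', async (req, res) => {
    const rows = await analyses
      .find({ shop: req.params.shop, month: req.params.month })
      .toArray();
    res.json(rows);
  });

  app.listen(3000);
});
```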

The next step, using Google Apps Script, was to query the API and pull in a month’s worth of data, then save it to a new sheet named after the month. This could then be added as a macro so that it was accessible to the user from the toolbar – a familiar place, at their command.
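In outline the script looks something like this (the endpoint URL and column names are illustrative):

```js
// Google Apps Script: fetch a month's analysis and write it to a new sheet.
function importMonth(month) {
  var url = 'https://example.org/api/sales/museum-shop/' + month; // hypothetical endpoint
  var rows = JSON.parse(UrlFetchApp.fetch(url).getContentText());

  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.insertSheet(month); // one sheet per month

  sheet.appendRow(['name', 'sku', 'amount_sold']); // header row
  rows.forEach(function (r) {
    sheet.appendRow([r.name, r.sku, r.amount_sold]);
  });
}
```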

As the data is required on a monthly basis, we need to schedule the server-side analysis to save a new batch of data after each month – something we can easily achieve with a cron job. The diagram below shows roughly how the prototype works on the server side and the Google Sheets side. Interestingly, the figures don’t completely match the in-application analysis by Shopify, so we have some error checking to do; however, we now have the power to enhance the default analysis with our own calculations, for example incorporating the cost of goods into the equation to work out the overall profitability of each product line.
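For the scheduling itself, a plain crontab entry or a Node scheduler both do the job – a sketch using the node-cron library (module name and timing illustrative):

```js
// Re-run the monthly analysis at 02:00 on the 1st of each month.
const cron = require('node-cron');
const { runMonthlyAnalysis } = require('./analysis'); // hypothetical module

cron.schedule('0 2 1 * *', () => runMonthlyAnalysis());
```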

Preserving the digital

From physical to digital to…?

At Bristol Culture we aim to collect, preserve and create access to our collections for use by present and future generations. We are increasingly dealing with digital assets amongst these collections – from photographs of our objects, to scans of the historical and unique maps and plans of Bristol, to born-digital creations such as 3D scans of our Pliosaurus fossil. We are also collecting new digital creations in the form of video artwork.

Photo credit Neil McCoubrey

One day we won’t be able to open these books because they are too fragile – digital will be the only way we can access this unique record of Bristol’s history, so digital helps us preserve the physical and provides access. Inside are original plans of Bristol’s most historic and well-known buildings, including the Bristol Hippodrome, which require careful unfolding and digital stitching to reproduce the image of the full drawing inside.

Plans of the Hippodrome, 1912. © Bristol Culture

With new technology come new opportunities to explore our specimens, and this often means having to work with new file types and new applications to view them.

This 3D scan of our Pliosaurus jaw allows us to gain new insights into the behaviour and biology of this long-extinct marine reptile.

Horizon © Thompson & Craighead

This digital collage by Thompson & Craighead features streaming images from webcams in the 25 time zones of the world. The work comes with a Mac mini and a USB drive in an archive box and can be projected or shown on a 42″ monitor. Bristol Museum is developing its artist film and video collection and now holds 22 videos by artists including Mariele Neudecker, Wood and Harrison, Ben Rivers, Walid Raad and Emily Jacir, ranging from documentary to structural film, performance, web-based film and video and animation, in digital, video and analogue film formats, with accompanying installations.

What could go wrong?

So digital assets are helping us conserve our archives, explore our collections and experience new forms of art, but how do we look after those assets for future generations?

It might seem like we don’t need to worry about that now, but as time goes by there is constant technological change: hardware becomes unusable or non-existent, software changes, and the very 1s and 0s that make up our digital assets are prone to deterioration by a process known as bit rot. Additionally, just as is the case for physical artefacts, the information we know about them, including provenance and rights, can become dissociated. What’s more, digital assets can and must multiply, move and adapt to new situations, new storage facilities and new methods of presentation. Digital preservation is the combination of procedures, technology and policy that we can use to help prevent these risks from rendering our digital repository obsolete. We are currently in the process of upskilling staff and reviewing how we do things so that we can be sure our digital assets are safe and accessible.

Achieving standards

It is clear we need to develop and improve our strategy for dealing with these potential problems, and that this strategy should underpin all digital activity whose output we wish to preserve and keep. To that end, staff at Bristol Archives, alongside Team Digital and Collections, got together to write a digital preservation policy and roadmap to ensure that preserved digital content can be located, rendered (opened) and trusted well into the future.

Our approach to digital preservation is informed by guidance from national organisations and professional bodies including The National Archives, the Archives & Records Association, the Museums Association, the Collections Trust, the Digital Preservation Coalition, the Government Digital Service and the British Library. We will aim to conform to the Open Archival Information System (OAIS) reference model for digital preservation (ISO 14721:2012). We will also measure progress against the National Digital Stewardship Alliance (NDSA) levels of digital preservation.

A safe digital repository

We use EMu for our digital asset management and collections management systems. Any multimedia uploaded to EMu is automatically given a checksum, which is stored in the database record for that asset. This means that if for any reason the file should change or deteriorate (unlikely, but the whole point of digital preservation is to have a mechanism to detect it if it happens), the new checksum won’t match the old one and we can identify the changed file.

Due to the size of the repository, which is currently approaching 10 TB, it would not be practical to do this manually, so we use a scheduled script to pass through each record and generate a new checksum to compare with the original. The trick is to make sure that the whole repository gets scanned in time for the next backup period, because otherwise any missing or degraded files would become the backup and obscure the original. We also need a working relationship with our IT providers and an agreed procedure to rescue any lost files if this happens.
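The heart of that script is small – a minimal sketch in Node.js (MD5 is shown for illustration; the hash algorithm and record fields depend on the EMu setup):

```js
// Stream a file through a hash and compare with the stored checksum.
const crypto = require('crypto');
const fs = require('fs');

function fileChecksum(path) {
  return new Promise((resolve, reject) => {
    const hash = crypto.createHash('md5');
    fs.createReadStream(path)
      .on('data', (chunk) => hash.update(chunk))
      .on('end', () => resolve(hash.digest('hex')))
      .on('error', reject);
  });
}

// Flag any file whose current checksum no longer matches the stored one.
async function verify(record) {
  const current = await fileChecksum(record.path);
  if (current !== record.storedChecksum) {
    console.warn('Checksum mismatch - restore from backup:', record.path);
  }
}
```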

With all this in place, we know that what goes in can come back out in the same state – so far so good. But what we can’t control is the constant change in technology for rendering files – how do we know that the files we are archiving now will be readable in the future? The answer is that we don’t, unless we can migrate from out-of-date file types to new ones. A quick analysis of all records tagged as ‘video’ shows the following diversity of file types:

(See the stats for images and audio here.) The majority are MPEG or AVI, but there is a tail end of various file types which may be less common, and we’ll need to consider whether these should remain in their current format or be converted to a new video format.

Our plan is to make gradual improvements in our documentation and systems in line with the NDSA to achieve level 2 by 2022:

The following dashboard gives an idea of where we are currently in terms of file types and the rate of growth:

Herding digital sheep

It’s all very well having digital preservation systems in place, but staff culture and working practices must also change and integrate with them.

The digitisation process can involve lots of stages and create many files

In theory, all digital assets should line up and enter the digital repository in an orderly and systematic manner. However, we all know that in practice things aren’t so straightforward.

Staff involved in digitisation and quality control need the freedom to work with files in the applications and hardware they are used to, without being hindered by rules and convoluted ingestion processes. They should be allowed to work in a messy (to outsiders) environment, at least until the assets are finalised. There are also many other environmental factors that affect working practices, including rights issues, time pressures from exhibition development, and the skills and tools available to get the job done. By layering on new limitations in the name of digital preservation, we risk designing a system that won’t be adopted, as illustrated in the following tweet by @steube:

So we’ll need to think carefully about how we implement any new procedures that may increase the workload of staff. Ideally, we’ll be able to reduce the time staff spend moving files around by using designated folders for multimedia ingestion – these would be visible to the digital repository and act as “dropbox” areas which are automatically scanned, with any files automatically uploaded and then deleted. For this process to work, we’ll need to name files carefully so that once uploaded they can be digitally associated with the corresponding catalogue records created as part of any inventory project. Having a 24-hour ingestion routine would solve many of the complaints we hear from staff about waiting for files to upload to the system.
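A watched-folder sketch of that idea (assuming a file-watching library such as chokidar; uploadToEmu is a hypothetical wrapper around the ingestion step):

```js
// Watch a designated "dropbox" folder, ingest new files, then remove them.
const chokidar = require('chokidar');
const fs = require('fs').promises;
const uploadToEmu = require('./upload'); // hypothetical ingestion wrapper

chokidar.watch('/ingest/photography', { ignoreInitial: true })
  .on('add', async (path) => {
    await uploadToEmu(path); // file name links it to its catalogue record
    await fs.unlink(path);   // delete only once safely ingested
  });
```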

Automation can help, but will need a human element to clean up any anomalies

Digital services

Providing user-friendly online services is a principle we strive for at Bristol Culture – and access to our digital repository for researchers, commercial companies and the public is something we need to address.

We want to be able to recreate the experience of browsing an old photo album using gallery technology. This interactive, featured in Empire Through the Lens at Bristol Museum, uses the Turn.js open source library to simulate page turning on a touchscreen.

Visitors to the search room at Bristol Archives have access to the online catalogue, as well as knowledgeable staff to help them access the digital material. This system relies on having structured data in the catalogue, and scripts which can extract the data and multimedia and package them up for the page-turning application.

But we receive enquiries and requests from people all over the world, in some cases from different time zones, which makes communication difficult. We are planning to improve the online catalogue to allow better access to the digital repository, and to link this up to systems for requesting digital replicas. There are so many potential uses and users of the material that we’ll need to undertake user research into how best to make it available and in what form.

Going digital with our Exhibition Scheduling Timeline

developing a digital timeline for scheduling exhibitions

BACKGROUND

Having a visual representation of upcoming exhibitions, works and major events is important in the exhibition planning process. Rather than relying on spotting clashing dates in lists of data, having a timeline spread out horizontally allows for faster cross-checking and helps us collaboratively decide how to plan exhibition installs and derigs.

Until recently we had a system that used Excel to plan out this timeline – by merging cells and colouring horizontally, it was possible to construct a timeline manually. Apart from the pure joy that comes from printing anything from Excel, this method had a number of limitations:

  • When dates changed, the whole thing needed to be rejigged.
  • Everyone who received a printed copy at meetings stuck it to the wall, so date changes were hard to communicate.
  • We need to see the timeline over different scales – short term and long term – which meant using two separate Excel tabs and duplicating effort.
  • We were unable to apply any permissions.
  • The data was not interoperable with other systems.

TIMELINE SOFTWARE (vis.js)

Thanks to Almende B.V. there is an open source timeline code library available at visjs.org/docs/timeline, which offers a neat solution to the manual task of recasting the timeline with creative Excel skills each time. We already had a database of exhibition dates following our digital signage project, so this was the perfect opportunity to reuse that data – it should be the most up-to-date version of planned events, as it is what we display to the public in our venues.

IMPLEMENTATION

The digital timeline was implemented using the MEAN stack and combines data feeds from a variety of sources. In addition to bringing in data for agreed exhibitions, we wanted a flexible way to add installations, derigs and other notes, so a new database on the Node server combines these dates with the exhibitions data. We can assign permissions to different user groups using some open source authentication libraries, which means we can now release the timeline to staff not involved in exhibitions, and also let various teams add and edit their own specific timeline data.

The great thing about vis is the ease of manipulating the timeline – users are able to zoom in and out, and move backwards and forwards in time, using mouse, arrow keys or touch/pinch gestures.
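Getting a first timeline on screen takes very little code – a minimal sketch (gallery and exhibition names are made up):

```js
// One group per gallery, one block per exhibition or install/derig period.
var groups = new vis.DataSet([
  { id: 'gallery-1', content: 'European Old Masters' },
  { id: 'gallery-2', content: 'Front Hall' }
]);

var items = new vis.DataSet([
  { id: 1, group: 'gallery-1', content: 'Exhibition A',
    start: '2018-09-01', end: '2019-01-06' },
  { id: 2, group: 'gallery-2', content: 'Install',
    start: '2018-08-20', end: '2018-08-31' }
]);

var timeline = new vis.Timeline(
  document.getElementById('timeline'),
  items,
  groups,
  { editable: true } // enables dragging blocks to adjust dates
);
```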

Zoomed out view for the bigger picture
Zoomed in for the detail…

EMU INTEGRATION

The management of information surrounding object conservation, loans and movements is fundamental to successful exhibition development and installation. As such, we maintain a record of exhibition dates in EMu, our collections management software. The EMu events module is used to record when exhibitions take place, as well as the object list where curators select and deselect objects for exhibition. Using the EMu API we are able to extract a structured list of exhibition information for publishing to the digital timeline.

HOW OUR TIMELINE WORKS

Each gallery or public space has its own horizontal track where exhibitions are published as blocks. These are grouped into our five museum and archive buildings, which can be selected and deselected to cross-reference the timeline. Once logged in, a user is able to manually add new blocks to the timeline, pre-set to “install”, “derig” or “provisional date”. Once a block is added, our exhibitions team are able to add notes that are accessible by clicking the block. It is also possible to reorder and adjust dates by clicking and dragging.

IMPACT

The timeline now means everyone has access to an up-to-date picture of upcoming exhibition installations, so no one is out of date. The timeline is on a public platform and is mobile accessible, so staff can use it on the move, in galleries or at home. Less time is spent on creative Excel manipulation and more on spotting errors. It has also made scheduling meetings more dynamic, allowing better cross-referencing and movement to different positions in time. An unexpected effect is that we keep spotting more uses for the solution, and we are currently investigating using it to book rooms and resources. There are some really neat things we can do, such as importing a data feed from the timeline back into our MS Outlook calendars (“oooooh!”). The addition of the thumbnail pictures used to advertise exhibitions has been a favourite feature among staff and really helps give an instant impression of current events, since it reinforces the exhibition branding which people are already familiar with.

ISSUES

It is far from perfect! Several iterations were needed to develop the drag-and-drop feature for adding events. We are also reaching diminishing returns in terms of performance – with more and more data available to plot, the web app is performing slowly and could do with further optimisation to improve speed. And because of our IT infrastructure, many staff use Internet Explorer; whilst the timeline works OK, many features are broken on this browser without changes to compatibility and caching settings.

WHAT’S NEXT

Hopefully optimisation will improve performance and then it is full steam ahead with developing our resource booking system using the same framework.

How we did it: automating the retail order forms using Shopify.

*explicit content warning* this post makes reference to APIs.

THE PROBLEM: Having set ourselves the challenge of improving the buying process, our task in Team Digital was to figure out where we could do things more efficiently and smartly. Thanks to our implementation of Shopify we have no shortage of data on sales to help with this; however, the process of gathering the information required to place a stock order is time consuming – retail staff need to manually copy and paste machine-like product codes, look up supplier details and compile fresh order forms each time, all the while taking attention away from what really matters, i.e. which products are currently selling, and which are not.

In a nutshell, the problem can be addressed by creating a specific view of our shop data – one that combines the cost of goods with the inventory quantity (the amount of stock left), factors in a specific period of time, and can be combined with supplier information so we know who to order each top-selling product from without having to look anything up. We were keen to get into the world of Shopify development, and thanks to the handy Shopify developer programme documentation and API help it was fairly painless to get a prototype up and running.

SETTING UP: We first had to understand the difference between public and private apps with Shopify. A private app can be hard-coded to speak to a specific shop, whereas public apps need to be able to authenticate on the fly to any shop. With this in mind, we felt a private app was the way to go, at least until we know it works!

Following this, and armed with the various passwords and keys needed to programmatically interact with our store, the next step was to develop a query to give us the data we need, and then to automate the process and present the results in a meaningful way. By default Shopify provides its data as JSON, which is nice, if you are a computer.

TECHNICAL DETAILS: We set up a cron job on an AWS virtual machine running Node and MongoDB, using the MEAN stack framework and some open source libraries to integrate with Google Sheets and, notably, to handle asynchronous processes in a tidy way. If you’d like to explore the code, that’s all here. In addition to the scheduled tasks, we also built an AngularJS web client which allows staff to run reports manually and to change some settings.

Which translates as: In order to process the data automatically, we needed a database and computer setup that would allow us to talk to Shopify and Google Docs, and to run at a set time each day without human intervention.

The way that Shopify works means we couldn’t develop a single query to do the job in one go, as you might in SQL (a traditional database language). There are also limits on how many times you can query the store. What emerged from our testing was a series of steps – an algorithm which performs multiple data extractions and recombinations – which I’ll attempt to describe here. P.S. do shout if there is an easier way to do this ;).

STEP 1: Get a list of all products in the store. We’ll need these to know which supplier each product comes from, and the product types might help in further analysis.

STEP 2: Combine results of step one with the cost of goods. This information lives in a separate app and needs to be imported from a csv file. We’ll need this when we come to build our supplier order form.

STEP 3: Get a list of all orders within a certain period. This bit is the crucial factor in understanding what is currently selling. Whilst we do this, we’ll add in the data from the steps above so we can generate a table with all the information we need to make an order.

STEP 4: Count how many sales of each product have taken place. This converts our list of individual transactions into a list of products with a count of sales. It uses the MongoDB aggregation pipeline and is what turns our raw data into something more meaningful. It looks a bit like this (just so you know):
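A sketch of the kind of pipeline involved (field names are illustrative rather than our exact query):

```js
// Group order line items by product variant and count units sold.
var startOfPeriod = new Date(Date.now() - 60 * 24 * 3600 * 1000); // e.g. last 60 days

db.transactions.aggregate([
  { $match: { created_at: { $gte: startOfPeriod } } }, // orders in the window
  { $unwind: '$line_items' },                          // one document per item sold
  { $group: {
      _id: '$line_items.variant_id',
      name: { $first: '$line_items.name' },
      amount_sold: { $sum: '$line_items.quantity' }
  } },
  { $sort: { amount_sold: -1 } }                       // top sellers first
]);
```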

STEP 5: Add the data to a Google Sheet. What luck – there is some open source code which we can use to hook our Shopify data up to Google. A few steps are needed for the Google Sheet to talk to our data – we basically have our server act as a Google user and share editing access with him, or her? And while we are beginning to personify this system, we are calling it ‘Stockify’ – the latest member of Team Digital – although Zak prefers the lofty moniker Dave.

The result is a table of top-selling products in the last x days, with x being a variable we can control. The whole process takes quite a few minutes, especially if x > 60, due to limitations in each integration – you can only add one new line to a Google Sheet per second, and there are over 500 lines. The great thing about our app is that he/she doesn’t mind working at night or early in the morning, on weekends, or at other times when retail managers probably shouldn’t be looking at sales stats, but probably are. With Stockify/Dave scheduled for 7am each morning, we know that when staff look at the data to do the ordering, it will be an up-to-date assessment of the last 60 days’ worth of sales.

We now have the following columns in our Google Sheet; some have come directly from their corresponding Shopify table, whereas others have been calculated on the fly to give us a unique view of our data, one we can gain new insights from.

  • product_type: (from the product table)
  • variant_id: (one product can have many variants)
  • price: (from the product table)
  • cost_of_goods: (imported from a csv)
  • order_cost: (cost_of_goods * amount sold)
  • sales_value: (price * amount sold)
  • name: (from the product table)
  • amount sold: (transaction table compared to product table / time)
  • inventory_quantity: (from the product table)
  • order_status: (if inventory_quantity < amount sold /time)
  • barcode: (from the product table)
  • sku: (from the product table)
  • vendor: (from the product table)
  • date_report_run: (so we know if the scheduled task failed)

TEST, ITERATE, REFINE: For the first few iterations it failed some basic sense checking – not enough data was coming through. This turned out to be because we were running queries faster than the Shopify API would supply the data, so transactions were missing. We fixed this with some loopy code, and now we are in the process of tweaking the period of time we wish to analyse – too short and we miss some important items; for example, if a popular book hasn’t sold in the last x days, it might not be picked up in the sales report. We also need to factor in things like half term, Christmas and other festivals such as Chinese New Year, which Stockify/Dave can’t predict. Yet.
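The “loopy code” boils down to paging through the results no faster than the rate limit allows – a sketch (axios assumed; endpoint and parameters simplified):

```js
// Fetch all orders page by page, pausing so we stay under the rate limit.
const axios = require('axios');
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fetchAllOrders(baseUrl) {
  const orders = [];
  for (let page = 1; ; page++) {
    const { data } = await axios.get(baseUrl + '/admin/orders.json', {
      params: { limit: 250, page: page }
    });
    if (data.orders.length === 0) break; // no more pages
    orders.push(...data.orders);
    await sleep(600); // roughly two calls per second
  }
  return orders;
}
```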

AUTOMATIC ORDER FORMS: To help staff compile the order form, we used our latest Google-Sheet-fu – a combination of pick lists, named ranges and the QUERY function to look up all products tagged with a status of “Re-order”.

A list of suppliers appears on the order form template:

and then this formula looks up the products for the chosen supplier and populates the order table:

=QUERY(indirect("last_60_days"&"!"&"11:685"),"select G where M='"&$B2&"' and J='re-order'")

The trick is for our app to check whether the inventory quantity is less than the quantity sold in the last x days, in which case the product goes on the order form.

NEXT STEPS: Oh, we’re not done yet! With each step into automation we take, another possibility appears on the horizon… here are some questions we’ll be asking our system in the coming weeks:

  • How many products have not sold in the last x days?
  • If the product type is books, can we order more when the inventory quantity goes below a certain threshold?
  • Even if a particular product has not sold in the last 60 days, can we flag its product type anyway so it gets added to our automatic order form?
  • While we are at it, do we need to look up supplier email addresses each time – can’t we just have them appear by magic?

…furthermore, we need to integrate this data with our CRM… looks like we will be busy for a while longer.

Digital Curating Internship – an update

By David Wright (Digital Curating Intern, Bristol Culture)

Both Macauley Bridgman and I are now into week six of our internship as Digital Curating Assistants here at Bristol Culture (Bristol Museums). At this stage we have taken part in a wide array of projects which have provided us with invaluable experience as History and Heritage students (a discipline that combines the study of history with its digital interpretation) at the University of the West of England. We have now been on several different tours of the museum, both front of house and behind the scenes – most notably our store tour with Head of Collections Ray Barnett, which gave us an insight into issues facing curators nationwide, such as conservation techniques, museum pests and the different methods of both utilising and presenting objects from the museum’s collection.

pic from stores

In addition, we were invited to a presentation by the International Training Programme, in which Bristol Museums is a partner alongside the British Museum. Presentations were given by Ntombovuyo Tywakadi, Collections Assistant at Ditsong Museum (South Africa), followed by Wanghuan Shi, Project Co-ordinator at Art Exhibitions China, and Ana Sverko, Research Associate at the Institute of Art History (Croatia). All three visitors discussed their roles within their respective institutions and provided us with a unique insight into curating around the world. We found the presentations insightful and thought-provoking as we entered a Q&A centred on the restrictions and limitations of historical presentation in different nations.

Alongside these experiences, we have also taken on multiple projects for various departments around the museum as part of our cross-disciplinary approach to digital curating.

Our first project involved working with Natural Sciences Collections Officer Bonnie Griffin to photograph, catalogue and conserve natural history specimens in the store. This was a privileged assignment which we have perhaps found the most enjoyable: the first-hand curating experience and close access to both highly experienced staff and noteworthy artefacts were inspiring in relation to our respective future careers.

David Wright – Digital Curating Intern

Following on from this, we undertook a project assigned by Lisa Graves, Curator for World Cultures, to digitise the outdated card index system for India. The digital outcome will hopefully see use in an exhibition next year to celebrate the seventieth anniversary of Indian independence during the UK-India Year of Culture. At times we found this work somewhat tedious and frustrating; however, upon completion we came to recognise the immense significance of digitising museum records, both for preserving information for future generations and for the increased potential such records provide for future use and accessibility.

We have now fully immersed ourselves in our main Bristol Parks project, which aims to explore processes by which the museum’s collections can be recorded and presented through geo-location technology. For the purposes of this project we have limited our exploration to well-known local parks, namely Clifton and Durdham Downs, with the aim of creating a comprehensive catalogue of records geo-referenced to precise sites within the area. With the proliferation of online mapping tools, this is an important time for the museum to analyse how it records object provenance, and having mappable collections makes them suitable for inclusion in a variety of new and exciting platforms – watch this space! As part of this, we have established standardised procedures for object georeferencing which can be replicated for future ventures and areas. Our previous projects for other departments provided the foundation for us to explore and critically analyse contemporary processes and experiment with new ways to create links between objects within the museum’s collections.

id cards

As the saying goes, “time flies when you are having fun”, and this has certainly been true of our experience to date. We are now in our final two weeks here at the museum and our focus is now fervently on completing our Bristol Parks project.

Digital Curating Internship

We are currently university students at UWE (University of the West of England) studying History with Heritage, the first students on this programme of study. We have been given the fantastic opportunity to work with the digital team at Bristol Culture – which runs the various museums and heritage sites in and around Bristol – on its first digital curating internship. This fully complements what we have studied, and continue to study, in our degrees, and will allow us to put it into practical use.

Over the course of the next eight weeks we will be working alongside various departments, collections and projects, offering us a unique insight into the heritage industry.

What does digital curating mean to us?

For us, digital curation is the future of 21st-century museology, the implementation and development of which brings four significant benefits:

• Democratisation of information reduces barriers to entry.
• It increases the potential use of collections.
• It stimulates further research.
• It widens community engagement to ever greater and more diverse audiences.

As fantastic as these systems can be, there is still room for further advancement. We have already learnt in our short time here that the issues include inconsistencies across departments, collection backlogs, dirty data, and the lack of secure sharing of detailed information between institutions. Despite these hurdles, the drive to expand and improve digital curation continues, with great hope for what can be achieved in this field.

Expectations for the role:

Through this role we aim to:

• Engage with and critique existing cataloguing methods and SPECTRUM-standard systems such as EMu.
• Develop strategies for increasing engagement with both collections and institutions.
• Develop the necessary skills and experience to pursue a career within the heritage industry.
• Work closely and network with a variety of heritage professionals within the South West.

We both look forward to expanding our knowledge and experience, and eagerly anticipate what this internship has in store over the next eight weeks.

Google Drive for Publishing to Digital Signage

Having taken an agile development approach to our digital screen technology, it has been interesting to watch the various elements emerge based on our current needs. Lately there has been a need for quick ways to push posters and images to the screens for private events and one-off occasions.

Due to the complexity of the various modes, and the intricacies of events-based data and automatic scheduling, it has been difficult to incorporate these needs into the system. Our solution was to use Google Drive as a means to override the screens with temporary content. This means our staff can manage content for private events using tablets and mobile devices, and watch the updates push through in real time.

The pathway of routes now looks like this:


HOW?

There are two main elements to the override process. Firstly, we are using BackboneJS as the application framework, because it provides a routing structure that controls the various signage modes. We added a new route at the beginning of the process to check for content added to Google Drive – if there is no content, the signs follow their normal modes of operation.
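In outline, the routing looks something like this (a sketch only – route and handler names are made up, and the published script URL is a placeholder):

```js
// Check Google Drive for override content before falling back to normal modes.
var SignageRouter = Backbone.Router.extend({
  routes: {
    '': 'checkOverride',
    'events': 'eventsMode',
    'posters': 'posterMode'
  },

  checkOverride: function () {
    var router = this;
    $.getJSON(DRIVE_SCRIPT_URL, function (content) { // published Apps Script URL
      if (content.length > 0) {
        router.showOverride(content); // temporary content found in Drive
      } else {
        router.navigate('events', { trigger: true }); // business as usual
      }
    });
  },

  eventsMode: function () { /* normal signage behaviour */ },
  posterMode: function () { /* poster loop */ },
  showOverride: function (content) { /* render the Drive content */ }
});
```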

Google Drive Integration

Google provide a nice way to publish web services, hidden amongst the script editor inside Google Sheets. We created a script that loops through a Drive directory and publishes a list of its contents as JSON – you can see the result of that script here. By making the directory public, any images we load into the Drive are picked up by the script, and the screens check the script for new content regularly. The good thing about this is that we can add content to specially named folders – if a folder name matches either the venue or the specific machine name, all targeted screens will start showing that content.
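A sketch of such a script (the folder ID is a placeholder and the output fields are illustrative):

```js
// Google Apps Script: publish the contents of a Drive folder as JSON.
function doGet() {
  var root = DriveApp.getFolderById('FOLDER_ID'); // the public signage folder
  var result = [];

  var folders = root.getFolders(); // one sub-folder per venue or machine name
  while (folders.hasNext()) {
    var sub = folders.next();
    var files = sub.getFiles();
    while (files.hasNext()) {
      var file = files.next();
      result.push({ target: sub.getName(), name: file.getName(), id: file.getId() });
    }
  }

  return ContentService.createTextOutput(JSON.stringify(result))
    .setMimeType(ContentService.MimeType.JSON);
}
```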


It seems that this form of web hosting will be deprecated in Google Drive at the end of August 2016, but the links we are using to get the images might still work. If not, we can find a workaround – possibly by listing URLs to content hosted elsewhere in the Google Sheet and looking those up.

The main benefit of this solution is being able to override the normal mode of operation using Google Drive on a mobile device. This even works with video – we added some more overrides so that poster mode doesn’t loop to the next slide until the video has finished, as video brings several timing issues into digital signage. One problem with hosting via Google Drive is that files over 25MB don’t work, due to Google’s antivirus-check warning which prevents the files from being released.

We’ll wait to see if this new functionality gets used – and if it is still reliable after August 2016. In fact, this mode might be usable on its own to manage other screens around the various venues which until now were not updatable. If successful, it will vastly reduce the need to run around with memory sticks before private events – and hopefully let us spend more time generating the wonderful content that the technology is designed to publish for our visitors.

You can download the latest release and try it for yourself here.

Digital Object Labels

At Bristol Museums we use EMu to manage digital interpretation, and we have several galleries with touchscreen kiosks displaying object narratives. We haven’t yet settled on a single technology, framework or data model, as each new project gives us the opportunity to test out new ideas based on what our audiences want and on our previous learning. The refurbishment of our European Old Masters gallery has given us the opportunity to extend the printed interpretation into digital.

(C) John Seaman, Bristol Culture

The classic look of the gallery means label space is kept to a minimum, which has reduced the amount of printed interpretation available on the physical labels. Digital gives our curators the opportunity to expand the depth of interpretation by writing more detailed descriptions of the paintings. Our challenge was to come up with a solution that provided in-gallery mobile digital interpretation that was easy to access, fast to load, and made sense in context.

Taking a user-focused approach, we were keen to provide technology appropriate to the sorts of visitors the gallery attracts. Our audience research shows that mobile technology is standard among these visitors, as explained by Darren Roberts, our user researcher:

Our audience segmentation shows that three of the core audience segments for Rembrandt – City Sophisticates, Career Climbers and Students – are all over 20% more likely than average visitors to use their mobile phone to access educational web content or apps. All three groups are also over 20% more likely than average to agree with the statement ‘I couldn’t live without the internet on my mobile’. These three segments account for over a third of the general audience for the museum.

Ranked in order of the segments most likely both to have an interest in antiques and fine art and to use their mobile phone to access free educational content or apps:

  1. Student Life

  2. Lavish Lifestyles

  3. City Sophisticates

  4. Career Climbers

  5. Executive Wealth

The top three are over 40% more likely than average visitors to engage in both these activities. All five are expected to be part of the core audience for the Rembrandt exhibition.


With this in mind, we set about analysing the printed labels – looking at where data could be brought in automatically from our collections management system (EMu) to minimise the effort of writing content. As it turns out, we already had most of this data (artist name, birth date, death date etc.), so the main curatorial effort could be focused on writing text for the labels, while we designed the template to bring the data together.


Thanks to some preliminary experiments, we already had a working framework to use – AngularJS on the client side for rapid prototyping, templating, routing and deployment.

Our next challenge was to optimise performance and maximise up-time. Inspired by the linked open data movement, we opted to have the data sit in structured JSON files that could be reused multiple times by various apps without querying the database directly. This had the double effect of improving reliability and speed. We did a similar thing with multimedia, running a regular content refresh cycle and packaging everything up for the app to use, with images saved at sizes for thumbnail and detail views.

The finished template was as follows – we opted for a minimalist design for ease of reading, and with responsive elements the pages work across multiple devices.

Mobile object label

The process of selecting source fields and mapping them to the template has inevitably thrown up areas where our database use could be improved. Where before we had data spread across many fields, we have now laid out better guidelines for object cataloguing that should ease this issue – for the app to work, we needed set fields from which to extract information about the painting and artists.

We also had to deal with inconsistencies in terminology, for example the various ways dates could be written. On printed labels these variations are permitted, but we needed to define the semantic patterns in order for this to work in digital. As a result of this process, we now have a workflow for improving the way we catalogue our objects.

Where some terms were abbreviated on the printed labels, e.g. “b” and “d” for birth and death, we expanded them on the digital labels, as space was not an issue and we felt this was easier for users to read and understand – digital allows us to implement some of our user-focused principles without disrupting the printed gallery interpretation.

Call to action

Through in-gallery user testing we found that whilst some features were obvious to us, visitors were not always getting to the bits we wanted them to see – we therefore added a call to action to make it clear what was available…

“Find out more about the objects in this gallery”

Something we are interested in finding out is how users navigate to their chosen painting. User stories and personas are one method we could use to get a better understanding of this. To facilitate various user journeys, we provide different routes to each digital label: searching by painting name, filtering on the artist’s name, or browsing through the list view.

list view

Technical details:

The routing mechanism of AngularJS gave us a simple way to navigate from the list view to the record view by altering the # parameter, as follows:

List view: museums.bristol.gov.uk/labels

Record view: museums.bristol.gov.uk/labels/#/id/14135/narcissus
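Roughly how those routes are wired up with ngRoute (a sketch – module, template and controller names are made up):

```js
// Map the # fragment to the list and record views.
angular.module('labels', ['ngRoute'])
  .config(function ($routeProvider) {
    $routeProvider
      .when('/', {
        templateUrl: 'views/list.html',   // searchable list of paintings
        controller: 'ListCtrl'
      })
      .when('/id/:id/:slug', {
        templateUrl: 'views/record.html', // a single digital label
        controller: 'RecordCtrl'
      })
      .otherwise({ redirectTo: '/' });
  });
```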

We also included some libraries for smooth page loading to improve the user experience. At this stage we don’t know whether the digital labels have a use outside the gallery, but in case they do we wanted the pictures to be zoomable, and there was a code library that allows this. N.B. this is not yet deep-zoomable, but we are on the road to achieving that.

Data stuff

We want to be able to reuse our structured data on paintings and artists – their info and dates – whenever new technology comes along, so our data layer exists independently of the application and sits outside our database on a publicly accessible endpoint. If you want to use any of it in JSON form, you can take a look here:

We store lists of objects in separate index.json files here:

museums.bristol.gov.uk/labels/data

And for details info about an object you can load up records by their id here:

museums.bristol.gov.uk/labels/id

Structures and paths may change as we develop the system, so apologies if these are not accessible at any point. We change bits in order to fix issues with loading time and reliability, but we aim to settle on a standard approach to our data layer in time.

We are also figuring out what structure our object (JSON) records need to contain in order to maximise their use outside our collections management system. Where dates and places exist in several source fields, we can prioritise these on export to choose which dates are most suitable, and similarly for places.

We construct a standard object schema in JSON via a scheduled content refresh script which queries the IMu API, prioritises which fields to include, and saves the result as JSON…
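The exact schema is still evolving, but a record comes out along these lines (values are illustrative, borrowing the record id from the URL above):

```json
{
  "id": "14135",
  "title": "Narcissus",
  "artist": {
    "name": "Artist name",
    "birth_date": "1600",
    "death_date": "1670"
  },
  "label_text": "Curator-written interpretation goes here.",
  "images": {
    "thumbnail": "14135_thumb.jpg",
    "detail": "14135_detail.jpg"
  }
}
```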


Next steps

We have implemented this in one gallery so far, and for one object type. We are now looking to roll this out to other galleries and look forward to similar challenges with different types of objects.

We are also extending the design of the prototype to bring in timeline and mapping functionality. These bring an interactive element to the experience and provide new ways of visualising objects in time and space.

We included the TimelineJS3 library in our framework and hooked it up to the same data powering the object labels. This provides a comparison of artists’ lives with each other, and with the paintings they produced.

We need to tweak the CSS a little, but out of the box it works well, thanks to the kind people at Knightlab.
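Feeding it is mostly a matter of reshaping the label data into TimelineJS3’s JSON format – a minimal sketch (values illustrative):

```js
// One timeline event per artist, spanning their birth and death years.
var data = {
  events: [
    {
      text: { headline: 'Artist name' },
      start_date: { year: 1600 }, // birth
      end_date: { year: 1670 }    // death
    }
  ]
};

var timeline = new TL.Timeline('timeline-embed', data);
```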

Interactive artist timeline

Take a look at our alpha of the digital timeline here.

Remarks

The project has made us rethink some of our cataloguing standards – we are aligning our internal data capture and export to be better equipped to make use of new web tools for public engagement.

We have decoupled the tasks of writing label text, reusing object data and applying narrative metadata. We also have a process that allows new layers of interpretation to be written and published to the same application architecture, and we can present staff with a simplified data entry process for label writing.


Although we haven’t solved the problem of how to improve uptake of the application in-gallery, we’ll be ready when someone does. If it’s iBeacons that do it – and we think it might be – we can direct users to a single object label using a unique URL for each digital label.

For now, though, it is just a trusty old URL pointing people to the page, from which they navigate further – but we’d love to remove this barrier at some point.

Getting an archival tree-view to sort properly online

The digital team at Bristol Culture face new challenges every day, and with diverse collections come a diverse range of problems when it comes to publishing online. One particularly taxing issue we encountered recently was how to appropriately represent, and navigate through, an archive collection on the web.

Here’s what Jayne Pucknell, an archivist at the Bristol Record Office, has to say:

“To an archivist, individual items such as photographs are important but it is critical that we are able to see them within their context. When we catalogue a collection, we try to group records into series to reflect their provenance, and the original order in which they were created. These series or groups are displayed as a hierarchical ‘tree view’ which shows that arrangement.”

So far so good – we needed to display this tree view online, and it just so happens there is a useful open source jQuery plugin to help us achieve that, called jsTree.
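Getting a basic tree on screen is straightforward (the reference numbers below are invented for illustration; our real data comes from EMu):

```js
// jsTree renders nested JSON as an expandable tree view.
$('#archive-tree').jstree({
  core: {
    data: [
      { text: '43207 - Collection', children: [
        { text: '43207/9 - Series', children: [
          { text: '43207/9/1 - Item' },
          { text: '43207/9/2 - Item' }
        ] },
        { text: '43207/10 - Series' }
      ] }
    ]
  }
});
```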


The problem we found when we implemented this online was that the tree view did not display the archive records in the correct order. The default sort was the order in which the records had been created, and although we were able to apply a sort to the records in our source database (EMu), we were unable to find a satisfactory sorting method that returned a numerical sort based on the archival reference number. This is because the archival reference number is made up of a series of sub-numbers reflecting sub-collections.

So this gave us a challenge to fix, and the opportunity to fix it existed because of the EMu API and the programming layer between the source database and Collections Online. The trick was to write a PHP function that could reorder the archive tree before it was displayed.

Well, we did that, and here’s a breakdown of what the function does:

The function takes two arguments – the archival reference number as a text string, and the level in the archive as an integer.

1.) split the reference number into its sub-numbers
2.) construct a new array from the sub-numbers
3.) perform a special sort on the new array that takes into account each sub-number in turn

In theory that’s it – but looking at the code in hindsight, there are a whole heap of complexities that would take longer to articulate here than simply to paste in the code, so let’s make it open source and leave you to delve if you wish – here’s the code on GitHub.
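The core idea, sketched in JavaScript for illustration (the real function is PHP – see the GitHub link above – and this assumes numeric sub-numbers separated by “/”):

```js
// Compare two archival reference numbers sub-number by sub-number.
function compareRefs(a, b) {
  var as = a.split('/');
  var bs = b.split('/');
  for (var i = 0; i < Math.max(as.length, bs.length); i++) {
    if (as[i] === undefined) return -1; // a is an ancestor of b
    if (bs[i] === undefined) return 1;
    var diff = parseInt(as[i], 10) - parseInt(bs[i], 10);
    if (diff !== 0) return diff; // first differing sub-number decides
  }
  return 0;
}

// A plain string sort would put '43207/10' before '43207/9'; this doesn't:
// ['43207/9/2', '43207/10', '43207/9'].sort(compareRefs)
//   => ['43207/9', '43207/9/2', '43207/10']
```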

Another subtle complexity in this work is described further by Jayne:

“You may search and find an individual photograph and its catalogue entry will explain the specific content of that image, but to understand its wider context it is helpful to be able to consider the collection as a whole. Or you may search and find one photograph of interest but then want to explore other items which came in with that photograph. By displaying the hierarchy, you are more easily able to navigate your way through the whole collection.”

Because of the way our Collections Online record pages are built, a record does not immediately contain links to all its parents or children. This is problematic when building the archive tree, as ideally we want each node to link to the record it depicts. We therefore needed a way to get the link for each related record whilst constructing the tree. Luckily, we maintain the tree structure in EMu via the parent field.

The solution was to query the parent field and get the children of that parent, then loop through each child record and add a node to the tree. This process could be repeated up through the parents until a record with no parent was reached, which would then become the root node. Because the HTML markup was the same for each node, the process could be written as a set of functions:

1.) has_parent: take a record number and perform a search to see if it has a parent; if it does, return the parent id.

2.) return_children: take a record number, search for its child records and return them as an array.

3.) child_html: take an array of child records and construct the links for each in HTML.

Taking advice from Jonathan Ainsworth of the University of Leeds Special Collections, who went through similar issues when building their online pages, we decided not to do this recursively, due to the chance of entering an infinite loop or incurring too much processing time. Instead, I decided to call the functions for a set number of levels in the tree – this works because we do not expect more than seven levels. The thing to point out is that when you land on a particular record the hierarchical level could be anything, but the programmed function to build the tree remains the same.
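Put together, the build loop looks something like this (sketched in JavaScript for illustration – the real implementation is PHP, and addNodesToTree is a hypothetical helper):

```js
// Walk up a fixed number of levels rather than recursing indefinitely.
function buildTree(recordId) {
  var MAX_LEVELS = 7; // we don't expect deeper archives than this
  var current = recordId;

  for (var level = 0; level < MAX_LEVELS; level++) {
    var parentId = has_parent(current); // falsy at the root
    if (!parentId) break;

    var children = return_children(parentId);
    addNodesToTree(parentId, child_html(children));
    current = parentId;
  }
}
```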

Here’s the result – using some CSS and the customisable features in jsTree, we can indicate the selected record by highlighting it. We also had to play around with the jsTree settings to make the selected record appear, by expanding each of its parent nodes in turn – to be honest it all got a bit loopy!


….here’s the link to this record on our Collections Online.

Hope this is of use to anyone going through similar issues – on the face of it the problem is a simple one, but as we are coming to learn in Team Digital, nothing is ever really just simple.