All posts by Mark Pajak

Digital Curating Internship

We are currently university students at UWE (University of the West of England) studying History with Heritage, the first students on this programme of study. We have been given the fantastic opportunity to work with the digital department at Bristol Culture – which runs the various museums and heritage sites in and around Bristol – as its first digital curating interns. The placement complements what we have studied, and continue to study, within our degrees, and will allow us to put that learning into practice.

Over the course of the next eight weeks we will be working alongside various departments, collections and projects, offering us a unique insight into the heritage industry.

What does digital curating mean to us?

For us, digital curation is the future of 21st-century museology; its implementation and development allows for four significant benefits:

• It democratises information, reducing barriers to entry.
• It increases the potential use of collections.
• It stimulates further research.
• It widens community engagement to ever greater and more diverse audiences.

As fantastic as these systems can be, there is still room for further advancement. We have already learnt in our short time here that the issues include inconsistencies across departments, cataloguing backlogs, dirty data, and the lack of a secure way to share detailed information between institutions. Despite these hurdles, the drive to expand and improve digital curation continues, with great hope for what can be achieved in this field.

Expectations for the role:

Through this role we aim to:

• Engage with and critique existing cataloguing methods and SPECTRUM-compliant collections management systems such as EMu.
• Develop strategies for increasing engagement with both collections and institutions.
• Develop the necessary skills and experience to pursue a career within the heritage industry.
• Work closely and network with a variety of heritage professionals within the South West.

We both look forward to expanding our knowledge and experience, and eagerly anticipate what this internship has in store over the next eight weeks.

Google Drive for Publishing to Digital Signage

Having taken an agile development approach to our digital screen technology, it has been interesting to watch the various elements emerge based on our current needs. Lately there has been a need for quick ways to push posters and images to the screens for private events and one-off occasions.

Due to the complexity of the various modes, and the intricacies of events-based data and automatic scheduling, it has been difficult to incorporate these needs into the system. Our solution was to use Google Drive as a means to override the screens with temporary content. This means our staff can manage content for private events using tablets and mobile devices, and watch the updates push through in real time.

The pathway of routes now looks like this:

[Diagram: pathway of routes to the screens]

HOW?

There are two main elements to the override process. Firstly, we are using BackboneJS as the application framework, because this provides a routing structure that controls the various signage modes. We added a new route at the beginning of the process to check for content added to Google Drive – if there is no content, the signs follow their normal modes of operation.
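A simplified sketch of that route check might look like the following – the route name, the showOverrideContent helper and DRIVE_SCRIPT_URL (standing in for the address of the published script described below) are illustrative, not the production code:

//sketch: check Google Drive for override content before the normal modes

var AppRouter = Backbone.Router.extend({
    routes: {
        '*default': 'checkForOverrides'
    },

    checkForOverrides: function () {
        var router = this;
        // ask the published Google Apps Script for any override content
        $.getJSON(DRIVE_SCRIPT_URL, function (files) {
            if (files && files.length > 0) {
                // temporary content from Drive takes over the screen
                router.showOverrideContent(files);
            } else {
                // no overrides: fall through to the normal signage modes
                router.navigate('posters', { trigger: true });
            }
        });
    }
});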

Google Drive Integration

Google provides a nice way to publish web services, hidden amongst the script editor inside Google Sheets. We created a script that loops through a Drive directory and publishes a list of its contents as JSON – you can see the result of that script here. By making the directory public, any images we load into the Drive are picked up by the script, and the screens check the script for new content regularly. The good thing about this is that we can add content to specially named folders – if a folder name matches either the venue or a specific machine name, all targeted screens will start showing that content.
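A minimal sketch of such a script might look like this – the folder id is a placeholder, and the output field names are illustrative:

//Google Apps Script sketch: publish a Drive folder listing as JSON

function doGet() {
    var root = DriveApp.getFolderById('YOUR_FOLDER_ID');
    var result = [];

    // one sub-folder per venue or machine name
    var folders = root.getFolders();
    while (folders.hasNext()) {
        var folder = folders.next();
        var files = folder.getFiles();
        while (files.hasNext()) {
            var file = files.next();
            result.push({
                target: folder.getName(), // matched against venue/machine names
                name: file.getName(),
                id: file.getId()
            });
        }
    }

    return ContentService
        .createTextOutput(JSON.stringify(result))
        .setMimeType(ContentService.MimeType.JSON);
}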

[Screenshot: Google Drive integration]

It seems that this form of web hosting will be deprecated in Google Drive at the end of August 2016, but the links we are using to get the images might still work. If not, we can find a workaround – possibly by listing URLs to content hosted elsewhere in the Google Sheet and looking those up.

The main benefit of this solution is being able to override the normal mode of operation using Google Drive on a mobile device. This even works with video – we added some more overrides so that poster mode doesn't move on to the next slide until the video has finished, as video brings in several timing issues for digital signage. One problem with hosting via Google Drive is that files over 25MB don't work, because Google's antivirus checking warning prevents the files from being released.

We'll wait to see if this new functionality gets used – and if it is reliable after August 2016. In fact, this mode might be usable on its own to manage other screens around the various venues which until now were not updatable. If successful, it will vastly reduce the need to run around with memory sticks before private events – and hopefully let us spend more time generating the wonderful content that the technology is designed to publish for our visitors.

You can download the latest release and try it for yourself here.


Digital Object Labels

At Bristol Museums we use EMu to manage digital interpretation, and have several galleries with touchscreen kiosks displaying object narratives. We haven’t yet settled on a single technology, framework or data model as each new project gives us opportunities to test out new ideas, based on what our audiences want and on our previous learning. The refurbishment of our European Old Masters Gallery has given us the opportunity to extend the printed interpretation into digital.

(C) John Seaman, Bristol Culture

The classic look of the gallery means label space is kept to a minimum, which reduces the amount of printed interpretation available on the physical labels. Digital gives our curators the opportunity to expand on the depth of interpretation by writing more detailed descriptions of paintings. Our challenge was to come up with a solution that provided in-gallery mobile digital interpretation that was easy to access, fast to load, and that made sense in context.

Taking a user-focused approach, we were keen to provide technology appropriate to the sorts of visitors the gallery attracts. Our audience research shows that mobile technology is standard among these visitors, as explained by Darren Roberts, our user researcher:

Our Audience segmentation shows that three of the Core Audience Segments for Rembrandt – City Sophisticates, Career Climbers, and Students – are all over 20% more likely than average visitors to use their mobile phone to access educational web content or apps.  All three groups are also over 20% more likely than average to agree with the statement ‘I couldn’t live without the internet on my mobile’. These three segments account for over a third of the general audience for the museum.

Ranked in order of segments that are both most likely to have an interest in Antiques and Fine Art and use their mobile phone to access free educational content or apps:

  1. Student Life

  2. Lavish Lifestyles

  3. City Sophisticates

  4. Career Climbers

  5. Executive Wealth

The top three are over 40% more likely than average visitors to engage in both these activities. All five are expected to be part of the core audience for the Rembrandt exhibition.


With this in mind, we set about analysing the printed labels – looking at where data could be brought in automatically from our collections management system (EMu) to minimise the effort in writing content. As it turns out, we already had most of this data (artist name, birth date, death date, etc.), so the main curatorial effort could be focused on writing text for the labels, while we designed the template to bring the data together.


Thanks to some preliminary experiments, we already had a working framework to use – AngularJS on the client side for rapid prototyping, templating, routing and deployment.

Our next challenge was to optimise performance and maximise up-time. Having been inspired by the linked open data movement, we opted for having the data sit in structured JSON files that could be reused multiple times by various apps without querying the database directly. This had the double effect of reliability and speed. We did a similar thing with multimedia, running a regular content refresh cycle and packing everything up for the app to use, with images saved at sizes for thumbnail and detail views.

The finished template was as follows – we opted for a minimalist design for ease of reading, and with responsive elements the pages work across multiple devices.

[Screenshot: mobile object label]

The process of selecting source fields and mapping them to the template inevitably threw up areas where our database use could be improved. Where before we had data scattered across many fields, we have now laid out better guidelines for object cataloguing that should ease this issue – for the app to work, we needed set fields from which to extract information about the painting and artist.

We also had to deal with inconsistencies in terminology, for example the various ways dates could be written – on printed labels these variations are permitted, but we needed to define the semantic patterns for this to work in digital. As a result of this process, we now have a workflow for improving the way we catalogue our objects.

Where some terms were abbreviated on the labels, e.g. “b” and “d” for birth and death, we expanded these on the digital labels, as space was not an issue and we also felt this was easier for users to read and understand – digital allows us to implement some of our user-focused principles without disrupting the printed gallery interpretation.

Call to action

Through in-gallery user testing we found that whilst some features were obvious to us, visitors were not always getting to the bits we wanted them to see – we therefore added a call to action to make it clear what was available:

“Find out more about the objects in this gallery”

Something we are interested in finding out is how users navigate to their chosen painting. User stories and personas are one method we could use to get a better understanding of this. To facilitate various user journeys, we provide different routes to each digital label: searching by painting name, filtering on the artist's name, or browsing through the list view.

[Screenshot: list view]

Technical details:

The routing mechanism of AngularJS gave us a simple way to navigate from the list view to the record view by altering the # parameter, as follows:

List view: museums.bristol.gov.uk/labels

Record view: museums.bristol.gov.uk/labels/#/id/14135/narcissus
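For illustration, a minimal ngRoute configuration of this shape could look like the following (the template and controller names are made up for the example):

//sketch: AngularJS routing between the list view and a record view

angular.module('labels', ['ngRoute'])
    .config(['$routeProvider', function ($routeProvider) {
        $routeProvider
            .when('/', {
                templateUrl: 'views/list.html',    // the list of all paintings
                controller: 'ListController'
            })
            .when('/id/:id/:slug', {
                templateUrl: 'views/record.html',  // a single object label
                controller: 'RecordController'
            })
            .otherwise({ redirectTo: '/' });
    }]);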

We also included some libraries for smooth page loading to improve the user experience. At this stage we don't know whether the digital labels have a use outside the gallery, but in case they do we wanted the pictures to be zoomable, and there was a code library that allows this. N.B. this is not yet deep-zoomable, but we are on the road to achieving that.

Data stuff

We want to be able to reuse our structured data on paintings, artists, and their associated info and dates whenever new technology comes along. Our data layer therefore exists independently of the application, and it sits outside our database on a publicly accessible endpoint. If you want to use any of it in JSON form, you can take a look here:

We store lists of objects in separate index.json files here:

museums.bristol.gov.uk/labels/data

And for detailed info about an object, you can load up records by their id here:

museums.bristol.gov.uk/labels/id

Structures and paths may change as we develop the system, so apologies if these are not accessible at any point. We change bits in order to improve loading time and reliability, but we aim to settle on a standard approach to our data layer with time.

We are also figuring out what structure our object (JSON) records need in order to maximise their use outside of our collections management system. Where dates and places exist in several source fields, we can prioritise these on export to choose which dates are most suitable, and similarly for places.

We construct a standard object schema in JSON via a scheduled content refresh script, which queries the IMu API, prioritises which fields to include, and saves the result as JSON…

[Screenshot: JSON object record]
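As a purely hypothetical illustration of the shape such a record might take (the field names are guesses, not the actual schema – the id and title follow the Narcissus example above):

{
    "id": 14135,
    "title": "Narcissus",
    "artist": {
        "name": "Example Artist",
        "birthDate": "1600",
        "deathDate": "1670"
    },
    "label": "Curator-written interpretation text for the painting.",
    "images": {
        "thumbnail": "images/thumb/14135.jpg",
        "detail": "images/detail/14135.jpg"
    }
}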

Next steps

We have implemented this in one gallery so far, and for one object type. We are now looking to roll this out to other galleries and look forward to similar challenges with different types of objects.

We are also extending the design of the prototype to bring in timelines and mapping functionality. These bring an interactive element to the experience and also provide new ways of visualising objects in time and space.

We included the TimelineJS3 library in our framework and hooked it up to the same data powering the object labels. This provides a comparison of artists' lives with each other, and with the paintings they produced.
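A hedged sketch of that hook-up – the data URL and field mapping reuse the hypothetical record shape shown earlier, and are not the production code:

//sketch: feeding label data into TimelineJS3

$.getJSON('data/index.json', function (records) {

    // map each object record onto TimelineJS3's event format
    var events = records.map(function (record) {
        return {
            start_date: { year: record.artist.birthDate },
            end_date: { year: record.artist.deathDate },
            text: { headline: record.artist.name }
        };
    });

    // TL is the global exposed by the TimelineJS3 library
    new TL.Timeline('timeline-embed', { events: events });
});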

We need to tweak the CSS a little, but out of the box it works well, thanks to the kind people at Knight Lab.

[Screenshot: interactive artist timeline]

Take a look at our alpha for the digital timeline here.

Remarks

The project has made us rethink some of our cataloguing standards – we are aligning our internal data capture and export to be better equipped to make use of new web tools for public engagement.

We have decoupled the tasks of writing label text, reusing object data, and applying narrative metadata. We also have a process that would allow new layers of interpretation to be written and published to the same application architecture, and we can present a simplified data entry process to staff for label writing.


Although we haven't solved the problem of how to improve uptake of the application in-gallery, we'll be ready when someone does. If it's iBeacons that do it – and we think it might be – we can direct users to a single object label using a unique URL.

For now, though, it is just a trusty old URL to point people to the page, from which they then navigate further – but we'd love to remove this barrier at some point.


Getting an archival tree-view to sort properly online

The digital team at Bristol Culture face new challenges every day, and with diverse collections come a diverse range of problems when it comes to publishing online. One particularly taxing issue we encountered recently was how to represent and navigate through an archives collection appropriately on the web.

Here’s what Jayne Pucknell, an archivist at the Bristol Record Office, has to say:

“To an archivist, individual items such as photographs are important but it is critical that we are able to see them within their context. When we catalogue a collection, we try to group records into series to reflect their provenance, and the original order in which they were created. These series or groups are displayed as a hierarchical ‘tree view’ which shows that arrangement.”

So far so good – we needed to display this tree-view online, and it just so happens there is a useful open-source jQuery plugin to help us achieve that, called jsTree.

[Screenshot: archive tree view in jsTree]

The problem we found when we implemented this online was that the tree view did not display the archive records in the correct order. The default sort was the order in which the records had been created, and although we were able to apply a sort to the records in our source database (EMu), we were unable to find a satisfactory sorting method that returned a numerical sort based on the archival reference number. This is because the archival reference number is made up of a series of sub-numbers reflecting sub-collections.

So this gave us a challenge to fix, and the opportunity to fix it came from the EMu API and the programming layer that sits between the source database and Collections Online. The trick was to write a PHP function that could reorder the archive tree before it was displayed.

Well, we did that and here’s a breakdown of what that function does:

The function takes two arguments – the archival reference number as a text string, and the level in the archive as an integer – and does the following (sketched below):

1.) split the reference number into its sub-numbers
2.) construct a new array from the sub-numbers
3.) perform a special sort on the new array that takes into account each sub-number in turn
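The production version is a PHP function (linked below), but the idea can be sketched in JavaScript – the '/' separator here is illustrative:

//sketch: comparing two archival reference numbers by their sub-numbers

function compareRefs(a, b) {
    var pa = a.split('/');
    var pb = b.split('/');
    var len = Math.max(pa.length, pb.length);
    for (var i = 0; i < len; i++) {
        if (pa[i] === undefined) { return -1; } // shorter ref first: parents before children
        if (pb[i] === undefined) { return 1; }
        var na = parseInt(pa[i], 10);
        var nb = parseInt(pb[i], 10);
        var cmp = (!isNaN(na) && !isNaN(nb))
            ? na - nb                       // numeric sub-numbers compare as numbers
            : pa[i].localeCompare(pb[i]);   // otherwise fall back to text comparison
        if (cmp !== 0) { return cmp; }
    }
    return 0;
}

// e.g. ['40619/P/4/10', '40619/P/4/2'].sort(compareRefs)
// gives ['40619/P/4/2', '40619/P/4/10'], not the other way round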

In theory that's it – but looking at the code in hindsight, there are a whole heap of complexities that would take longer to articulate here than just to paste in the code. So let's make it open source and leave you to delve if you wish – here's the code on GitHub.

Another subtle complexity in this work is described further by Jayne:

“You may search and find an individual photograph and its catalogue entry will explain the specific content of that image, but to understand its wider context it is helpful to be able to consider the collection as a whole. Or you may search and find one photograph of interest but then want to explore other items which came in with that photograph. By displaying the hierarchy, you are more easily able to navigate your way through the whole collection.”

Because of the way our Collections Online record pages are built, a record does not immediately contain links to all its parents or children. This is problematic when building the archives tree, as ideally we want each node to link to the parent or child depicted. We therefore needed a way to get the link for each related record whilst constructing the tree. Luckily, we maintain the tree structure in EMu via the parent field.

The solution was to query the parent field and get the children of that parent, then loop through each child record and add a node to the tree. This process could be repeated up through the parents until a record with no parent was reached, which would then become the root node. Because the HTML markup was the same for each node, this process could be written as a set of functions:

1.) has_parent: take a record number and perform a search to see if it has a parent; if it does, return the parent id.

2.) return_children: take a record number, search for its child records and return them as an array.

3.) child_html: take an array of child records and construct the links for each in HTML.

Taking advice from Jonathan Ainsworth from the University of Leeds Special Collections, who went through similar issues when building their online pages, we decided not to perform this recursively, due to the chance of entering an infinite loop or incurring too much processing time. Instead I decided to call the functions for a set number of levels in the tree – this works as we did not expect more than seven levels. The thing to point out is that when you land on a particular record, its hierarchical level could be anything, but the programmed function to build the tree remains the same, as sketched below.
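To illustrate the shape of this (in JavaScript for brevity – the real implementation is server-side PHP, and buildTree here is a made-up wrapper around the three functions above):

//sketch: building the tree to a fixed depth, without recursion

var MAX_LEVELS = 7; // we did not expect archives deeper than this

function buildTree(recordId) {

    // 1. walk up towards the root for at most MAX_LEVELS steps
    var ancestry = [recordId];
    for (var i = 0; i < MAX_LEVELS; i++) {
        var parentId = has_parent(ancestry[0]);
        if (!parentId) { break; } // no parent found: we have reached the root node
        ancestry.unshift(parentId);
    }

    // 2. from the root down, fetch each level's children and build its markup
    var html = '';
    for (var level = 0; level < ancestry.length; level++) {
        html += child_html(return_children(ancestry[level]));
    }
    return html;
}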

Here's the result – using some CSS and the customisable features in jsTree, we can indicate the selected record by highlighting it. We also had to play around with the jsTree settings to make the selected record appear, by expanding each of its parent nodes in turn – to be honest, it all got a bit loopy!

[Screenshot: tree view with the selected record highlighted]

Here's the link to this record on our Collections Online.

Hope this is of use to anyone going through similar issues – on the face of it the problem is a simple one, but as we are coming to learn in team digital, nothing is ever really just simple.


Bristol Museum Egypt Exhibition Web-App

Hi, I'm Dhruv, and I'm a second-year computer scientist at the University of Bristol. Along with five other team members, as part of our Software Product Engineering module, we are creating an interactive web app for the Egypt Exhibition at Bristol Museum.

The purpose of this web app is to allow visitors to browse the exhibition whilst viewing more information about each of the exhibits on their phones, instead of on the currently installed kiosks. The following is a light technical overview of how it works.

The web app is built on a full JavaScript stack, with Node.js and Express on the back end and AngularJS on the front end. Using frameworks based around the same language made it easier for all members of our team to get involved with every part of the application, as the skills transfer easily. Our system builds the website from data exported from EMu, meaning that any updates to exhibit contents are easily displayed – be that tweaks to artefact data or entire cabinet changes. We make this happen by designing templates for the specific types of page that exist, and using AngularJS to dynamically inject the appropriate content when a page is requested.
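A rough sketch of that general shape (an assumption about the structure, not the team's actual code) might look like this:

//sketch: an Express server serving the EMu-exported data and Angular front-end

var express = require('express');
var app = express();

// the AngularJS front-end (templates, scripts, styles)
app.use(express.static('public'));

// the JSON exported from EMu, e.g. /data/cabinets/12.json for one cabinet,
// which an AngularJS controller fetches and injects into the page template
app.use('/data', express.static('emu-export'));

app.listen(3000, function () {
    console.log('Egypt exhibition web app listening on port 3000');
});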

We decided to create a solution in this way as we felt it allowed a closer interaction with the content, and dealt with the issue of multiple people wanting to use a kiosk at the same time. It also allows users' existing accessibility settings (such as larger text for those with visual impairments) to carry over.

The web-app is still in development, but some screenshots of the current implementation can be seen below.

We’ve been carrying out some user testing, and have had quite a bit of good feedback. Thanks to anyone who took the time to fill out our feedback forms!

Overall, the project has been thoroughly interesting: it has allowed me to expand my technical skills, but also to see bits of what makes Bristol Museum work smoothly.

Anatomy of our Digital Signage Web App

At this stage in the development of our digital signage, we have a working release of the software in the live environment, and we are focussing on training, and on improvements to the design and data structure for the next version. This post is about the nuts and bolts of how the client-side app works, while it is still fresh.

[Diagram: mode schematic]

Firstly, it is a single-page web application, loaded by calling index.html from a web browser. Inside index.html are just the basics you'd expect. The magic is all orchestrated via RequireJS, a library used to pull together all of the source code in the right order and make sure files don't get loaded twice, etc. All of the content of the app is loaded and removed via a single content div in the body.

index.html 
(... some bits removed...check the GitHub page for the whole lot)


<html>
  <head>
     <title>BMGA Digital Signage</title>
     <link rel="stylesheet" href="css/styles.css">
     <script data-main="js/main" src="js/libs/require/require.js"></script>
  </head>
  <body class="nocursor">
     <div id="mainContent"></div>
  </body>
</html>

The first JavaScript to load up is main.js. This simple file follows the RequireJS format, and is used to alias the code libraries which will get used the most, such as jQuery.

//main.js

require.config({
    paths: {
        jquery: 'libs/jquery/jquery-min',
        underscore: 'libs/underscore/underscore-min',
        backbone: 'libs/backbone/backbone-min',
        templates: '../templates'
    }
});

require(['app'], function (App) {
    App.initialize();
});

Next up is app.js. This loads the code libraries required to start the app, and brings in our first global function – used to close each 'view'. For a single-page app it is really important to destroy any lingering event handlers and other bits which can take up memory and cause the app to go a bit crazy – something that Backbone apps have difficulties with, otherwise known as zombie views. Killing zombies is important.

//app.js

define([
    'jquery',
    'underscore',
    'backbone',
    'router'
], function ($, _, Backbone, Router) {

    var initialize = function () {

        Backbone.View.prototype.close = function () { //KILL ZOMBIE VIEWS!!!!
            this.undelegateEvents();
            this.$el.empty();
            this.unbind();
        };

        Router.initialize();
    };

    return {
        initialize: initialize
    };
});

It gets a bit more fun next as we call the Backbone 'router' – and from now on I'll only add snippets from the files; to see the lot, head to GitHub. The router is what drives navigation through each of the modes the screens can display. Each route takes its parameters from the URL, which means we can control the modes by appending the text 'sponsors', 'posters' or 'events' to the index.html address in the browser.

In addition to the mode, we can pass in parameters – which poster to display, which page of sponsors, which venue, etc. This was a solution to the problem of how to remember which posters have not yet been shown. If you only wish poster mode to last 40 seconds, but you've got lots of posters, you need to remember which posters come next in the sequence. Additionally, as you loop through modes, you need to pass along each parameter until you are back on poster mode – this is why every route carries all the parameters for venue and poster.

This slightly convoluted situation has arisen because we are using a page refresh to flip between modes, and so, without relying on local storage, our variables are only around as long as the page lasts.

//router.js

var AppRouter = Backbone.Router.extend({
    routes: {
        'sponsors(/venue:venue)(/stick:stick)(/logo:page)(/poster:page)(/machine:machine)': 'sponsors',
        'posters(/venue:venue)(/stick:stick)(/logo:page)(/poster:page)(/machine:machine)': 'posters',
        'events(/venue:venue)(/stick:stick)(/logo:page)(/poster:page)(/machine:machine)(/date:all)': 'events'
    }
});

The code for a single route looks a bit like this, and works as follows. We start off with an option to stick or move – this allows us to have a screen stay on a particular mode. Then we look at our settings.JSON file, which contains the machine-specific settings for all of the signs across each venue. The machine name is the only setting held locally on the system, and this is used to let each machine find its node of settings (loop times, etc.).

...
app_router.on('route:posters', function (venue, stick, logoOffset, posterOffset, machine) {

    var stick = stick || "move";
    var logoOffset = logoOffset || 0;
    var posterOffset = posterOffset || 0;
    var machineName = machine || 'default';

    // find this machine's node in the exported settings file
    var Allsettings = JSON.parse(Settings);
    var settings = Allsettings[machineName];

    // close the previous view first, to avoid zombie views
    if (Globals.currentView) {
        Globals.currentView.close();
    }

    // the venue from the machine settings takes precedence
    var venue = settings.location || venue || "ALL";
    self.venue = venue;

    var posterView = new PosterView({
        venue: self.venue,
        stick: stick,
        logoOffset: logoOffset,
        posterOffset: posterOffset,
        machine: machine,
        settings: settings,
        type: settings.eventTypes
    });

    posterView.addPostersFromLocalFile();
    Globals.currentView = posterView;
}),
....

With all settings loaded, filtered by machine name, and the mode specified, we are ready to load up the view. This contains all of the application logic for a particular mode, brings in the HTML templates for displaying the content, and performs the data fetches and other database functions needed to display current events and posters… more on that in a bit.

Amongst the code here are some functions used to check the orientation of the supplied image, cross-reference that with the screen dimensions, and then check whether that particular machine is 'allowed' to display mismatched content. Some are and some aren't – it kind of depends. When we push a landscape poster to a portrait screen, we have lots of dead space. A4 looks OK on both, but anything squished looks silly. So in the dead space we can display a strapline, which is nice – until there is only a tiny bit of dead space. Oh yes, and there is some code to make the font smaller if there is just enough room for a caption, etc. …turns out poster mode wasn't that easy after all!

//view.js

define([
    'jquery',
    'underscore',
    'backbone',
    'text!templates/posters/posterFullScreenTemplate_1080x1920.html',
    'text!templates/posters/posterFullScreenTemplate_1920x1080.html',
    'collections/posters/PostersCollection',
    'helpers/Globals'
], function ($, _, Backbone, posterFullScreenTemplate, posterFullScreenTemplateLandscape, PostersCollection, Globals) {

    var PosterView = Backbone.View.extend({

        el: $("#eventsList"),

        addPostersFromLocalFile: function () {

            var self = this;
            self.PostersCollection = new PostersCollection({parse: true});
            self.PostersCollection.fetch({ success: function (data) {

                // load the poster records, then filter by event type and venue
                self.PostersCollection.reset(data.models[0].get('posters'));
                self.PostersCollection = self.PostersCollection.byEventType(self.settings.eventTypes);
                self.PostersCollection = self.PostersCollection.venueFilter(self.venue);
                self.renderPosters(self.PostersCollection);

                $(document).ready(function () {

                    // redraw every posterLoop_time seconds; if this screen is not
                    // set to 'stick', move on to the next mode after posterMode_time
                    setInterval(function () {
                        self.renderPosters(self.PostersCollection);
                        if (self.stick == "move") {
                            setTimeout(function () {
                                self.goToNextView(self.posterOffset);
                            }, settings.posterMode_time * 1000);
                        }
                    }, settings.posterLoop_time * 1000);
                });

            }, dataType: "json" });
        },

        renderPosters: function (response) {

            // wrap back to the first poster at the end of the sequence
            if (self.posterOffset >= response.models.length) { self.posterOffset = 0; }

            var width = response.models[self.posterOffset].get('width');
            var height = response.models[self.posterOffset].get('height');
            LANDSCAPE = (parseInt(width) >= parseInt(height));

            if (self.orientationSpecific == 2) {

                // enforced orientation lock: skip ahead to the next landscape poster
                while (LANDSCAPE == false) {

                    if (self.posterOffset >= response.models.length) { self.posterOffset = 0; }

                    width = response.models[self.posterOffset].get('width');
                    height = response.models[self.posterOffset].get('height');
                    LANDSCAPE = (parseInt(width) >= parseInt(height));
                    if (LANDSCAPE == true) { break; }
                    self.posterOffset++;
                }
            }

            // shrink, then hide, the strapline font as the dead space shrinks
            ImageProportion = width / height;
            if (ImageProportion <= 0.7) { miniFont = 'miniFont'; }
            if (ImageProportion <= 0.6) { miniFont = 'microFont'; }
            if (ImageProportion <= 0.5) { miniFont = 'hideFont'; }
            if (ImageProportion >= 1.4) { miniFont = 'hideFont'; }

            self.$el.html(self.PostertemplateLandscape(
                {poster: response.models[self.posterOffset], displayCaption: displayCaption, miniFont: miniFont},
                offset = self.posterOffset,
                TemplateVarialbes = Globals.Globals));

            ....

    return PosterView;

});

Referenced by the view is the file which acts as a database would, called the collection – there is a collection for each data type. The poster collection looks like this; its main function is to point at a data source, in this case a local file, and to allow us to perform operations on that data. We want to be able to filter on venue, and also on event type (each machine can be set to filter on different event types), and so below you see the functions which do this… they cater for various misspellings of our venues, just in case 🙂

//postercollection.js

define([
    'underscore',
    'backbone',
    'models/poster/posterModel'
], function (_, Backbone, PosterModel) {

    var PosterCollection = Backbone.Collection.extend({

        sort_key: 'startTime', // default sort key

        url: function () {
            var EventsAPI = 'data/posters.JSON';
            return EventsAPI;
        },

        // keep only the posters whose type appears in this machine's list
        byEventType: function (typex) {
            typex = typex.toUpperCase();
            var filteredx = this.filter(function (box) {
                var typeToTest = box.get("type");
                if (box.get("type")) {
                    typeToTest = box.get("type").toUpperCase();
                }
                return typex.indexOf(typeToTest) !== -1;
            });
            return new PosterCollection(filteredx);
        },

        // keep only the posters for this venue (or with no venue set),
        // catering for various spellings of our venue names
        venueFilter: function (venue) {
            if (venue.toUpperCase() == "M SHED") { venue = "M SHED"; }
            if (venue.toUpperCase() == "BMAG") { venue = "BRISTOL MUSEUM AND ART GALLERY"; }
            if (venue.toUpperCase() == "MSHED") { venue = "M SHED"; }
            var filteredx = this.filter(function (box) {
                var venueToTest = box.get("venue");
                if (box.get("venue")) {
                    venueToTest = box.get("venue").toUpperCase();
                }
                return venueToTest == venue || box.get("venue") == null;
            });
            return new PosterCollection(filteredx);
        },

        parse: function (data) {
            return data;
        }
    });

    return PosterCollection;

});

Referenced by the collection is the model – this is where we define the data that each poster record will need. One thing to watch here is that the field names must match exactly those in the data source. When Backbone loads in data from a JSON file or API, it looks for these field names in the source data and loads up the records accordingly ('models' in Backbone speak). So once the source data is read, we populate our poster collection with models, each containing the data for a single poster.

//postermodel.js

define([
    'underscore',
    'backbone'
], function (_, Backbone) {

    var PosterModel = Backbone.Model.extend({

        defaults: {
            category: 'exhibition',
            irn: '123456',
            startDate: '01/01/2015',
            endDate: '01/01/2015',
            venue: 'MSHED',
            caption: 'caption',
            strapline: 'strapline',
            copyright: '© Bristol Museums Galleries and Archives'
        },

        initialize: function () {
            //alert("Welcome to this world");
        }
    });

    return PosterModel;

});

With the collection loaded with data, and all the necessary venue and event filters applied, it is time to present the content – this is where the templates come in. A template is an HTML file, with a difference: the poster template contains the markup and styling needed to fill the screen, and uses the Underscore library to insert text and images into the design.

/*posterFullScreenTemplate_1080x1920.html */

<style>

body {
    background-color: black;
    color: #BDBDBD;
}

#caption {
    position: relative;
    margin-top: 40px;
    width: 100%;
    z-index: 1;
    /*padding-left: 20px;*/
}

.captionText {
    font-weight: bold;
    font-size: 51.5px;
    line-height: 65px;
}

.miniFont {
    font-size: 35px !important;
    line-height: 1 !important;
}

...

</style>


<div id="sponsorCylcer"> 
 <% 
 var imageError= TemplateVarialbes.ImageRedirectURL+ poster.get('irn') + TemplateVarialbes.ImageSizePrefix
 var imageError= TemplateVarialbes.ImageRedirectURL+poster.get('irn') + TemplateVarialbes.ImageSizePrefix 
 %>
 <div id="poster_1" class="">
 <img onError="this.onerror=null;this.src='<% print(imageError) %>';" src="images/<%= poster.get('irn') %>.jpg" />
 <div id="imageCaption"> <%= poster.get('caption') %><br> <%= poster.get('copyright') %></div>
 </div>
 


 <% if (poster.get('type').indexOf("poster") !== -1 && displayCaption==true){ %>
 <div id="datesAndInfo">
 <h1>from <%= poster.get('startDate') %> till <%= poster.get('endDate') %></h1>
 </div>

 <%} else{ 
 if ( displayCaption==true){ 

 %>
 <div id="caption">
 <div class="captionText <% if( miniFont!=false){print(miniFont)} %>" > <%= poster.get('strapline').replace(/(?:\r\n|\r|\n)/g, '<br />') %> </div>
 <%} } %>
 </div>
</div>>

Once the template is loaded, the poster displays, and that's pretty much job done for that particular mode – except that we want posters to be displayed on a loop, and so the view reloads the template every x seconds, depending on what has been set for that machine in the digital signage administration panel. A master timer controls how long the poster loop has been running and moves to the next mode after that time. Additionally, a counter keeps note of the number of posters displayed and passes that number across to the next mode, so when poster mode comes back round, the next poster in the sequence is loaded.

Remarks

[Screenshot: folder structure]

Using the Require/Backbone framework for the application has kept things tidy throughout the project, and has meant that extending new modes and adding database fields is as hassle-free as possible. It is easy to navigate to the exact file to make changes – which is pretty important once the app gets beyond a certain size. Another good thing is that bugs in one mode don't break the app, and if there is no content for a mode the app flips to the next without complaining – important in the live environment, where there are no keyboards in easy reach to 'OK' any error messages.


Furthermore, the app is robust – we have it running on Ubuntu, Windows 7 [in Chinese], and a Raspberry Pi, and it hasn't crashed so far. Actually, if it does its job right, the application architecture won't get noticed at all (which is why I am writing this blog) – the content will shine through… one reason I have avoided any scrolling text or animations so far; posters look great just as they are, filling the screen.

Now that our content editors are getting to grips with the system, we are starting to gather consensus about which modes should be prominent in which places – after all, if you have different modes, not every visitor will see the same content – so is there any point in different modes? Let the testing commence!

 

Acknowledgements

Thanks to Thomas Davis for the helpful info at backbonetutorials.com, and Andrew Henderson for help killing zombies.


DESIGNING AN ADMINISTRATION SYSTEM FOR SERVICE WIDE DIGITAL SIGNAGE

BACKGROUND

We recently launched a system for service-wide digital signage across multiple devices, operating systems, screen sizes and screen orientations. I developed the solution with flexibility as a priority, to allow us to adapt as new situations and requirements arise. In practice, going live was the best form of testing, and we continue to tweak the signs based on their position, content and user needs.

If there is a take-home message from this process, it is not to underestimate the number of variables in even the simplest form of display. That is to say, if the system is to be flexible, then these variables need to be made available to the administrators to tinker with, without the need for them to change the source code. This calls for an administration system specifically designed for managing the variables for the digital displays, which I have called the DIGITAL SIGNAGE ADMINISTRATION PANEL.

Here’s an overview of the process by which content pushes through to the signs:

[Diagram: Digital Sign Administration]

ADMINISTRATION PANEL – INTERFACE

The interface is a basic HTML table displaying a list of each digital sign and its sign-specific settings. Each sign is given a name, which is used by the client machines to choose the settings applicable to them on power-up. The location is used to change the overall look and branding of the signs at different buildings. Then follows a series of time settings which control how long each mode is displayed for – the signs flip between sponsor, poster and events-list modes. To control the sorts of content displayed on each sign (for example, to restrict one sign to just exhibition details), we use a comma-separated list of event types which match those used in the content management system (EMu). To keep a handle on which settings relate to which machine, a comments field allows us to make notes – even with just three identical machines deployed, it is a useful reminder of which is which, in case we wish them to behave differently in future.

[Screenshot: panel interface]

In addition to the settings displayed, there are some hidden columns which contain further settings, such as the URLs of the various APIs used to harvest data, which could one day change. These hidden settings are made available to edit at the click of a button.

CHANGING AND STORING SETTINGS

To prevent accidental changes being made to the table, users must click the padlock icon and enter a password. All data in the table then becomes editable, and changes are fed back in real time to be stored on the server. To let users see the effects of their changes on the content of each machine, the machine names become links which navigate to a web page emulating that particular digital sign.

EXPORTING SIGN SETTINGS

As part of the scheduled content update, the sign settings are extracted from MySQL and saved as a JSON text file (a similar additional file is required to store the arrow settings). As each digital screen knows its own name, it can access its settings by matching its machine name with the relevant node in the settings.JSON file.

[Screenshot: settings.JSON]
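By way of illustration, the exported file might look something like this – the machine names and values are invented, while the field names follow those referenced in the signage client code (location, posterMode_time, posterLoop_time, eventTypes):

{
    "default": {
        "location": "ALL",
        "posterMode_time": 40,
        "posterLoop_time": 10,
        "eventTypes": "EXHIBITION",
        "comments": "Fallback settings for unnamed machines"
    },
    "foyer-portrait-01": {
        "location": "BRISTOL MUSEUM AND ART GALLERY",
        "posterMode_time": 40,
        "posterLoop_time": 10,
        "eventTypes": "EXHIBITION,TALK",
        "comments": "Portrait screen by the front desk"
    }
}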

WAYFINDING ARROW SYSTEM

One of the biggest challenges in the solution was the requirement to build in a system of wayfinding arrows for each event. Not only does each arrow need to be configured for each room location, but each digital sign is in a different location, so the problem is compounded. This called for an entity relation between the event spaces and the digital signs. As we are using MySQL to store the sign settings, I added a new table in the database specifically to handle the arrows; and because each sign has multiple events, and each event can have multiple arrow directions depending on the sign location, we needed an additional interface to allow us to configure these settings.

To do this I extended the framework used to build the administration panel to include another panel for the arrows:

[Screenshot: arrow settings panel]

A nifty JavaScript plugin (http://designwithpc.com/plugins/ddslick#demo) allowed me to incorporate the wayfinding icons into a dropdown list, making it easy for administrators to change the settings:

[Screenshot: wayfinding icons in a dropdown list]

ADMIN PANEL – CLIENT SIDE

The administration panel is built using the Backbone JavaScript framework, with RequireJS to manage the dependencies. This allows for easy extensibility – for example, to incorporate the arrow wayfinding system.

[Screenshot: folder structure]

Backbone's model-syncing methods also make it straightforward to add new settings as new requirements arise, to match these with the database, and to perform updates:

[Code screenshot: Backbone model sync]

SERVER SIDE

A PHP script on the server listens for updates from the admin panel and saves them into MySQL. The same script returns the new settings as JSON, and it is this that is used to refresh the admin panel once changes have been made, and to make the settings available to the scripts involved in updating the content.

The next steps are to include icons for upstairs and downstairs, as I have observed museum visitors reading the 'up' symbol to mean straight ahead when in actual fact it was meant to direct people to the upper level.

NB: as ever, the devil is in the detail, and far more logic for this application has been baked into the source code than could practicably be explained here – so we hope to release the digital signage administration panel on GitHub once this development phase is over.

RESOURCES

http://backbonejs.org/

https://github.com/BristolMuseumsGalleriesandArchives


Working with the University and the Museum

This post is a short summary of how us lot at Aardman found it, working in partnership with Bristol Museum and Bristol University.

Partnerships

We are well versed in building partnerships with our various clients – be it to produce TV commercials, video games or digital tools. The Hidden Museum project pushed the partnership model to another level – with three equal partners, all aiming to achieve a goal that we defined ourselves, and trying to figure it all out together as we went along.

The Digital R&D Fund for the Arts requires project teams to have three partners from set disciplines: an arts partner, a research partner and a tech partner. This interdisciplinary trinity forms a super-stable foundation on which to work: the Museum provided the requirements, context and content; the University provided the objective framework; and we provided the means, management and distillation of everyone's ideas. Although of course, in reality, these distinctions blur a lot…

Project management

As a group of three very busy organisations – each with our own respective teams – it was invaluable to have a single point for all project management. And luckily we were very privileged to have two partners who were happy to bend their usual ways of working to how we do things at Aardman – organising the work and collaboration around our usual agile structure.

It really helped to have everyone local, able to meet up at short notice (and at minimal expense), and sharing a willingness to use the same communication tools. We generally used Basecamp for communications and Trello for task lists, and met regularly for sprint planning and closedowns.

Emphasis on research

The research goal of the project really helped shape our user stories – prioritising those that would best answer the research questions, rather than getting carried away by the technical wizardry at our fingertips or the huge breadth of content at the Museum. And the R&D focus liberated us all to genuinely respond to user-testing results – a rare privilege when working commercially.

Integrated expertise

Our close partnership enabled the museum's senior curator, Gail Boyle, to be a key member of the team – helping design the product in the full context of the space, and providing in-depth knowledge about the museum and its collections – as well as a massive willingness and effort to create all the content… a huge undertaking!

It’s not over yet…

For the time being, we've come to the end of our role in the project, and it's now ready for the research team to finish their testing, to find out how this kind of technology really impacts museum visits. It's been a great process – both liberating and focused – which has provided huge insights into each of our different worlds. Long may the partnership continue!

5.5.3 Technical accessibility review – The Hidden Museum App

Early on in our production we discussed levels of accessibility required for this app.

As a company, Aardman feels very connected to accessible digital products – we have created some highly accessible products in the past, the most accessible of which had to be the Something Special games, which include a range of settings for users of all cognitive and physical abilities. We are well versed in the creation of accessible apps, so we felt well prepared to advise on suitable levels of accessibility for this product. In one of our most recent mobile games, CBBC's Escargot Escape Artistes, as well as using only the simplest of gestures to play, players can choose to play using their voice alone, without any physical input.

Since this is a research project, and full accessibility can become a project all in itself, we decided that our goal would be to be 'as inclusive as possible' within the constraints of time and budget.

As a result, we decided that if we were going to target a single device for this research phase, we should target iOS devices, since these are known to have the best accessibility features as standard. We therefore deployed to a test base of iPad Air 2s for the research testing.

We also ensured that all text was a minimum height on screen, and that the colours used in the designs complied with the general colour-blindness guidelines for web design.

In terms of the app's design, at first – when we were leading users around the museum – care was taken to establish whether the user was able to use stairs. As the game design has moved away from a 'led' tour into more of a general guide, we have not had to establish this, but we have always offered users routes via stairs or lifts in the map view of the app, to ensure that all abilities are catered for.

4.2 Testing the Hidden Museum app by Mark Pajak

I'm Mark Pajak, a documentation officer for the Bristol Culture service. I tested the Hidden Museum app before starting work today. This is my first experience with the app, so it's all new, and I have no preconceptions to cloud my first impressions of it.

Design

A simple and colourful 'oversized' design with big buttons made the app very easy to navigate.

Usability

I didn't read any instructions except for those written inside each button, so following the steps the app wanted me to take was straightforward. In some cases real life got in the way of my game play, such as an impromptu meeting – but I can't fault the app for not knowing the museum was closed before 10, so the upper gallery was locked… or can I?

Fun

Yes – as a VERY regular museum visitor I am fairly locked into a routine, so anything out of the ordinary is novel, and there are still many galleries I rarely visit. A random object hunt was fun, and cut through the usual formalities of gallery interpretation and object arrangement to surprise me – not just with an object, but with new information about something I would normally not stop to look at.

Bugs

There was a lag on the scrolling when picking an avatar; other than that, I didn't spot the app doing anything it wasn't supposed to.

Other stuff

It took a while to realise the app could tell which direction I was pointing in – though with hindsight my iPhone can do that, so that's just what these devices do. It led me to consider how and why the app might use that information, and it gave a certain 'big brother' feeling – but doesn't everything these days? Also, I have a slight aversion to taking photos with an iPad, but that's just me :).

Features

I could imagine someone wanting to choose a different object just because they aren't fussed about climbing lots of stairs – but I guess that's where kids come in; the challenge of winning the game is probably enough to get feet moving.

Overall

Simple, quick, attractive and fun – which is impressive, and means there are some clever things going on 'behind the scenes'. Or at least that's my preconception.

Supporting evidence for milestone 4.2 – informal user testing