
Exhibitions online

We recently (softly softly) went live with Exhibitions Online.

It's a place to translate our in-house exhibitions for an online audience. We worked with Mike and Luke at Thirty8 Digital to create a narrative structure with scroll-through content and click-through chapters on WordPress. They built in lovely features such as object grids, timelines, slideshows, maps and quotes.

There are a few exhibitions already up: past (death: the human experience), present (Empire through the Lens) and future (What is Bristol Music?). We’ve most recently used it for our European Old Masters gallery, to showcase a beautiful painting we have on loan for two years: St Luke Drawing the Virgin and Child by Dieric Bouts. (I discovered the Pantone app with this one, taking the red from the gallery to use online. V satisfying.) I’m currently working with the exhibition team to get our Pliosaurus! exhibition up – watch this space for some fun things with that one, which we’re hoping to use for interpretation in our Sea Dragons gallery at Bristol Museum & Art Gallery too.

(For the What is Bristol Music? exhibition opening in May 2018, we’re using the WP plugin Gravity Forms to collate people’s experiences and pictures of the Bristol music scene, to be featured in the physical exhibition. Chip in if you have a story to tell.)

So far, we’ve found the content and arrangement really depend on the exhibition. The idea isn’t to simply put the physical exhibition online (I say ‘simply’, as if it would be) but instead to use the format and content of the exhibition to engage with people in a different environment: albeit one where we’re competing with a thousand other things for people’s attention. Exhibitions which have been and gone have been slightly more challenging, as the content was never intended for this use and has needed some wrangling. The more we use it, though, the smoother the process is getting, now that we know what we need and it’s on teams’ plans as something to consider.

We’re still in the early stages of reviewing analytics to see how people are using it. Initial results are heartening, though, with a few thousand visits despite minimal promotion. At the moment most people are finding it from our what’s on pages (where most of our traffic to the main website goes anyway) and we’re thinking about what campaigns we can run to get it out there more.

Any feedback or thoughts, hmu → fay.curtis@bristol.gov.uk

Digital Curating Internship

We are currently university students at UWE (University of the West of England) studying history with heritage, as the first students on this programme of study. We have been given the fantastic opportunity to work with the digital department at Bristol Culture – which runs the various museums and heritage sites in and around Bristol – on its first digital curating internship. This fully complements what we have been studying, and continue to study, within our degrees, and will allow us to put what we have learnt to practical use.

Over the course of the next eight weeks we will be working alongside various departments, collections and projects, offering us a unique insight into the heritage industry.

What does digital curating mean to us?

For us, digital curation is the future of 21st-century museology; its implementation and development allows for four significant benefits:

• It democratises information, reducing barriers to entry.
• It increases the potential use of collections.
• It stimulates further research.
• It widens community engagement to ever greater and more diverse audiences.

As fantastic as these systems can be, there is still room for further advancement. We have already learnt in our short time here that the issues include inconsistencies across departments, collection backlogs, dirty data, and the lack of secure ways of sharing detailed information between institutions. Despite these hurdles, the drive to expand and improve digital curation continues, with great hope for what can be achieved in this field.

Expectations for the role:

Through this role we aim to:

• Engage with and critique existing cataloguing methods and SPECTRUM-standard archival systems such as EMu.
• Develop strategies for increasing engagement with both collections and institutions.
• Develop the necessary skills and experience to pursue a career within the heritage industry.
• Work closely and network with a variety of heritage professionals within the South West.

We look forward to expanding both our knowledge and our experience, and eagerly anticipate what this internship has in store over the next eight weeks.

Digital Object Labels

At Bristol Museums we use EMu to manage digital interpretation, and have several galleries with touchscreen kiosks displaying object narratives. We haven’t yet settled on a single technology, framework or data model as each new project gives us opportunities to test out new ideas, based on what our audiences want and on our previous learning. The refurbishment of our European Old Masters Gallery has given us the opportunity to extend the printed interpretation into digital.

(C) John Seaman, Bristol Culture

The classic look of the gallery means label space is kept to a minimum, and this has reduced the amount of printed interpretation available on the physical labels. Digital gives our curators the opportunity to expand on the depth of interpretation by writing more detailed descriptions of paintings. Our challenge was to come up with a solution that provided in-gallery mobile digital interpretation that was easy to access and fast to load, and that made sense in context.

Taking a user-focused approach, we were keen to provide appropriate technology for the sorts of visitors the gallery attracts. Our audience research shows that mobile technology is standard among these visitors, as explained by Darren Roberts, our user researcher.

Our audience segmentation shows that three of the core audience segments for Rembrandt – City Sophisticates, Career Climbers, and Students – are all over 20% more likely than average visitors to use their mobile phone to access educational web content or apps. All three groups are also over 20% more likely than average to agree with the statement ‘I couldn’t live without the internet on my mobile’. These three segments account for over a third of the general audience for the museum.

Ranked in order of segments that are both most likely to have an interest in Antiques and Fine Art and use their mobile phone to access free educational content or apps:

  1. Student Life

  2. Lavish Lifestyles

  3. City Sophisticates

  4. Career Climbers

  5. Executive Wealth

The top three are over 40% more likely than average visitors to engage in both these activities. All five are expected to be part of the core audience for the Rembrandt exhibition.


With this in mind, we set about analysing the printed labels – looking at where data could be brought in automatically from our collections management system (EMu) to minimise the effort of writing content. As it turns out we already had most of this data (artist name, birth date, death date etc.), so the main curatorial effort could be focused on text writing for the labels, while we designed the template to bring the data together.


Thanks to some preliminary experiments, we already had a working framework – we are using AngularJS on the client side for rapid prototyping, templating, routing and deployment.

Our next challenge was to optimise performance and maximise up-time. Having been inspired by the linked open data movement, we opted for having the data sit in structured JSON files that could be reused multiple times by various apps without querying the database directly. This had the double effect of reliability and speed. We did a similar thing with multimedia, running a regular content refresh cycle and packing everything up for the app to use, with images saved at sizes for thumbnail and detail views.
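As a sketch of the idea (the exact fields vary by record, and the names here are illustrative rather than our production schema), each painting’s data is packaged as a small self-contained record, with an index file listing the objects and one file per record:

//example object record (illustrative)

{
  "id": "14135",
  "title": "Narcissus",
  "artist": {
    "name": "Example Artist",
    "birthDate": "1606",
    "deathDate": "1669"
  },
  "description": "Curator-written interpretation text goes here...",
  "images": {
    "thumbnail": "images/14135_thumb.jpg",
    "detail": "images/14135_detail.jpg"
  }
}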

The finished template was as follows – we opted for a minimalist design for ease of reading, and with responsive elements the pages work across multiple devices.

Mobile object label

The process of selecting source fields and mapping them to the template inevitably threw up areas where our database use could be improved. Where before we had data spread across many fields, we have now laid out better guidelines for object cataloguing that should ease this issue – for the app to work we needed set fields from which to extract information about the painting and artist.

We also had to deal with inconsistencies in terminology, for example the various ways dates could be written – on printed labels these variations are permitted, but we needed to define the semantic patterns for this to work digitally. As a result of this process we now have a workflow for improving the way we catalogue our objects.
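A hypothetical sketch of the kind of normalisation involved – mapping the variants found on printed labels onto one agreed pattern (the variants and target format here are examples, not our actual ruleset):

//normaliseDate.js (illustrative)

// normalise the date variants found on printed labels into one pattern
function normaliseDate(raw) {
  var s = raw.trim();

  // "c.1630", "circa 1630" -> "about 1630"
  s = s.replace(/^c(?:irca)?\.?\s*/i, 'about ');

  // "1606-1669" -> "1606–1669" (en dash between years)
  s = s.replace(/(\d{4})\s*-\s*(\d{4})/, '$1\u2013$2');

  return s;
}

console.log(normaliseDate('c.1630'));      // "about 1630"
console.log(normaliseDate('1606 - 1669')); // "1606–1669"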

Where some terms were abbreviated on the labels, e.g. “b” and “d” for birth and death, we expanded these on the digital labels, as space was not an issue and we also felt this was easier for users to read and understand – digital allows us to implement some of our user-focused principles without disrupting the printed gallery interpretation.

Call to action

Through in-gallery user testing we found that whilst some features were obvious to us, visitors were not always getting to the bits we wanted them to see – we therefore added a call to action to make it clear what was available…

“Find out more about the objects in this gallery”

Something we are interested in finding out is how users navigate to their chosen painting. User stories and personas are one method we could use to get a better understanding of this. To facilitate various user journeys, we provide different routes to each digital label: searching by painting name, filtering on the artist’s name, or browsing through the list view.

list view

Technical details:

The routing mechanism of AngularJS gave us a simple way to navigate from the list view to the record view by altering the # parameter as follows:

List view: museums.bristol.gov.uk/labels

Record view: museums.bristol.gov.uk/labels/#/id/14135/narcissus
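A minimal sketch of how this looks with ngRoute – the template paths and controller names here are illustrative, not our production code:

//app routing (illustrative)

// hash-based routing between the list view and the record view
angular.module('labelsApp', ['ngRoute'])
  .config(['$routeProvider', function ($routeProvider) {
    $routeProvider
      // museums.bristol.gov.uk/labels -> list of paintings
      .when('/', {
        templateUrl: 'views/list.html',
        controller: 'ListController'
      })
      // .../labels/#/id/14135/narcissus -> a single object label
      .when('/id/:id/:slug', {
        templateUrl: 'views/record.html',
        controller: 'RecordController'
      })
      .otherwise({ redirectTo: '/' });
  }]);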

We also included some libraries for smooth page loading to improve the user experience. At this stage we don’t know whether the digital labels have a use outside the gallery, but in case they do we wanted the pictures to be zoomable, and there was a code library that allows this. N.B. this is not yet deep zoom, but we are on the road to achieving that.

Data stuff

We want to be able to reuse our structured data on paintings and artists – their info and dates – whenever new technology comes along, so our data layer exists independently of the application, and sits outside our database on a publicly accessible endpoint. If you want to use any of it in JSON form, take a look here:

We store lists of objects in separate index.json files here:

museums.bristol.gov.uk/labels/data

And for detailed info about an object, you can load up records by their id here:

museums.bristol.gov.uk/labels/id

Structures and paths may change as we develop the system, so apologies if these are not accessible at any point. We change bits to improve loading time and reliability, but we aim to settle on a standard approach to our data layer with time.

We are also figuring out what structure our object (JSON) records need in order to maximise their use outside of our collections management system. Where dates and places exist in several source fields, we can prioritise these on export to choose which dates are most suitable, and similarly for places.

We construct a standard object schema in JSON via a scheduled content refresh script, which queries the IMu API, prioritises which fields to include, and saves the result as JSON.
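In outline, the refresh script does something like the following. This is a Node sketch: queryImu stands in for the real IMu API call (a hypothetical helper, not the actual IMu client), and the field names and priorities are illustrative:

//contentRefresh.js (illustrative sketch)

var fs = require('fs');

// pick the first populated field from an ordered list of candidates
function prioritise(record, candidates) {
  for (var i = 0; i < candidates.length; i++) {
    if (record[candidates[i]]) { return record[candidates[i]]; }
  }
  return null;
}

// queryImu is a stand-in for the real IMu API call
queryImu({ module: 'ecatalogue', gallery: 'old-masters' }, function (records) {
  var objects = records.map(function (rec) {
    return {
      id: rec.irn,
      title: rec.title,
      // several source fields may hold a date – choose the most suitable
      dateMade: prioritise(rec, ['dateMadeDisplay', 'dateMade', 'dateCirca']),
      placeMade: prioritise(rec, ['placeMadeDisplay', 'placeMade'])
    };
  });

  // one index file for the list view...
  fs.writeFileSync('data/index.json', JSON.stringify(objects));

  // ...and one file per record for the detail view
  objects.forEach(function (obj) {
    fs.writeFileSync('data/id/' + obj.id + '.json', JSON.stringify(obj));
  });
});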


Next steps

We have implemented this in one gallery so far, and for one object type. We are now looking to roll this out to other galleries and look forward to similar challenges with different types of objects.

We are also extending the design of the prototype to bring in timelines and mapping functionality. These bring an interactive element to the experience and also provide new ways of visualising objects in time and space.

We included the TimelineJS3 library in our framework and hooked it up to the same data powering the object labels. This provides a comparison of artists’ lives with each other, and with the paintings they produced.

We need to tweak the CSS a little, but out of the box it works well, thanks to the kind people at Knight Lab.
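Hooking it up is essentially a matter of reshaping the artist data into Knight Lab’s JSON format and handing it to the library. A sketch, assuming `artists` is an array already loaded from our data layer (the field names on our side are illustrative):

//timeline (illustrative sketch)

// build TimelineJS3 events from the same artist data behind the labels
function toTimelineEvents(artists) {
  return artists.map(function (artist) {
    return {
      start_date: { year: artist.birthYear },
      end_date: { year: artist.deathYear },
      text: {
        headline: artist.name,
        text: 'Paintings in this gallery: ' + artist.paintings.join(', ')
      }
    };
  });
}

// TimelineJS3 renders into a container element by id
var timeline = new TL.Timeline('timeline-embed', {
  title: { text: { headline: 'Artists of the European Old Masters gallery' } },
  events: toTimelineEvents(artists)
});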

Interactive artist timeline

Take a look at our alpha for the digital timeline here.

Remarks

The project has made us rethink some of our cataloguing standards – we are aligning our internal data capture and export to be better equipped to make use of new web tools for public engagement.

We have decoupled the tasks of writing label text, reusing object data, and applying narrative metadata. We also have a process that would allow new layers of interpretation to be written and published to the same application architecture, and we can present a simplified data entry process to staff for label writing.


Although we haven’t solved the problem of how to improve uptake of the application in-gallery, we’ll be ready when someone does. If it’s iBeacons that do it – and we think it might be – we can direct users to a single object label using a unique URL.

For now though, it is just a trusty old URL to point people to the page, where they then navigate further – we’d love to remove this barrier at some point.


Bristol Museum Egypt Exhibition Web-App

Hi, I’m Dhruv, and I’m a second-year computer science student at the University of Bristol. As part of our Software Product Engineering module, five other team members and I are creating an interactive web-app for the Egypt Exhibition at Bristol Museum.

The purpose of this web-app is to let visitors browse the exhibition while viewing more information about each of the exhibits on their phones, instead of using the currently installed kiosks. The following is a light technical overview of how it works.

The web-app is built on a full JavaScript stack, with Node.js and Express on the back-end and AngularJS on the front-end. Using frameworks based around the same language made it easier for all members of our team to get involved with every part of the application, as the skills transfer easily. Our system builds the website from data exported from EMu, meaning that any updates to exhibit contents are easily displayed – be that tweaks to artefact data or entire cabinet changes. We make this happen by designing templates for the specific types of page that exist, and using AngularJS to dynamically inject the appropriate content when the page is requested.
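As a rough sketch of the shape of this (routes, module names and file paths are illustrative, not our actual code): the back-end serves the EMu export as JSON, and an Angular controller injects it into the page template.

//server.js (illustrative sketch)

var express = require('express');
var app = express();

// static assets: the Angular app, templates, css
app.use(express.static('public'));

// exhibit content, re-exported from EMu whenever it changes
app.get('/api/exhibits/:id', function (req, res) {
  res.sendFile(__dirname + '/data/exhibits/' + req.params.id + '.json');
});

app.listen(3000);

//client side (illustrative sketch)

// an Angular controller injects the exported content into the template
angular.module('egyptApp')
  .controller('ExhibitController', ['$scope', '$http', '$routeParams',
    function ($scope, $http, $routeParams) {
      $http.get('/api/exhibits/' + $routeParams.id).then(function (response) {
        $scope.exhibit = response.data; // rendered via bindings in the page template
      });
    }]);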

We took this approach as we felt it allowed a closer interaction with the content, as well as dealing with the issue of multiple people wanting to use a kiosk at the same time. It also allows users’ current accessibility settings (such as larger text for those with visual impairments) to be carried over.

The web-app is still in development, but some screenshots of the current implementation can be seen below.

We’ve been carrying out some user testing, and have had quite a bit of good feedback. Thanks to anyone who took the time to fill out our feedback forms!

Overall, the project has been thoroughly interesting, as it’s allowed me to expand my technical skills, but also through seeing bits of what makes Bristol Museum work smoothly.

Anatomy of our Digital Signage Web App

At this stage in the development of our digital signage, we have a working release of the software in the live environment, and we are focussing on training, improvements to the design and data structure for the next version. This post is about the nuts and bolts of how the client-side app works, while it is still fresh.

Mode Schematic

Firstly, it is a single-page web application – loaded by calling index.html from a web browser. Inside index.html are just the basics you’d expect. The magic is all controlled via RequireJS – a library used to pull together all of the source code in the right order and make sure files don’t get loaded twice, etc. All of the content of the app is loaded and removed via a single content div in the body.

index.html 
(... some bits removed...check the GitHub page for the whole lot)


<!DOCTYPE html>
<html>
  <head>
    <title>BMGA Digital Signage</title>
    <link rel="stylesheet" href="css/styles.css">
    <!-- RequireJS bootstraps the app from js/main.js -->
    <script data-main="js/main" src="js/libs/require/require.js"></script>
  </head>
  <body class="nocursor">
    <!-- every view is rendered into this single div -->
    <div id="mainContent"></div>
  </body>
</html>

The first JavaScript to load up is main.js. This simple file follows the RequireJS format, and is used to alias the code libraries which get used the most, such as jQuery.

//main.js

require.config({
  paths: {
    jquery: 'libs/jquery/jquery-min',
    underscore: 'libs/underscore/underscore-min',
    backbone: 'libs/backbone/backbone-min',
    templates: '../templates'
  }
});

// kick everything off
require(['app'], function (App) {
  App.initialize();
});

Next up is app.js. This loads up the code libraries required to start the app, and brings in our first global function – used to close each ‘view’. For a single page app it is really important to destroy any lingering event handlers and other bits which can take up memory and cause the app to go a bit crazy – something that Backbone apps have difficulties with, otherwise known as Zombie Views. Killing Zombies is important.

//app.js

define([
  'jquery',
  'underscore',
  'backbone',
  'router'
], function ($, _, Backbone, Router) {

  var initialize = function () {

    // KILL ZOMBIE VIEWS!!!!
    // every view gets a close() that removes its event handlers and
    // empties its element before the next view takes over
    Backbone.View.prototype.close = function () {
      this.undelegateEvents();
      this.$el.empty();
      this.unbind();
    };

    Router.initialize();
  };

  return {
    initialize: initialize
  };
});

It gets a bit more fun next as we call the Backbone ‘router’ – from now on I’ll only add snippets from the files; to see the lot, head to GitHub. The router is what drives navigation through each of the modes the screens can display. Each route takes its parameters from the url, which means we can control the modes by appending the text ‘sponsors’, ‘posters’ or ‘events’ to index.html in the browser.

In addition to the mode, we can pass in parameters – which poster to display, which page of sponsors, which venue, etc. This was a solution to the problem of how to remember which posters have not yet been shown. If you only want poster mode to last 40 seconds, but you’ve got lots of posters, you need to remember which poster comes next in the sequence. Additionally, as you loop through modes, you need to pass each parameter along until you are back on poster mode. This is why every route has all the parameters for venue and poster.

This slightly convoluted situation has arisen because we use a page refresh to flip between modes, and so, without relying on local storage, our variables only survive as long as the page does.

//router.js

var AppRouter = Backbone.Router.extend({
  routes: {
    // each mode carries the full set of parameters so that state
    // survives the page refresh used to flip between modes
    'sponsors(/venue:venue)(/stick:stick)(/logo:page)(/poster:page)(/machine:machine)': 'sponsors',
    'posters(/venue:venue)(/stick:stick)(/logo:page)(/poster:page)(/machine:machine)': 'posters',
    'events(/venue:venue)(/stick:stick)(/logo:page)(/poster:page)(/machine:machine)(/date:all)': 'events'
  }
});

The code for a single route looks a bit like this, and works as follows. We start off with an option to stick or move – this allows us to have a screen stay on a particular mode. Then we look at our settings.JSON file, which contains the machine-specific settings for all of the signs across each venue. The machine name is the only setting held locally on the system, and this is used to let each machine find its node of settings (loop times, etc.).

...
 app_router.on('route:posters', function (venue, stick, logoOffset, posterOffset, machine) {

   // defaults for any parameters missing from the url
   var stick = stick || "move";
   var logoOffset = logoOffset || 0;
   var posterOffset = posterOffset || 0;
   var machineName = machine || 'default';

   // find this machine's node in the settings file
   var Allsettings = JSON.parse(Settings);
   var settings = Allsettings[machineName];

   var venue = settings.location || "ALL";
   self.venue = venue;

   // close the previous view so no zombie handlers survive
   if (Globals.currentView) {
     Globals.currentView.close();
   }

   var posterView = new PosterView({
     venue: self.venue,
     stick: stick,
     logoOffset: logoOffset,
     posterOffset: posterOffset,
     machine: machine,
     settings: settings,
     type: settings.eventTypes
   });

   posterView.addPostersFromLocalFile();
   Globals.currentView = posterView;

 }),
....

With all settings loaded, filtered by machine name, and the mode specified, we are ready to load up the view. This contains all of the application logic for a particular mode, brings in the HTML templates for displaying the content, and performs the data fetches and other database functions needed to display current events/posters… more on that in a bit.

Amongst the code here are some functions used to check the orientation of the supplied image, cross-reference it with the screen dimensions, and then check whether that particular machine is ‘allowed’ to display mismatched content. Some are and some aren’t – it kinda depends. When we push a landscape poster to a portrait screen, we have lots of dead space. A4 looks OK on both, but anything squished looks silly. So in the dead space we can display a strapline, which is nice, until there is only a tiny bit of dead space. And yes, there is some code to make the font smaller if there is just enough room for a caption… it turns out poster mode wasn’t that easy after all!

//view.js

define([
  'jquery',
  'underscore',
  'backbone',
  'text!templates/posters/posterFullScreenTemplate_1080x1920.html',
  'text!templates/posters/posterFullScreenTemplate_1920x1080.html',
  'collections/posters/PostersCollection',
  'helpers/Globals'
], function ($, _, Backbone, posterFullScreenTemplate, posterFullScreenTemplateLandscape, PostersCollection, Globals) {

  var PosterView = Backbone.View.extend({

    el: $("#eventsList"),

    addPostersFromLocalFile: function () {

      var self = this;
      self.PostersCollection = new PostersCollection({ parse: true });
      self.PostersCollection.fetch({
        success: function (data) {
          // unwrap the poster array, then filter by event type and venue
          self.PostersCollection.reset(data.models[0].get('posters'));
          self.PostersCollection = self.PostersCollection.byEventType(self.settings.eventTypes);
          self.PostersCollection = self.PostersCollection.venueFilter(self.venue);
          self.renderPosters(self.PostersCollection);

          $(document).ready(function () {
            // re-render every posterLoop_time seconds; unless 'stuck',
            // hand over to the next mode after posterMode_time seconds
            setInterval(function () {
              self.renderPosters(self.PostersCollection);
              if (self.stick == "move") {
                setTimeout(function () {
                  self.goToNextView(self.posterOffset);
                }, self.settings.posterMode_time * 1000);
              }
            }, self.settings.posterLoop_time * 1000);
          });

        }, dataType: "json"
      });

    },

    renderPosters: function (response) {

      var self = this;
      // NB displayCaption, miniFont and LANDSCAPE are set up in code
      // omitted here – see GitHub for the full file

      if (self.orientationSpecific == 1) {

        // enforced orientation lock: advance to the next landscape poster
        // (the opening of this block was garbled in the original post and
        // has been reconstructed to mirror the block below)
        while (LANDSCAPE == false) {

          if (self.posterOffset >= response.models.length) { self.posterOffset = 0 }

          var width = response.models[self.posterOffset].get('width');
          var height = response.models[self.posterOffset].get('height');
          LANDSCAPE = (parseInt(width) >= parseInt(height));

          if (LANDSCAPE == true) { break; }
          self.posterOffset++;
        }
      }

      if (self.orientationSpecific == 2) {

        // enforced orientation lock
        while (LANDSCAPE == false) {

          if (self.posterOffset >= response.models.length) { self.posterOffset = 0 }

          var width = response.models[self.posterOffset].get('width');
          var height = response.models[self.posterOffset].get('height');
          LANDSCAPE = (parseInt(width) >= parseInt(height));

          if (LANDSCAPE == true) { break; }
          self.posterOffset++;
        }
      }

      // shrink (or hide) the strapline font as the dead space shrinks
      ImageProportion = width / height;
      if (ImageProportion <= 0.7) { miniFont = 'miniFont' }
      if (ImageProportion <= 0.6) { miniFont = 'microFont' }
      if (ImageProportion <= 0.5) { miniFont = 'hideFont' }
      if (ImageProportion >= 1.4) { miniFont = 'hideFont' }

      // NB 'TemplateVarialbes' (sic) matches the name used in the templates
      self.$el.html(self.PostertemplateLandscape({
        poster: response.models[self.posterOffset],
        displayCaption: displayCaption,
        miniFont: miniFont,
        offset: self.posterOffset,
        TemplateVarialbes: Globals.Globals
      }));

      ....

  return PosterView;

});

Referenced by the view is the file which acts as a database would – the collection – and there is a collection for each data type. The poster collection looks like this; its main function is to point at a data source, in this case a local file, and then allow us to perform operations on that data. We want to be able to filter on venue and on event type (each machine can be set to filter on different event types), so below you see the functions which do this… and they cater for various misspellings of our venues, just in case 🙂

//postercollection.js

define([
  'underscore',
  'backbone',
  'models/poster/posterModel'
], function (_, Backbone, PosterModel) {

  var PosterCollection = Backbone.Collection.extend({

    model: PosterModel, // each record loaded becomes a PosterModel

    sort_key: 'startTime', // default sort key

    url: function () {
      var EventsAPI = 'data/posters.JSON';
      return EventsAPI;
    },

    // keep only posters whose event type appears in this machine's list
    byEventType: function (typex) {
      typex = typex.toUpperCase();
      var filteredx = this.filter(function (box) {
        var typeToTest = box.get("type");
        if (box.get("type")) {
          typeToTest = box.get("type").toUpperCase();
        }
        return typex.indexOf(typeToTest) !== -1;
      });
      return new PosterCollection(filteredx);
    },

    // keep only posters for this venue (catering for common misspellings),
    // plus any posters with no venue set at all
    venueFilter: function (venue) {
      if (venue.toUpperCase() == "M SHED") { venue = "M SHED" }
      if (venue.toUpperCase() == "BMAG") { venue = "BRISTOL MUSEUM AND ART GALLERY" }
      if (venue.toUpperCase() == "MSHED") { venue = "M SHED" }
      var filteredx = this.filter(function (box) {
        var venuetoTest = box.get("venue");
        if (box.get("venue")) {
          venuetoTest = box.get("venue").toUpperCase();
        }
        return venuetoTest == venue || box.get("venue") == null;
      });
      return new PosterCollection(filteredx);
    },

    parse: function (data) {
      return data;
    }

  });

  return PosterCollection;

});

Referenced by the collection is the model – this is where we define the data that each poster record will need. One thing to watch here is that the field names match exactly those in the data source. When Backbone loads in data from a JSON file or API, it looks for these field names in the source data and loads up the records accordingly (models, in Backbone speak). So once the source data is read, we populate our poster collection with models, each containing the data for a single poster.

//postermodel.js

define([
  'underscore',
  'backbone'
], function (_, Backbone) {

  // field names here must exactly match those in the source data –
  // backbone uses them to populate each model when the JSON is read
  var PosterModel = Backbone.Model.extend({

    defaults: {
      category: 'exhibition',
      irn: '123456',
      startDate: '01/01/2015',
      endDate: '01/01/2015',
      venue: 'MSHED',
      caption: 'caption',
      strapline: 'strapline',
      copyright: '© Bristol Museums Galleries and Archives'
    }

  });

  return PosterModel;

});

With the collection loaded with data, and all the necessary venue and event filters applied, it is time to present the content – this is where the templates come in. A template is an HTML file with a difference. The poster template contains the markup and styling needed to fill the screen, and uses the underscore library to insert text and images into the design.

/*posterFullScreenTemplate_1080x1920.html */

<style>

body{
    background-color:black;
    color: #BDBDBD;
}
  
#caption{
    position: relative;
    margin-top: 40px;
    width:100%;
   z-index:1;
  /*padding-left: 20px;*/
}

.captionText{
    font-weight: bold;
    font-size: 51.5px;
    line-height: 65px;
}

.miniFont{
   font-size: 35px !important;
   line-height: 1 !important;
}

...

</style>


<div id="sponsorCylcer"> 
 <% 
 var imageError= TemplateVarialbes.ImageRedirectURL+ poster.get('irn') + TemplateVarialbes.ImageSizePrefix
 var imageError= TemplateVarialbes.ImageRedirectURL+poster.get('irn') + TemplateVarialbes.ImageSizePrefix 
 %>
 <div id="poster_1" class="">
 <img onError="this.onerror=null;this.src='<% print(imageError) %>';" src="images/<%= poster.get('irn') %>.jpg" />
 <div id="imageCaption"> <%= poster.get('caption') %><br> <%= poster.get('copyright') %></div>
 </div>
 


 <% if (poster.get('type').indexOf("poster") !== -1 && displayCaption==true){ %>
 <div id="datesAndInfo">
 <h1>from <%= poster.get('startDate') %> till <%= poster.get('endDate') %></h1>
 </div>

 <%} else{ 
 if ( displayCaption==true){ 

 %>
 <div id="caption">
 <div class="captionText <% if( miniFont!=false){print(miniFont)} %>" > <%= poster.get('strapline').replace(/(?:\r\n|\r|\n)/g, '<br />') %> </div>
 <%} } %>
 </div>
</div>>
 


 <% if (poster.get('type').indexOf("poster") !== -1 && displayCaption==true){ %>
<div id="datesAndInfo">
<h1>from <%= poster.get('startDate') %> till <%= poster.get('endDate') %></h1>
</div>

<%} else{ 
if ( displayCaption==true){ 

%>
<div id="caption">
<div class="captionText <% if( miniFont!=false){print(miniFont)} %>" > <%= poster.get('strapline').replace(/(?:\r\n|\r|\n)/g, '<br />') %> </div>
<%} } %>

Once the template is loaded, the poster displays, and that’s pretty much job done for that particular mode – except that we want posters to be displayed on a loop, so the view reloads the template every x seconds, depending on what has been set for that machine in the digital signage administration panel. A master timer controls how long the poster loop has been running and moves to the next mode after that time. A counter also keeps note of the number of posters displayed and passes that number across to the next mode, so when poster mode comes back round, the next poster in the sequence is loaded.
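A sketch of what that hand-off might look like, given the route format above – this just illustrates the pattern of passing state through the url (the real code is on GitHub):

//mode hand-off (illustrative sketch)

// the counters ride along in the url, and a page refresh flips the mode
function goToNextView(venue, machine, posterOffset) {
  window.location.hash = 'events/venue' + venue +
    '/stickmove/logo0/poster' + posterOffset +
    '/machine' + machine;
  window.location.reload();
}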

Remarks

Folder structure

Using the RequireJS/Backbone framework for the application has kept things tidy throughout the project and has meant that adding new modes and database fields is as hassle-free as possible. It is easy to navigate to the exact file to make changes – which is pretty important once the app gets beyond a certain size. Another good thing is that bugs in one mode don’t break the app, and if there is no content for a mode the app flips to the next without complaining – this is important in the live environment where there are no keyboards in easy reach to ‘OK’ any error messages.

Furthermore, the app is robust – we have it running on Ubuntu, Windows 7 [in Chinese] and a Raspberry Pi, and it hasn’t crashed so far. If it does its job right, the application architecture won’t get noticed at all (which is why I am writing this blog) – the content will shine through… one reason I have avoided any scrolling text or animations so far: posters look great just as they are, filling the screen.

Now that our content editors are getting to grips with the system, we are starting to gather consensus about which modes should be prominent in which places – after all, if you have different modes, not every visitor will see the same content – so is there any point in different modes? Let the testing commence!


Acknowledgements

Thanks to Thomas Davis for the helpful info at backbonetutorials.com and Andrew Henderson for help Killing Zombies.


DESIGNING AN ADMINISTRATION SYSTEM FOR SERVICE WIDE DIGITAL SIGNAGE

BACKGROUND

We recently launched a system for service-wide digital signage across multiple devices, operating systems, screen sizes and screen orientations. I developed the solution with flexibility as a priority, to allow us to adapt as new situations and requirements arise. In practice, going live was the best form of testing, and we continue to tweak the signs based on their position, content and user needs.

If there is a take-home message from this process, it is not to underestimate the number of variables in even the simplest form of display. That is to say, if the system is to be flexible, then these variables need to be made available for administrators to tinker with, without them needing to change the source code. This calls for an administration system designed specifically to manage the variables for the digital displays, which I have called the DIGITAL SIGNAGE ADMINISTRATION PANEL.

Here’s an overview of the process by which content pushes through to the signs:

Digital Sign Administration

ADMINISTRATION PANEL – INTERFACE

The interface is a basic HTML table displaying a list of each digital sign and its sign-specific settings. Each sign is given a name, which the client machines use on power-up to choose the settings applicable to them. The location is used to change the overall look and branding of the signs at different buildings. Then follows a series of time settings which control how long each mode is displayed for – the signs flip between sponsor, poster and events-list modes. To control the sorts of content displayed on each sign (for example, to restrict one sign to just exhibition details) we use a comma-separated list of event types which match those used in the content management system (EMu). To keep a handle on which settings relate to which machine, a comments field allows us to make notes – even with just 3 identical machines deployed, it is a useful reminder of which is which, in case we want them to behave differently in future.

Panel interface

In addition to the settings displayed, there are some hidden columns containing further settings, such as the urls of the various APIs used to harvest data, which could one day change. These hidden settings can be edited at the click of a button.

CHANGING AND STORING SETTINGS

To prevent accidental changes to the table, users must click the padlock icon and enter a password. All data in the table then becomes editable, and changes are fed back in real time to be stored on the server. To let users see the effects of their changes on the content of each machine, the machine names become links to a web page which emulates that particular digital sign.

EXPORTING SIGN SETTINGS

As part of the scheduled content update, the sign settings are extracted from MySQL and saved as a JSON text file. A similar additional file stores the arrow settings. As each digital screen knows its name, it can access its settings by matching its machine name against the relevant node in the settings.JSON file.

settings.JSON
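To illustrate the shape of this file – the field names below are indicative, based on the settings described above, rather than the exact production schema:

//settings.JSON (illustrative)

{
  "BMAG-foyer-01": {
    "location": "BRISTOL MUSEUM AND ART GALLERY",
    "posterMode_time": 40,
    "posterLoop_time": 10,
    "eventTypes": "EXHIBITION,TALK",
    "comments": "portrait screen by the main entrance"
  },
  "default": {
    "location": "ALL",
    "posterMode_time": 40,
    "posterLoop_time": 10,
    "eventTypes": "ALL",
    "comments": "fallback settings for unnamed machines"
  }
}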

WAYFINDING ARROW SYSTEM

One of the biggest challenges in the solution was the requirement to build in a system of wayfinding arrows for each event. Not only does each arrow need to be configured for each room location, but each digital sign is in a different location, so the problem is compounded. This called for an entity relationship between the event spaces and the digital signs. As we are using MySQL to store the sign settings, I added a new table in the database specifically to handle the arrows; because each sign has multiple events, and each event can have multiple arrow directions depending on the sign location, we needed an additional interface to configure these settings.

To do this I extended the framework used to build the administration panel to include another panel for the arrows:

Arrow settings
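For illustration, the exported arrow data might look something like this – an indicative sketch rather than the exact schema – with each sign getting a direction per event space:

//arrows.JSON (illustrative)

{
  "BMAG-foyer-01": {
    "Front Hall": "up",
    "Gallery 5": "left",
    "Auditorium": "right"
  },
  "BMAG-balcony-02": {
    "Front Hall": "down",
    "Gallery 5": "right"
  }
}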

A nifty JavaScript plugin (http://designwithpc.com/plugins/ddslick#demo) allowed me to incorporate the wayfinding icons into a dropdown list to make it easy for administrators to change the settings:

icons in dropdown

ADMIN PANEL – CLIENT SIDE

The administration panel is built using the Backbone JavaScript framework, with RequireJS to manage the dependencies. This allows for easy extensibility, for example to incorporate the arrow wayfinding system.

Folder structure

Backbone’s model syncing methods also make it more straightforward to add new settings as new requirements arise and to match these with the database and perform updates:

backbone sync
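As a rough sketch of the pattern (the model, endpoint and field names are illustrative, not the production code): a new setting only needs adding to a model’s defaults and to the corresponding MySQL column, and Backbone’s save() takes care of the update request.

//signsetting.js (illustrative sketch)

var SignSetting = Backbone.Model.extend({
  // the PHP endpoint that reads/writes the MySQL settings table
  urlRoot: 'api/settings.php',

  defaults: {
    machineName: 'default',
    location: 'ALL',
    posterMode_time: 40,
    posterLoop_time: 10,
    eventTypes: 'ALL'
  }
});

// editing a cell in the admin panel boils down to something like:
var sign = new SignSetting({ id: 3 });
sign.fetch({
  success: function () {
    // save() sends the changed attributes back to the server
    sign.save({ posterMode_time: 60 });
  }
});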

SERVER SIDE

A PHP script on the server listens for updates from the admin panel and saves these into MySQL. The same script returns the new settings in JSON; this is used to refresh the admin panel once changes have been made, and also makes the settings available to the scripts involved in updating the content.

The next steps for this are to include icons for upstairs and downstairs, as I have observed museum visitors reading the up symbol to mean ‘directly ahead’ when in fact it was meant to direct people to the upper level.

NB: as ever, the devil is in the detail – far more logic for this application has been baked into the source code than could practicably be explained here, so we hope to release the digital signage administration panel on GitHub once this development phase is over.

RESOURCES

http://backbonejs.org/

https://github.com/BristolMuseumsGalleriesandArchives


bristolmuseums.org.uk – phase two, milestone two

Well it seems it’s March already, which means we’re now two milestones into phase two of the website project.

We’ve done a chunk of work on events filtering, which you can try out here: http://www.bristolmuseums.org.uk/whats-on/. Hopefully you’ll agree it’s pretty simple and useful. Of course we did a spot of user testing for it, and got lots of positive noises from people – let us know what you think of it.

We also worked a bit on improving how our opening times are displayed. We added the option to add ‘notes’ to particular days, which is mainly for Bristol Record Office, who have a range of opening times across any given week or month. We’re really trying to make it as clear as possible when our sites are open (and of course each of the six sites has different opening times across different seasons over any given year).

Other work for milestone one included nicer 404 pages, a WordPress upgrade and some other bits and bobs from phase one.

So, onto milestone two. During February we held three workshops – for venue hire, what’s on and learning. In these we got a load of people from all over the service together to map out who our users are and what they need from us in each case. Ben over at fffunction is going to talk more about how we get from the workshops to the prototypes in a future post, but for now I’ll leave you with a couple of images to show where we are with our venue hire section. At the moment we’re testing the prototype and putting together some visual designs for it. I’m sure it won’t be long until it’s live, and in the meantime we’re starting to think about how we show our learning offer and enable users to book workshops online.

Visual designs for venue hire

Venue hire prototype

 

Video Subtitling – making content more accessible

We aim to make the museum more accessible for all visitors, and adding subtitles goes a long way to help us achieve this. For example, see the video below of the audio description DiscoveryPens in use in the French Art Gallery at their launch event.

DiscoveryPENS launch – at Bristol Museum and Art Gallery

I realised that our museum videos, too, can and should be made more accessible – and I don’t just mean spreading them all over the world wide web, but making them accessible for all visitors to our website!
Our aim: to increase the accessibility of a video’s content by using subtitles for all video and audio content.

The simple task of subtitling.
I’ll tell you now: it isn’t too difficult, but it can be a monotonous task. I have detailed what I did below, with some advice on how you can do this effectively. Bear in mind that this is just my personal experience – if you have found a better way then let me know.

About 1 minute of video = 1 hour of transcribing

Therefore, before you start, you can estimate how long it will take to subtitle your video. It does depend on how long your video is, and how much of it is dialogue! Essentially it is a small thing to do to advance the accessibility of your video content.

What I did:
• Watched the video and listened to the person talking.
• Paused the video and wrote down the speech in a document – this is for record keeping, and to make sure there would be no spelling or grammar mistakes when I copied the text across to subtitle the video on YouTube.
• Made the subtitles match up to the visuals (usually the person speaking).
• Made sure each subtitle had enough time on screen to be read.

*Helpful tip: to check the timing was correct, I would turn off the sound and see if I had enough time to read each subtitle.

• I would cut the dialogue to a sentence length which I thought looked good on screen,
• fitted appropriately with the natural pauses in the speech,
• and did not fight the editing cuts when the subtitles changed from sentence to sentence.

The cuts from shot to shot should be in step with the cuts from sentence to sentence.

Both words and visual cuts should appear smooth and not resist each other.

• When the sentences were added to the video in the appropriate place, I would record their start and end times (there’s an example cue at the end of this post).
• I would add these to the Word document next to the transcribed speech.

*Helpful tip: to avoid your transcription document looking like a mass of unapproachable entries, I suggest separating out the different parts of your video.

Written dialogue

• For example: Beginning, Middle, and End.
• Or the different people speaking, if they speak in separate chunks.

This is usually where the video cuts to a different part – your Word document should try to reflect this.

• I also suggest doing this as you go! Your document will look a lot better, and doing it at the end would be a harder task.
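YouTube’s subtitle editor captures these timings for you, but it also accepts uploads in standard formats. If you keep your transcript as a WebVTT file, each cue pairs a start and end time with the text to display – the cue text below is illustrative:

subtitles.vtt (illustrative)

WEBVTT

1
00:00:02.000 --> 00:00:06.500
Welcome to the launch of our
audio description DiscoveryPens.

2
00:00:07.000 --> 00:00:11.000
Each pen reads out a description
of the object in front of you.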

Raspberry Pi as a Touchscreen Kiosk

I am currently downloading a web application onto a Raspberry Pi in the hope that it will work. The idea is to use Dropbox to sync a web directory when the device loads up, which will then be accessed via a touchscreen interface. All files are held and referenced locally, so if the internet goes down there is no downtime for people in the gallery.

pi

The system is up and running already on a Mac mini, and working well. Our problem is that only one Mac mini in the gallery has an operating system that can run a modern browser such as Chrome, which is needed to run the gallery app.

The main test is whether the Chromium web browser on the Pi behaves in the same way as Chrome on the Mac – if this fails we will need to rethink things. Perhaps the JavaScript could be tweaked to make it work, but maintaining two versions of the app would be rather time consuming.

If and once the app works, the test will be one of performance, and whether any CSS effects run too slowly for this to be a feasible replacement for the Mac minis.*

* This blog post comes midway through the project, so some background information is needed: we are in the process of migrating a gallery interactive solution from a standalone system into the main collections database. In doing so we are redeveloping the legacy Flash-based applications into a more sustainable JavaScript web application. During the project it has become clear that new hardware is required to run modern web browsers, and the budget implications of replacing 14 Mac minis have got us experimenting with the Pi.

Watch this space….

Installing Dropbox on a Raspberry Pi