All posts by Mark Pajak

Developing a Prototype Digital Signage Application


 

We are soon to upgrade digital signage across various museum sites, and my role has been to develop the software mechanisms that gather and display the data for our prototypes. This is a brief post about how our prototype currently works. As a bit of background, our legacy signage is based on Flash which, although pretty and robust under certain circumstances, has several limitations that make it no longer a valid option.

Use Cases

The software would be used both by museum staff wishing to publish events and by visitors who need to access information about the timings and locations of those events. We also have other use cases, such as displaying messages from sponsors or from front-of-house staff.

Client Side

We chose to implement the signs in HTML/JavaScript as we already had a working model that could be adapted, and this gives us the most flexibility and control for future developments. I decided to use the Backbone JavaScript framework to organise the application because it allows different templates to be used for our different designs, and because of the way the sign data can be defined and extracted from various sources before being published. This lets us be flexible about which systems we use to manage the data – some of these are still in specification – so we have the option to change data sources quite easily in future. I also used RequireJS to manage the various other plugins and dependencies we may pick up during development. With this framework and application structure in place before work began, building the application was fairly straightforward, and the modular design means we can troubleshoot effectively and adapt the designs easily in future.
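To illustrate the structure (this is a minimal sketch rather than our production code – the module, template and field names are hypothetical), a sign view might look something like this:

// sign.js – a hypothetical RequireJS module defining one sign view
define(['jquery', 'underscore', 'backbone'], function ($, _, Backbone) {

    var SignView = Backbone.View.extend({
        el: '#sign',

        // an Underscore template; in practice each design has its own template file
        template: _.template('<h2><%= title %></h2><p><%= start %> – <%= location %></p>'),

        initialize: function () {
            this.listenTo(this.collection, 'sync', this.render); // redraw whenever fresh data arrives
        },

        render: function () {
            var html = this.collection.map(function (event) {
                return this.template(event.toJSON()); // fill the template from each event's attributes
            }, this).join('');
            this.$el.html(html);
            return this;
        }
    });

    return SignView;
});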

Server Side

Because we already use the Events Module of the KE EMu collections management software to manage the exhibition object and multimedia workflow, most of the data we want to publish to the signs already exists as event records – so we just needed a way to publish it straight from EMu. I developed a PHP API which returns a JSON list of events (title, description, dates, etc.) and can be accessed over Wi-Fi (hopefully!). To make the system more robust we also wanted the data and images to be held locally on the digital signs, so we needed another way to send and store the data; I adapted the API to also save the events list to a file which can be stored locally on each sign. Similarly, the multimedia needs to be saved locally in case the Wi-Fi goes down. To make life easier for staff we have commissioned a new tab in EMu specifically for digital signage – this brings together just the fields used to manage and display sign data, but it also means we can harness records that already exist in the system, in keeping with the ‘Create Once, Publish Everywhere’ ethos.
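As a rough illustration of how a sign consumes that feed (the URLs and file name here are hypothetical), a Backbone collection can point at the API and fall back to the locally saved copy if the request fails:

// events.js – a hypothetical collection of sign events
define(['backbone'], function (Backbone) {

    var Events = Backbone.Collection.extend({
        url: 'http://example.org/signage/api/events.php' // the live PHP API (illustrative URL)
    });

    var events = new Events();

    // try the live feed first; if the Wi-Fi is down, load the copy saved on the sign
    events.fetch({
        error: function (collection) {
            collection.fetch({ url: 'data/events.json' }); // file written locally by the update scripts
        }
    });

    return events;
});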

Additionally, I wanted to open up other options for sending source data to the signs, for staff who would not normally have access to our collections database, so I developed an API in Google Apps Script which allows us to manage and publish data from a Google Docs spreadsheet if needed.
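A minimal sketch of that approach – assuming a sheet named 'Events' with a header row; the sheet layout is illustrative rather than our actual script:

// Google Apps Script: publish spreadsheet rows as a JSON feed via doGet()
function doGet() {
    var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName('Events');
    var rows = sheet.getDataRange().getValues();
    var headers = rows.shift(); // the first row holds the column names

    var events = rows.map(function (row) {
        var event = {};
        headers.forEach(function (name, i) { event[name] = row[i]; });
        return event;
    });

    return ContentService.createTextOutput(JSON.stringify(events))
        .setMimeType(ContentService.MimeType.JSON);
}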

Update Scripts

We needed a mechanism to transfer the application and its content over to the signs to be held locally. Our digital team were experimenting with Ubuntu for the sign OS, so I built the data loader using Linux shell scripts. These scripts download a zipped version of the software on power-up and unzip the files, which also allows us to carry out upgrades to fix bugs and improve the design during testing. I decided to use a switch, contained in a settings file, to control whether the whole sign application gets updated or just the images and text to be displayed – this way I can update signs individually when testing new releases. The same settings also control which mode the sign is in, so we can specify landscape vs portrait, or which museum building the sign is in so the branding can be adjusted. This settings file has to live outside the main application so that we can use one app for all signs, and the process needs to be documented in the installation instructions.
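As an illustration only (these key names are hypothetical, not our actual file), the settings file is just a handful of values that the update scripts read on boot:

# sign-settings.conf – one per sign, kept outside the application folder
UPDATE_MODE=content     # 'full' replaces the whole application, 'content' refreshes just the data and images
ORIENTATION=portrait    # portrait or landscape layout
SITE=mshed              # which museum building the sign is in, used to select the branding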

So, the update scripts have logic for upgrading, or just updating the sign data, as well as some failsafe code in case of a partial download or no internet connection. The various update scripts are controlled by a master script set to run each time the sign is powered on, which also starts Chrome in full-screen kiosk mode with the various parameters for local file access and other bits.

Design

I used Chrome Dev Tools to build the front end, working from a design supplied by our in-house team. As the signs are pretty large and tall, the Chrome screen emulator helped to get the proportions right. We decided not to go with a responsive design because tests had already shown problems with CSS media queries when connecting to digital screens, there were no use cases for small screens, and our framework makes different designs easy to implement in the same app anyway. The main issue so far with the designs is not knowing how many event records there will be on any one day, so we don’t yet know whether we will have to scroll or rotate the records, or whether we will have trouble filling all the slots. For testing I added some code to pad out the records in case there were not enough to fill each entry. The HTML is fairly simple – just a table and an image – but it is created from the source data using Underscore, a prerequisite of Backbone. The designs also specified images fading in and out on rotation to represent the events, but not all events have images, so I used a separate template and Backbone collection for the images – this means the system won’t crash if not all events have images (unlike our legacy Flash software).
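For a feel of how the templating fits together, here is a hedged sketch – the markup and field names are illustrative, and it assumes the events collection from the earlier sketch – showing an event row template and a separate collection holding only the events that actually have an image:

// an Underscore template for one row of the events table (illustrative markup)
var rowTemplate = _.template(
    '<tr><td><%= title %></td><td><%= start %></td><td><%= location %></td></tr>'
);

// a separate collection containing only events with an image attached,
// so the image rotation never hits a missing file
var EventImages = Backbone.Collection.extend({});
var eventImages = new EventImages(
    events.filter(function (event) { return event.has('image'); })
);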

Further Information

Here’s a link to the latest release of the software on GitHub

Next steps

To work with Team Digital to refine and test the installation process, and to see what our users think.

 

 

 

iBeacons – our experience in the Hidden Museum app

This post is a longer-than-normal summary of our experience using iBeacons in the Hidden Museum project – intended to document a few pointers for anyone considering iBeacons for their own indoor navigation system. Caution… this veers more towards the technical underbelly of the project than the user-facing experience, which is covered in a separate post.

To give this all a bit more context, our basic setup is this: 1) a whole load of iBeacons placed around all three floors of the museum, 2) a device which uses their signals to calculate where it is in the museum, and 3) the device’s own compass telling it which way it’s pointing. With these three tools our users can navigate a generated tour of the museum, being led from room to room and floor to floor, with an app that reacts when they reach each destination on the tour.

Spoiler alert… the system works!

The beginning

From the outset our fundamental technical goal was to accurately guide a single device on a physical journey around the museum, and have it react when it reaches multiple, flexible locations.

iBeacons?

iBeacons emit a signal that can be picked up by a mobile device, and the strength of that signal tells the device a rough distance between it and the iBeacon. With a few assumptions, this suggests that the technology allows a mobile device to pinpoint its position within a known indoor space – the two most obvious methods being triangulation-style positioning and hotspot check-in systems.
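As a rough illustration of that signal-strength-to-distance step, here is a textbook log-distance model (not necessarily the exact ranging maths the device uses, and the constants are purely illustrative):

// Rough distance estimate from a beacon's RSSI using a log-distance path-loss model.
// txPower is the calibrated RSSI at 1 metre; n is an environment factor.
function estimateDistance(rssi, txPower, n) {
    txPower = txPower || -59; // a typical calibrated value, purely illustrative
    n = n || 2.5;             // indoor spaces tend to sit somewhere between 2 and 4
    return Math.pow(10, (txPower - rssi) / (10 * n)); // metres, very approximate
}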

We opted for the triangulation method – in theory, if it was successful we would be able to apply the system to any space and cater for all sorts of navigation methods, particularly when used in conjunction with the device compass.

Brands

If you’ve started looking into procuring iBeacons you’ll know there are loads of suppliers to pick from, and it’s not easy to see the difference (in many cases there isn’t much). After assessing a range of brands including BlueSense, Glimworm and the beautifully presented Estimote, we opted for Kontakt – primarily because they have easily replaceable batteries, are easily configurable, are the right price, supply in volume (we needed a lot), and are visually discreet. Here’s a comparison of the suppliers we looked at:

Supplier | URL | Volume pricing | Price per 100 (ex VAT + shipping)
Kontakt | http://kontakt.io/product/beacon/ | Yes | ~$2200 (need to contact for discount)
BlueSense Networks | http://bluesensenetworks.com/product/bluebar-beacon/ | Yes | £1499
Glimworm beacons | http://glimwormbeacons.com/buy/20-x-packages-of-4-glimworm-ibeacons-white-gloss-finish/ | Yes | €1980
Sensorberg | http://www.sensorberg.com/en/ | No | €89 per 3
Sticknfind | https://www.sticknfind.com/indoornavigation.aspx | No | $389 per 20
Estimote | http://estimote.com | No | $99 per 3
Gelo | http://www.getgelo.com/beacons/ | No | $175 per 5

Placement and security

The triangulation method requires a large number of iBeacons throughout the museum building in precise locations – effectively creating a 3D grid of signals. These need to be out of reach and ideally invisible to both the public and staff, as otherwise they might be accidentally moved, tampered-with or even taken… any of which will cause serious navigation bugs in our software. This meant that colourful and attractive iBeacons such as Estimote were out of the picture for this project.

Software choices

We decided to implement the navigation system in Unity 3D. Although it’s primarily a game engine, it is where our core mobile experience lies, it satisfies the cross-platform requirements of real-world implementations, it is popular with developers and has a super-low barrier to entry, and it has very little reliance on proprietary tech.

Triangulation method in Unity

Triangulation maths

So… how best to implement triangulation in Unity? We take the perceived distance from all ‘visible’ iBeacons, and from that we work out the precise position of the device. After a few sprints of getting neck-deep in advanced mathematics, we opted to use Unity’s built-in physics engine to do the heavy lifting for us – using spring joints from each iBeacon to automagically position the device on a virtual map, based on the perceived distance from each iBeacon in range, letting Unity perform the complex maths for us.
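For anyone curious about the underlying idea, here is a minimal sketch in plain JavaScript (explicitly not our Unity spring-joint implementation – just the estimation problem it solves for us): nudge an estimated position until its distances to the beacons best match the measured distances.

// beacons: [{ x, y, distance }, ...] – beacon positions are known, distances are measured
function estimatePosition(beacons, start, iterations, step) {
    var pos = { x: start.x, y: start.y };
    for (var i = 0; i < iterations; i++) {
        var gx = 0, gy = 0;
        beacons.forEach(function (b) {
            var dx = pos.x - b.x, dy = pos.y - b.y;
            var d = Math.sqrt(dx * dx + dy * dy) || 0.0001;
            var error = d - b.distance;  // positive if the estimate is too far from this beacon
            gx += (error * dx) / d;      // gradient of the squared-error term
            gy += (error * dy) / d;
        });
        pos.x -= step * gx;              // move a little towards a better fit
        pos.y -= step * gy;
    }
    return pos;
}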

Maths and Unity

Below is a video of an early test in the Aardman atrium – displaying the device’s perceived position and direction within a virtual 3D model of the building as the user walks around. The bright-coloured areas on the mobile display are the two doorways. We’re not embarrassed to say that when we got this working it blew our minds a little bit.

Internal early testing

Reliability

For a triangulation system to work effortlessly the distance data it’s based on needs two things: to be accurate, and to be updated frequently.

iBeacon distance readings tend to be fairly inaccurate – with meaningful variance even in the best conditions (up to 3 metres out), and much worse in bad conditions (physical interference such as pillars or people, and electrical interference such as laptops or mobile devices). Accuracy does tend to increase the closer the iBeacons are to the device.

Frequency is also an issue. Users move around a museum space surprisingly fast, and with our system only able to read signals about once a second, the positioning data needs a lot of smoothing to avoid the position flipping out every time an update occurs.
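To give a flavour of the kind of smoothing we mean (the factor is illustrative), an exponential moving average means a single wild reading only nudges the value rather than snapping the position across the map:

// damp each new reading against the previous smoothed value
function smooth(previous, reading, factor) {
    factor = factor || 0.2; // smaller = smoother, but slower to react
    return previous + factor * (reading - previous);
}

var smoothedDistance = 5.0;                       // last smoothed value, in metres
smoothedDistance = smooth(smoothedDistance, 9.5); // a noisy new reading, heavily damped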

The compass

The compass is a tricky little system to wrangle. It is 100% reliant on the accuracy of the device’s hardware and software, which isn’t great when it comes to smartphones and tablets. Even in the best conditions digital compasses are likely to be up to 20% inaccurate (see http://www.techhive.com/article/2055380/six-iphones-tested-and-they-cant-agree-on-true-north.html) – and in bad conditions (such as an indoor space with lots of electrical interference and organic, metal or stone structures everywhere) we’ve seen readings out by up to 90 degrees. Really not ideal for leading users around a space accurately.

Three-dimensional placement

Map of Bristol Museum and Art Gallery

We knew that iBeacons work on distance, so the height at which we placed them would make a difference. But – perhaps naively – we didn’t expect this to cause much of an issue, so long as it was consistent. We didn’t take into consideration how powerfully the signals could penetrate through floors and ceilings, and certainly didn’t foresee issues caused by open atriums and balconies.

Bristol Museum and Art Gallery is a complicated building, with vast rooms (some without ceilings), small rooms, corridors, stairwells, about 6 different levels over three defined floors, and even galleries overlooking other rooms as balconies.

In such a space not only is it difficult to find a consistent position in which to place the iBeacons – there are also many opportunities for the device to get a bit confused about which floor it’s on, particularly when it’s picking up signals from the floors above and below that happen to be stronger than the closest signals from the room it’s physically in.

With a standard GPS system this would be like expecting it to tell you not just which side of the multi-storey car park you’re in, but which level you’re on. And while iBeacon triangulation is vastly easier in simple environments that can be mapped in two dimensions, it is still possible in three – and we actually did it in the end.

 

Handling shortcomings

So… there are a number of technical issues covered in this post, and each of them has led us to simplify and adapt the experience – even though the underlying tech is largely the same. We quickly learned to accept the huge variance in the quality, accuracy and timeliness of the data our navigation is based on, and to soften the blow as much as possible so that the user’s experience isn’t affected:

  1. The inaccuracies and signal latency of iBeacons led us to free our user experience from relying on pin-point positioning – rounding the user’s position up to just the room they’re definitely in.
  2. The compass inaccuracies led us not to rely on the compass to lead users around footstep by footstep, but rather to let them occasionally find their bearings when stationary.
  3. The issues caused by three-dimensional inaccuracies led us to create navigation logic that only recognises movement between adjacent rooms – so if the triangulation data suddenly suggests the device has changed floor, the change is only accepted if the user has just left an appropriate stairwell or lift area (a sketch of this rule follows below).
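A minimal sketch of that adjacency rule (the room names and helper are hypothetical): a reported floor change is only accepted if the last confirmed room is a stairwell or lift that actually connects the two floors.

// which rooms are allowed to lead to which other rooms across floors
var connectors = {
    'ground-stairwell': ['first-stairwell'],
    'first-stairwell': ['ground-stairwell', 'second-stairwell'],
    'second-stairwell': ['first-stairwell']
};

function acceptRoomChange(currentRoom, suggestedRoom, floorOf) {
    if (floorOf(suggestedRoom) === floorOf(currentRoom)) { return true; } // same floor: always fine
    var allowed = connectors[currentRoom] || [];
    return allowed.indexOf(suggestedRoom) !== -1; // a floor change must go via a connector
}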

What’s brilliant about these solutions is how they each have significant emergent benefits for the overall user experience. Our users are not staring at the phone constantly, tripping over bags and other visitors; they’re using their brains, senses and communication to guide themselves.

All of these user experience developments and considerations will be covered in a separate post on this blog.

Summary

While iBeacons may not provide the perfect navigation system, they really aren’t a bad approach for both indoor and outdoor navigation – particularly if you go in with your eyes open to the potential issues. We achieved a slick and functional product, learning loads as we went… and hopefully this post has highlighted the issues to watch out for in your own iBeacon implementations. Thanks for reading!

This blog post acts as ‘milestone 3’ evidence: Doc 3.1, 3.2, 3.3, 3.4 & 3.5 (video test gif).

Creating characters for our Hidden Museum app

We’ve been beavering away creating characters for our app. Creating a style for this exercised our grey matter somewhat!

We needed to create a look which complemented the very clean visual style that our lead designer for the project, Sarah Matthews, created for the app. However, from our years of experience creating characters for apps and games, we know that younger children react much better to characters with an element of realism to them.

In user testing we trialled three potential styles:

A photographic style, using photographs of artifacts from around the museum:

Photographic character

A silhouetted style, which fits very well with the UI design, but is much less realistic in style:

Silhouette character

A geometric style, which is stylised like our UI design, but has much more realism:

Geometric character style

The overwhelming vote was for the geometric style, which thankfully our production team all liked too – so we decided to go down this route.

Next we had to decide what our characters should be – we had some ideas and Gail was great at helping us hone these down. We wanted them to really represent the diversity of the exhibits at the museum, and also be appealing to all age groups and to both males and females. So we settled on: the ubiquitous dinosaur; George Peppard, the boxkite pilot who flies the plane in the main hall; a Chinese dragon head, as represented in the Chinese dragons over the stairs in the main hall; a Roman goddess, to represent the museum’s wealth of Roman artifacts; a female Egyptian mummy with gorgeous coloured paint; and… Alfred the Gorilla (obviously!)

Here are the results!

All characters

Still a bit of work to do on the Mummy to make her a little more female, but more or less there.

Testing the stories/tour – Kids in Museums User Testing Day

At its core, our Hidden Museum app takes users on a tour around Bristol Museum, guiding them to places they might not ordinarily go and revealing hidden information as they progress through the game.

To simulate this experience our assistant creative director, Rich Thorne, behaved as the app, using one of the story-tours we had created. He did this by leading the group around the museum to a range of artifacts on a theme (in this case ‘horses’) and engaging them in conversation about each artifact as they went – explaining how they were linked and telling them an interesting story about each artifact once it was found.

Kids on tour questions

The aim of this was to:

  • See how long it took them to get round the journey as a group.
  • Get feedback from the children on any standout points of interest on the tour.

Key observation points for the supervisors of the tour were:

  • Is the tour engaging, interesting?
  • Is the tour too long/short?
  • Which object was the favourite on the journey?
  • Have they visited the museum before?
  • Have they ever been to the top of the museum?
  • Is it more fun having a checklist, as opposed to walking around the museum looking at everything?
  • Do they have ideas for trail themes which they would like to go on?

Kids in museums kids look at cabinet

What we found:

We discovered that the tours we had devised gave away too much in advance about what the group were going to see. For example, the horse trail said up front that the group were going to find objects relating to horses, which took some of the excitement out of the tour. We needed to find a way to broaden the themes out so they become more subjective and feel less curated.

We also learnt that this kind of curation meant that we were not making the most of the ‘hidden’ metaphor of our app. Whilst we were leading the group to areas they might not otherwise have gone to, it did not allow for enough free exploration of rooms and free thought around the objects themselves – getting lost in the museum and ‘accidentally’ discovering something hidden should be a desirable side effect of the app so we needed to find a way to allow for this.

The groups, particularly the kids, were very interested in collecting and counting. They were also particularly keen on emotive subjects – picking out items which were ‘weird’ or ‘strange’ or ‘scary’ or ‘cute’.

Kids in Museums – User Testing Day – Overview

On Friday 21st November, the Hidden Museum team were lucky enough to have the opportunity to be part of a Kids in Museums takeover day. Kids in Museums is an independent charity dedicated to making museums open and welcoming to all families, in particular those who haven’t visited before.

Kids in Museums Logo

This was a great opportunity for us to test our app in production with real users – over 30 kids and 6 adults who had never seen the app before! A real coup for us as developers: a chance to get some real insight into how our app might be received on completion.

However, we were conscious that we did not want to take advantage of the day and its main aim of making museums more open to kids and families. So we took great care to plan the day with the education coordinators at the museum, Naif and Karen, to ensure that we were providing a fun experience for the kids as well as testing elements of our app. As there were adults supervising the kids, we also tested all elements with the supervising adults to get an impression of a family group’s opinion of each element (rather than kids’ opinions only) – very important as our app is aimed at a mixed-age group.

Kids on tour

Naif and Karen suggested that we warmed the kids up with a ‘fingertips explorers’ activity – where each child felt an artifact while blindfolded and had to describe it to their friends, who guessed what it could be. (A fun game which the kids really enjoyed, and which we have since used as an influence for one of our own games!)

We decided that after the fingertips explorers warm up the kids would be ready for app testing!

Kids in museums kids looking at zebra

As the app was not yet complete, we decided to test elements of the app broken down, rather than the experience as a whole. We broke our testing down into 4 elements:

Testing the stories/tour

Testing the iBeacons/compass interface

Testing the UI

Testing the games

This decision was reached mainly through necessity, since the app was not complete. However, we found that it really worked for us and allowed us to get some really in-depth insight into our app. This was for two reasons: firstly, it allowed us to break the group of children down into smaller, more manageable groups, so we were able to have real conversations with each of the children in turn; and secondly, it allowed us to assess which elements of the app they struggled with the most, and so exactly where we should be making our improvements.

There were lots of great testing outcomes from the day around each of the app elements outlined above – I’ll update the blog with how we tested each of them (and the associated learnings) over the coming days.

 

Running Google Chrome in Kiosk Mode – Tips, Tricks and Workarounds

We are using Google Chrome to publish collections-based information and multimedia to the galleries in M Shed using a web application. Here are a few pointers which have helped us get the system up and running.

How to run Chrome in kiosk mode?

Kiosk screenshot

Google Chrome comes with a built-in kiosk mode which makes it load up as a full-screen browser, without the usual menu bars and features that would normally let you navigate away from or close down the app. There are various ways you can do this, all of which involve tagging the --kiosk argument onto the command that starts Chrome. N.B. don’t try this unless you can CTRL-ALT-DEL out of it! The script below can be saved as a .bat file in Windows and run from the command line, or the extra arguments can be inserted into the properties of the Chrome icon used to load up the application.

The following batch script loads up chrome in kiosk mode at a specific page:

start "Chrome" chrome file:///C:/Kiosk2014/CaseLayout.htm?KIOSK=LB-DS-ICT02 --kiosk

So far so good, but there are lots of reasons why this alone is not sufficient for the gallery environment. Here are some problems you may run into, and the workarounds we have found…

1.) Chrome comes with an array of shortcut key combinations that let you access its hidden features, such as the Chrome task manager (Shift + Esc) and downloads (Ctrl + J). This means that if your gallery PCs have keyboards then users may be able to hack their way out of Chrome and into the PC, or off into the web (why a gallery keyboard would have Shift and Escape keys is beyond me, but ours do). Preventing this takes a minimal amount of JavaScript to catch the key press events and swallow them before they are passed to the browser for interpretation. Here’s what is working for us right now:

$(document).keydown(function (e) {            // when any key is pressed
    if (e.keyCode == 27 || e.keyCode == 18) { // if the key is Escape or Alt
        e.preventDefault();                   // swallow it – do nothing
    }
});

2.) How to run a website saved on the local file system? By default Chrome won’t access scripts in files held locally (although Firefox will). Our kiosk applications are all held locally on each machine as a safety precaution in case of network downtime. To overcome the default behaviour in Chrome, add the following argument to the startup command above:

--allow-file-access-from-files

So the command we now have is:

start "Chrome" chrome file:///C:/Kiosk2014/CaseLayout.htm?KIOSK=LB-DS-ICT02 --kiosk --allow-file-access-from-files

3.) When the machine is rebooted or you force quit Chrome, it restarts with the message “Chrome didn’t shut down correctly” in a yellow bar at the top of the screen, which must be manually closed. This is unsightly for users and likely to be a common occurrence in the gallery environment. To overcome this we pass another parameter to the start command, which makes Chrome start in incognito mode and prevents the message. So our command is now this:

start "Chrome" chrome file:///C:/Kiosk2014/CaseLayout.htm?KIOSK=LB-DS-ICT02 --kiosk --allow-file-access-from-files --incognito

4.) When the web application crashes for whatever reason, there may be no way for a user to reload the page, or for the page to reload itself automatically. There are many different sorts of crashes that can occur, such as the ‘Aw, Snap’ error where Chrome shows an unsmiley face and a link to reload the page or navigate away off into the net. Since fixing some bugs and optimising our code we have not seen this error for a while, but we do have a method to return to the web app if something goes wrong. One method is to use Windows Task Scheduler to close and reopen the web app after a set period of time, and handily we already have most of the code for this in the above command. We set the task to be triggered when the computer is idle for 5 minutes, and make it run the following script – which kills any running Chrome process and restarts the app at the right page:

@echo off

taskkill /F /IM chrome.exe /T

start "Chrome" chrome file:///C:/Kiosk2014/CaseLayout.htm?KIOSK=LB-DS-ICT02 --kiosk --allow-file-access-from-files --incognito

Incidentally, this is exactly the same script we have in our startup folder, so that whenever the computer is rebooted the application starts.

5.) Despite the above, we were still suffering from a ‘grey screen of death’ every once in a while, which loaded the background page in the right colour but failed to load anything else. This was probably due to the complexity of the application and its various plugins, but it was very undesirable and almost impossible to replicate in our development environment. What was clear was that when the grey screen happened, none of the JavaScript files for the app had been loaded, rendering it useless and stuck. The workaround was to bind an onClick event to the document body which forces a page reload, and then to remove this event with JavaScript once the application has loaded successfully. This means that if the script files fail to load, the page will reload when someone touches it, and chances are everything will then be OK – and if everything is OK, the reload click event is removed and the application functions normally.

So, at the top of the document we have this:

<body onClick="location.reload()">

and right before the closing body tag we have this:

<script type="text/javascript">
$(function () {
    setTimeout(function () {
        // if this element exists, the app has loaded – so remove the reload handler
        if ($("#VisitorStoriesHelpText").length > 0) {
            $('body').attr("onClick", "");
        }
    }, 1000); // check one second after the document is ready
});
</script>

Actually, we haven’t seen the grey screen of death for a while, but at least it is no longer a show stopper.

So in conclusion – Google Chrome can be run in full-screen mode as a gallery kiosk application, but it is not plain sailing, and in the gallery environment you should expect to see strange things happening. We are not out of the woods yet, and the legacy hardware keeps us on our toes, but at least in terms of the web application we can overcome many of the issues that this solution has presented us with.

 

*UPDATE*

We have seen some strange artefacts on a number of machines – pixellation and tearing, where coloured speckles appear or portions of the screen go black, and in some cases the whole machine becomes unresponsive. This only happens in the live environment and may be something to do with Chrome kiosk mode vs Windows 7. To solve this I have added another flag to the Chrome command: --disable-gpu. With this added the pixellation goes away, and it returns when the flag is removed. Time will tell whether this solution holds water, or whether it puts too much load on the CPU. We still have room for optimisation (using Chrome Dev Tools), so we should be able to reduce the load if problems persist.

 

Raspberry Pi as a Touchscreen Kiosk

I am currently downloading a web application onto a Raspberry Pi in the hope that it will work. The idea is to use Dropbox to sync a web directory when the device boots up, which will then be accessed via a touchscreen interface. All files are held and referenced locally, so if the internet goes down there is no downtime for people in the gallery.

pi

The system is already up and running on a Mac Mini, and working well. Our problem is that only one Mac Mini in the gallery has an operating system that can run a modern browser such as Chrome, which is needed to run the gallery app.

The main test is whether the Chromium web browser on the Pi behaves in the same way as Chrome on the Mac – if this fails we will need to rethink things. Perhaps the JavaScript could be tweaked to make it work, but maintaining two versions of the app would be rather time consuming.

If and when the app works, the test will be one of performance, and whether any CSS effects run too slowly for this to be a feasible replacement for the Mac Minis.*

* This blog post comes midway through the project, so some background information is needed: we are in the process of migrating a gallery interactive solution from a stand-alone system into the main collections database. In doing so we are redeveloping the legacy Flash-based applications into a more sustainable JavaScript web application. During the project it has become clear that new hardware is required in order to run modern web browsers, and the budget implications of replacing 14 Mac Minis have got us experimenting with the Pi.

Watch this space….

Installing Dropbox on a Raspberry Pi