Category Archives: Digital services

Integrating Shopify with Google Sheets (magic tricks made to look easy)

In Team Digital we like to make things look easy, and in doing so we hope to make life easier for people. A recent challenge has been how to recreate the Top sales by product analysis from the Shopify web application in Google Sheets, to see how the top 10 selling products compare month by month. Until now, creating a monthly breakdown of product sales had been a manual task of choosing from a date picker, exporting data, copying to Google Sheets, and so on.

Having already had some success pushing and pulling data to Google Sheets using Google Apps Script and our Culture Data platform, we decided to automate the process. The goal was to make getting the sales analysis into Google Sheets as easy as possible for the user – all they should need to do is select the month they wish to import.

We have developed a set of scripts for extracting data using the Shopify API, but needed to decide how to get the data into Google Sheets. Whilst there is a library for pushing data from a Node application into a worksheet, our trials found it to be slow and prone to issues where the sheet did not have enough rows, among other unforeseen circumstances. Instead, we performed our monthly analysis on the Node server and saved this to a local database. We then built an API for that database that could be queried by shop and by month.
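To give a flavour of that API, here is a rough sketch of the kind of endpoint involved – the collection and field names (monthlySales, netSales and so on) are illustrative rather than our actual schema:

//salesApi.js - illustrative sketch of a per-shop, per-month sales endpoint
const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();

MongoClient.connect('mongodb://localhost:27017').then(client => {
  const sales = client.db('culturedata').collection('monthlySales');

  // e.g. GET /api/sales/bmag-shop/2018-03 returns that month's product totals
  app.get('/api/sales/:shop/:month', async (req, res) => {
    const docs = await sales
      .find({ shop: req.params.shop, month: req.params.month })
      .sort({ netSales: -1 }) // top sellers first
      .toArray();
    res.json(docs);
  });

  app.listen(3000);
});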

The next step, using Google Apps Script, was to query the API and pull in a month’s worth of data, then save this to a new sheet named after the month. This could then be added as a macro so that it was accessible to the user from the toolbar – a familiar place, at their command.
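A stripped-down version of that Apps Script might look something like this – the API URL and the field names coming back from it are placeholders:

//importSales.gs - rough sketch only
function importMonth() {
  var ui = SpreadsheetApp.getUi();
  var month = ui.prompt('Which month? (e.g. 2018-03)').getResponseText();

  // pull one month of pre-computed sales from our API
  var response = UrlFetchApp.fetch('https://example.org/api/sales/bmag-shop/' + month);
  var rows = JSON.parse(response.getContentText());

  // write it to a sheet named after the month, creating the sheet if needed
  var ss = SpreadsheetApp.getActiveSpreadsheet();
  var sheet = ss.getSheetByName(month) || ss.insertSheet(month);
  sheet.clearContents();

  sheet.appendRow(['Product', 'Quantity', 'Net sales']);
  rows.forEach(function (r) {
    sheet.appendRow([r.product, r.quantity, r.netSales]);
  });
}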

As the data is required on a monthly basis, we need to schedule the server-side analysis to save a new batch of data after each month – something we can easily achieve with a cron job. The diagram below shows roughly how the prototype works on the server side and the Google Sheets side. Interestingly, the figures don’t completely match the in-application analysis by Shopify, so we have some error checking to do. However, we now have the power to enhance the default analysis with our own calculations, for example incorporating the cost of goods into the equation to work out the overall profitability of each product line.
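As a sketch of the scheduling, using the node-cron package (a system crontab entry would do the same job) – runMonthlyAnalysis is a stand-in for the real analysis code:

//schedule.js - sketch only; runMonthlyAnalysis() is a placeholder
const cron = require('node-cron');

async function runMonthlyAnalysis() {
  // placeholder: fetch last month's orders from the Shopify API,
  // total them up per product and save the results to the local database
}

// at 02:00 on the 1st of every month, analyse the previous month
cron.schedule('0 2 1 * *', () => {
  runMonthlyAnalysis()
    .then(() => console.log('Monthly sales analysis saved'))
    .catch(err => console.error('Analysis failed', err));
});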

 

 

Preserving the digital

From physical to digital to…?

At Bristol Culture we aim to collect, preserve and create access to our collections for use by present and future generations. We are increasingly dealing with digital assets amongst these collections – from photographs of our objects, to scans of the historical and unique maps and plans of Bristol, to born-digital creations such as 3D scans of our Pliosaurus fossil. We are also collecting new digital creations in the form of video artwork.

Photo credit Neil McCoubrey

One day we won’t be able to open these books because they are too fragile – digital will be the only way we can access this unique record of Bristol’s history, so digital helps us preserve the physical and provides access. Inside are original plans of Bristol’s most historic and well-known buildings, including the Bristol Hippodrome, which require careful unfolding and digital stitching to reproduce the image of the full drawing inside.

Plans of the Hippodrome, 1912. © Bristol Culture

With new technology comes new opportunities to explore our specimens and this often means having to work with new file types and new applications to view them.  

This 3D scan of our Pliosaurus jaw allows us to gain new insights into the behaviour and biology of this long-extinct marine reptile.

Horizon © Thompson & Craighead

This digital collage by Thompson & Craighead features streaming images from webcams in the 25 time zones of the world. The work comes with a Mac mini and a USB drive in an archive box and can be projected or shown on a 42″ monitor. Bristol Museum is developing its artist film and video collection and now holds 22 videos by artists including Mariele Neudecker, Wood and Harrison, Ben Rivers, Walid Raad and Emily Jacir, ranging from documentary to structural film, performance, web-based film and video, and animation, in digital, video and analogue film formats, with accompanying installations.

What could go wrong?

So digital assets are helping us conserve our archives, explore our collections and experience new forms of art, but how do we look after those assets for future generations?

It might seem like we don’t need to worry about that now, but as time goes by there is constant technological change: hardware becomes unusable or non-existent, software changes, and the very 1s and 0s that make up our digital assets can deteriorate through a process known as bit rot. Additionally, just as is the case for physical artefacts, the information we know about them, including provenance and rights, can become dissociated. What’s more, the digital assets can and must multiply, move and adapt to new situations, new storage facilities and new methods of presentation. Digital preservation is the combination of procedures, technology and policy that we can use to help prevent these risks from rendering our digital repository obsolete. We are currently in the process of upskilling staff and reviewing how we do things so that we can be sure our digital assets are safe and accessible.

Achieving standards

It is clear we need to develop and improve our strategy for dealing with these potential problems, and that this strategy should underpin all digital activity whose output we wish to preserve and keep. To address this, staff at Bristol Archives, alongside Team Digital and Collections, got together to write a digital preservation policy and roadmap to ensure that preserved digital content can be located, rendered (opened) and trusted well into the future.

Our approach to digital preservation is informed by guidance from national organisations and professional bodies including The National Archives, the Archives & Records Association, the Museums Association, the Collections Trust, the Digital Preservation Coalition, the Government Digital Service and the British Library. We will aim to conform to the Open Archival Information System (OAIS) reference model for digital preservation (ISO 14721:2012). We will also measure progress against the National Digital Stewardship Alliance (NDSA) levels of digital preservation.

A safe digital repository

We use EMu for our digital asset management and collections management systems. Any multimedia uploaded to EMu is automatically given a checksum, which is stored in the database record for that asset. This means that if for any reason that file should change or deteriorate (which is unlikely, but the whole point of digital preservation is to have a mechanism to detect it if it happens), the new checksum won’t match the old one and so we can identify a changed file.

Due to the size of the repository, which is currently approaching 10TB, it would not be practical to do this manually, so we use a scheduled script to pass through each record and generate a new checksum to compare with the original. The trick here is to make sure that the whole repository gets scanned in time for the next backup period, because otherwise any missing or degraded files would become the backup and therefore obscure the original. We also need a working relationship with our IT providers and an agreed procedure to rescue any lost files if this happens.
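As an illustration (not our exact script), a fixity check in Node might recompute each file’s checksum like this:

//fixityCheck.js - illustrative sketch only
const crypto = require('crypto');
const fs = require('fs');

function fileChecksum(path) {
  return new Promise((resolve, reject) => {
    const hash = crypto.createHash('md5'); // or sha256, whichever the repository stores
    fs.createReadStream(path)
      .on('data', chunk => hash.update(chunk))
      .on('end', () => resolve(hash.digest('hex')))
      .on('error', reject);
  });
}

// record is assumed to hold the repository path and the checksum stored at ingest
async function verify(record) {
  const current = await fileChecksum(record.path);
  if (current !== record.checksum) {
    console.warn('Fixity failure - flag for restore from backup:', record.path);
  }
}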

With all this in place, we know that what goes in can come back out in the same state – so far so good. But what we can’t control is the constant change in technology for rendering files – how do we know that the files we are archiving now will be readable in the future? The answer is that we don’t, unless we can migrate from out-of-date file types to new ones. A quick analysis of all records tagged as ‘video’ shows the following diversity of file types:

(See the stats for images and audio here.) The majority are MPEG or AVI, but there is a tail of less common file types, and we’ll need to consider whether these should remain in their current format or be converted to a newer video format.

Our plan is to make gradual improvements in our documentation and systems in line with the NDSA to achieve level 2 by 2022:

 

The following dashboard gives an idea of where we are currently in terms of file types and the rate of growth:

Herding digital sheep

It’s all very well having digital preservation systems in place, but staff culture and working practices must also change and integrate with them.

The digitisation process can involve lots of stages and create many files

In theory, all digital assets should line up and enter the digital repository in an orderly and systematic manner. However, we all know that in practice things aren’t so straightforward.

Staff involved in digitisation and quality control need the freedom to work with files in the applications and hardware they are used to, without being hindered by rules and convoluted ingestion processes. They should be allowed to work in a messy (to outsiders) environment, at least until the assets are finalised. There are also many other environmental factors that affect working practices, including rights issues, time pressures from exhibition development, and the skills and tools available to get the job done. By layering on new limitations in the name of digital preservation we are at risk of designing a system that won’t be adopted, as illustrated in the following tweet by @steube:

So we’ll need to think carefully about how we implement any new procedures that may increase the workload of staff. Ideally, we’ll be able to reduce the time staff spend moving files around by using designated folders for multimedia ingestion – these would be visible to the digital repository and act as “dropbox” areas which automatically get scanned, with any files automatically uploaded and then deleted. For this process to work, we’ll need to name files carefully so that once uploaded they can be digitally associated with the corresponding catalogue records that are created as part of any inventory project. Having a 24-hour ingestion routine would solve many of the complaints we hear from staff about waiting for files to upload to the system.
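A very rough sketch of what such a nightly sweep could look like is below – ingestIntoEmu is just a placeholder for whatever upload mechanism the repository ends up exposing, not a real EMu call, and the folder path and file-naming convention are illustrative:

//dropbox-sweep.js - sketch only
const fs = require('fs');
const path = require('path');

const DROP_FOLDER = '/shares/multimedia-dropbox'; // illustrative path

// assumes files are named "<catalogue number>_<title>.tif" so the asset can be
// linked to its catalogue record after upload
function parseCatalogueNumber(fileName) {
  return fileName.split('_')[0];
}

async function ingestIntoEmu(filePath, catalogueNumber) {
  // placeholder for the real upload step
  throw new Error('not implemented: upload ' + filePath + ' for record ' + catalogueNumber);
}

async function sweep() {
  for (const name of fs.readdirSync(DROP_FOLDER)) {
    const file = path.join(DROP_FOLDER, name);
    try {
      await ingestIntoEmu(file, parseCatalogueNumber(name));
      fs.unlinkSync(file); // only delete once the upload has succeeded
    } catch (err) {
      console.error('Left in place for manual attention:', name, err.message);
    }
  }
}

sweep();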

 

Automation can help but will need a human element to clean up any anomalies

 

Digital services

Providing user-friendly, online services is a principle we strive for at Bristol Culture – and access to our digital repository for researchers, commercial companies and the public is something we need to address.

We want to be able to recreate the experience of browsing an old photo album using gallery technology. This interactive, featured in Empire Through the Lens at Bristol Museum, uses the Turn JS open source software to simulate page turning on a touchscreen.

Visitors to the search room at Bristol Archives have access to the online catalogue as well as knowledgeable staff to help them access the digital material. This system relies on having structured data in the catalogue and scripts which can extract the data and multimedia and package them up for the page-turning application.

But we receive enquiries and requests from people all over the world, in some cases from different time zones which makes communication difficult. We are planning to improve the online catalogue to allow better access to the digital repository, and to link this up to systems for requesting digital replicas. There are so many potential uses and users of the material that we’ll need to undertake user research into how we should best make it available and in what form.

 

Culture KPIs

There are various versions of a common saying that ‘if you don’t measure it you can’t manage it’. See Zak Mensah’s (Head of Transformation at Bristol Culture) tweet below. As we’ll explain below we’re doing a good job of collecting a significant amount of Key Performance Indicator data;  however, there remain areas of our service that don’t have KPIs and are not being ‘inspected’ (which usually means they’re not being celebrated). This blog is about our recent sprint to improve how we do KPI data collection and reporting.

The most public face of Bristol Culture is the five museums we run (including Bristol Museum & Art Gallery and M Shed), but the service is much more than its museums. Our teams include, among others: the arts and events team (who are responsible for the annual Harbour Festival as well as the Cultural Investment Programme, which funds over 100 local arts and cultural organisations in Bristol); Bristol Archives; the Modern Records Office; Bristol Film Office; and the Bristol Regional Environmental Recording Centre, which is responsible for wildlife and geological data for the region.

Like most organisations we have KPIs and other performance data that we need to collect every year in order to meet funding requirements e.g. the ACE NPO Annual Return. We also collect lots of performance data which goes beyond this, but we don’t necessarily have a joined up picture of how each team is performing and how we are performing as a whole service.

Why KPIs?

The first thing to say is that they’re not a cynical tool to catch out teams for poor performance. The operative word in KPI is ‘indicator’; the data should be a litmus test of overall performance. The second thing is that KPIs should not be viewed in a vacuum. They make sense only in a given context; typically comparing KPIs month by month, quarter by quarter, etc. to track growth or to look for patterns over time such as busy periods.

A great resource we’ve been using for a few years is the Service Manual produced by the Government Digital Service (GDS) https://www.gov.uk/service-manual. They provide really focused advice on performance data. Under the heading ‘what to measure’, the service manual specifies four mandatory metrics to understand how a service is performing:

  • cost per transaction – how much it costs … each time someone completes the task your service provides
  • user satisfaction – what percentage of users are satisfied with their experience of using your service
  • completion rate – what percentage of transactions users successfully complete
  • digital take-up – what percentage of users choose … digital services to complete their task

Added to this, the service manual advises that:

You must collect data for the 4 mandatory key performance indicators (KPIs), but you’ll also need your own KPIs to fully understand whether your service is working for users and communicate its performance to your organisation.

Up until this week we were collecting the data for the mandatory KPIs, but they have been somewhat buried in very large Excel spreadsheets or in different locations. For example, our satisfaction data lives on a SurveyMonkey dashboard. Of course, spreadsheets have their place, but to get more of our colleagues in the service taking an interest in our KPI data we need to present it in a way they can understand more intuitively. Again, not wanting to reinvent the wheel, we turned to the GDS to see what they were doing. The service dashboard they publish online has two headline KPI figures followed by a list of departments which you can click into to see KPIs at a department level.

Achieving a new KPI dashboard

As a general rule, we prefer to use open source and openly available tools to do our work, and this means not being locked into any single product. This also allows us to be more modular in our approach to data, giving us the ability to switch tools or upgrade various elements without affecting the whole system. When it comes to analysing data across platforms, the challenge is how to get the data from the point of data capture to the analysis and presentation tech – and when to automate vs doing manual data manipulations. Having spent the last year shifting away from using Excel as a data store and moving our main KPIs to an online database, we now have a system which can integrate with Google Sheets in various ways to extract and aggregate the raw data into meaningful metrics. Here’s a quick summary of the various integrations involved:

Data capture from staff using online forms: Staff across the service are required to log performance data, at their desks, and on the move via tablets over wifi. Our online performance data system provides customised data entry forms for specific figures such as exhibition visits. These forms also capture metadata around the figures such as who logged the figure and any comments about it – this is useful when we come to test and inspect any anomalies. We’ve also overcome the risk of saving raw data in spreadsheets, and the bottleneck often caused when two people need to log data at the same time on the same spreadsheet.

Data capture directly from visitors: A while back we moved to online, self-completed visitor surveys using SurveyMonkey, and these prompt visitors to rate their satisfaction. We wanted the daily % of satisfied feedback entries to make its way to our dashboard, and to be aggregated (both combined with data across sites and then condensed into a single representative figure). This proved subtly challenging and had the whole team scratching our heads at various points, thinking about whether an average of averages actually meant something, and furthermore how this could be filtered by a date range, if at all.
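A toy example shows why we were scratching our heads – an unweighted average of daily percentages can drift a long way from the figure you get by weighting each day (or site) by the number of responses. All the numbers below are made up:

//averageOfAverages.js - toy illustration only
const days = [
  { site: 'M Shed', responses: 10,  satisfied: 9   },  // 90% on a quiet day
  { site: 'BMAG',   responses: 200, satisfied: 120 }   // 60% on a busy day
];

// unweighted "average of averages": (90% + 60%) / 2 = 75%
const unweighted =
  days.reduce((sum, d) => sum + d.satisfied / d.responses, 0) / days.length;

// weighted by responses: 129 / 210 ≈ 61.4%, a fairer single figure
const weighted =
  days.reduce((s, d) => s + d.satisfied, 0) /
  days.reduce((s, d) => s + d.responses, 0);

console.log((unweighted * 100).toFixed(1) + '% vs ' + (weighted * 100).toFixed(1) + '%');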

Google Analytics:  Quietly ticking away in the background of all our websites.

Google sheets as a place to join and validate data: It is a piece of cake to suck up data from Google Sheets into Data Studio, provided it’s in the right format. We needed to use a few tricks to bring data into Google Sheets, however, including Zapier, Google Apps Script, and sheets Add-ons.

Zapier: gives us the power to integrate visitor satisfaction from SurveyMonkey into Google Sheets.

Google Apps Script: We use this to query the API on our data platform and then perform some extra calculations, such as working out conversion rates of exhibition visits vs museum visits. We also really like the record-macro feature, which we can use to automate any calculations after bringing in the data. Technically it is possible to push or pull data into Google Sheets – we opted for a pull because this gives us control via Google Sheets rather than waiting for a scheduled push from the data server.

Google Sheets formulae: We can join museum visits and exhibition visits in one sheet using the SUMIFS function, and then use this to work out a daily conversion rate. This can then be aggregated in Data Studio to get an overall conversion rate, filtered by date.
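To illustrate (the layout here is made up): suppose a ‘Raw data’ tab holds one row per logged figure, with the date in column A, the measure name in column B and the value in column C, and the summary sheet has one row per date in column A. The summary formulae would look something like:

=SUMIFS('Raw data'!C:C, 'Raw data'!A:A, $A2, 'Raw data'!B:B, "Museum visits")
=SUMIFS('Raw data'!C:C, 'Raw data'!A:A, $A2, 'Raw data'!B:B, "Exhibition visits")
=IFERROR(C2/B2, "")

The first two land in columns B and C of the summary row, and the third (in column D) gives that day’s conversion rate, which Data Studio can then aggregate and filter by date.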

Sheets Add-Ons: We found a nifty add-on for integrating sheets with Google Analytics. Whilst it’s fairly simple to connect Analytics to Data Studio, we wanted to combine the stats across our various websites, and so we needed a preliminary data ‘munging’ stage first.

Joining the dots…

1.) Zapier pushes the satisfaction score from SurveyMonkey to Sheets.

2.) A Google Sheets Add-On pulls Google Analytics data into Sheets, combining figures across many websites in one place.

3.) Online data forms save data directly to a web database (MongoDB).

4.) The performance platform displays raw and aggregated data to staff using ChartJS.

5.) Google Apps Script pulls in performance data to Google Sheets.

6.) Google Data Studio brings in data from Google Sheets, and provides both aggregation and calculated fields.

7.) The dashboard can be embedded back into other websites including our performance platform via an iframe.

8.) Good old Excel and some VBA programming can harness data from the performance platform.

Technologies involved in gathering and analysing performance data across museums.

Data Studio

We’ve been testing out Google Data Studio over the last few months to get a feel for how it might work for us. It’s definitely the cleanest way to visualise our KPIs, even if what’s going on behind the scenes isn’t quite as simple as it looks on the outside.

There are a number of integrations for Data Studio, including lots of third party ones, but so far we’ve found Google’s own Sheets and Analytics integrations cover us for everything we need. Within Data Studio you’re somewhat limited to what you can do in terms of manipulating or ‘munging’ the data (there’s been a lot of munging talk this week), and we’re finding the balance between how much we want Sheets to do and how much we want Data Studio to do.

At the beginning of the sprint we set about looking at Bristol Culture’s structure and listing five KPIs each for 1.) the service as a whole; 2.) the 3 ‘departments’ (Collections, Engagement and Transformation) and 3.) each team underneath them. We then listed what the data for each of the KPIs for each team would be. Our five KPIs are:

  • Take up
  • Revenue
  • Satisfaction
  • Cost per transaction
  • Conversion rate

Each team won’t necessarily have all five KPIs, but the data we already collect covers most of these for all teams.

Using this structure we can then create a Data Studio report for each team, department and the service as a whole. So far we’ve cracked the service-wide dashboard and have made a start on department and team-level dashboards, which *should* mean we can roll out in a more seamless way. Although those could be famous last words, couldn’t they?

Any questions, let us know.

 

 

Darren Roberts (User Researcher), Mark Pajak (Head of Digital) &  Fay Curtis (User Researcher)

 

 

 

Going digital with our Exhibition Scheduling Timeline

 

 

developing a digital timeline for scheduling exhibitions

BACKGROUND

Having a visual representation of upcoming exhibitions, works, and major events is important in the exhibition planning process. Rather than relying on spotting clashing dates in lists of data, having a horizontal timeline spread out visually allows for faster cross-checking and helps us collaboratively decide how to plan for exhibition installs and derigs.

 

Until recently we had a system that used Excel to plan out this timeline: by merging cells and colouring horizontally it was possible to manually construct a timeline. Apart from the pure joy that comes from printing anything from Excel, there were a number of limitations to this method.

  • When dates changed the whole thing needed to be rejigged
  • Everyone who received a printed copy at meetings stuck it to the wall, so date changes were hard to communicate
  • We needed to see the timeline over different scales – short term and long term – which meant using two separate Excel tabs, hence duplication of effort
  • We were unable to apply any permissions
  • The data was not interoperable with other systems

TIMELINE SOFTWARE (vis.js)

Thanks to Almende B.V. there is an open source timeline code library available at visjs.org/docs/timeline, so this offers a neat solution to the manual task of having to recast the timeline using some creative Excel skills each time. We already have a database of exhibition dates following our digital signage project, so this was the perfect opportunity to reuse this data, which should be the most up-to-date version of planned events as it is what we display to the public in our venues.
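A minimal sketch of what vis gives you is below – the group names, items and dates are made up purely for illustration:

//timeline-sketch.js - minimal vis.js example, illustrative names and dates
var groups = new vis.DataSet([
  { id: 'bmag-special', content: 'BMAG special exhibitions gallery' },
  { id: 'mshed-top',    content: 'M Shed top floor gallery' }
]);

var items = new vis.DataSet([
  { id: 1, group: 'mshed-top', content: 'Install',    start: '2018-02-19', end: '2018-02-28' },
  { id: 2, group: 'mshed-top', content: 'Exhibition', start: '2018-03-01', end: '2018-09-02' },
  { id: 3, group: 'mshed-top', content: 'Derig',      start: '2018-09-03', end: '2018-09-07' }
]);

// each gallery gets its own horizontal track; blocks can be dragged and edited
var timeline = new vis.Timeline(document.getElementById('timeline'), items, groups, {
  editable: true,
  zoomable: true
});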

IMPLEMENTATION

The digital timeline was implemented using MEAN stack technology and combines data feeds from a variety of sources. In addition to bringing in data for agreed exhibitions, we wanted a flexible way to add installations, derigs, and other notes, so a new database on the Node server combines these dates with the exhibitions data. We can assign permissions to different user groups using some open source authentication libraries, which means we can now release the timeline to staff not involved in exhibitions, but also let various teams add and edit their own specific timeline data.

The great thing about vis is the ease of manipulation of the timeline: users are able to zoom in and out, and move backwards and forwards in time, using mouse, arrow or touch/pinch gestures.

 

Zoomed out view for the bigger picture
Zoomed in for the detail…

EMU INTEGRATION

The management of information surrounding object conservation, loans and movements is fundamental to successful exhibition development and installation. As such we maintain a record of exhibition dates in EMu, our collections management software. The EMu events module is used to record when exhibitions take place, and also the object list where curators select and deselect objects for exhibition. Using the EMu API we are able to extract a structured list of exhibitions information for publishing to the digital timeline.

HOW OUR TIMELINE WORKS

Each gallery or public space has its own horizontal track where exhibitions are published as blocks. These are grouped into our 5 museums and archives buildings and can be selected/deselected on the timeline to cross-reference each. Once logged in, a user is able to manually add new blocks to the timeline; these are pre-set to “install”, “derig” and “provisional date”. Once a block is added, our exhibitions team are able to add notes that are accessible on clicking the block. It is also possible to reorder and adjust dates by clicking and dragging.

IMPACT

The timeline now means everyone has access to an up-to-date picture of upcoming exhibition installations, so no one is out of date. The timeline is on a public platform and is mobile accessible, so staff can access it on the move, in galleries or at home. Less time is spent on creative Excel manipulation and more on spotting errors. It has also made scheduling meetings more dynamic, allowing better cross-referencing and moving to different positions in time. An unexpected effect is that we are spotting more uses for the solution and are currently investigating using it for booking rooms and resources. There are some really neat things we can do, such as import a data feed from the timeline back into our MS Outlook calendars (“oooooh!”). The addition of thumbnail pictures used to advertise exhibitions has been a favourite feature among staff and really helps give an instant impression of current events, since it reinforces the exhibition branding which people are already familiar with.

ISSUES

It is far from perfect! Several iterations were needed to develop the drag and drop feature for adding events. Also, we are reaching diminishing returns in terms of performance – with more and more data available to plot, the web app is performing slowly and could do with further optimisation to improve speed. Additionally, due to our IT infrastructure, many staff use Internet Explorer, and whilst the timeline works OK, many features are broken in this browser without changes to compatibility and caching settings in IE.

WHAT’S NEXT

Hopefully optimisation will improve performance and then it is full steam ahead with developing our resource booking system using the same framework.

 

 

Update from the Bristol University development team:

Since October we have been working with Computer Science students from the University of Bristol to redesign the interface for our digital asset management system.

After initially outlining what we want from the new design, there have been frequent meetings and they’ve now reached a stage where they can happily share with us their project so far.
Quick, appealing and easy to use, this potential new interface looks very promising!

Introducing exhibition entry gates

Photo of a visitor entering the exhibition through the barrier

Image of Jake Mensah walking successfully through the barrier

This week we installed an entry gate system to our exhibition gallery at M Shed just in time for the opening of Children’s TV. Our “exhibition” gallery is located on the top floor, far away from the ground floor reception and not naturally easy to stumble across for the average wandering visitor. The project scope was to reduce the overall cost of an exhibition to the service and encourage as many visitors as possible to purchase tickets in advance. We’ll then test the success of the project against three of our key performance indicators – customer satisfaction, cost per transaction, and digital take-up.

Against each KPI we aim to:

Customer satisfaction – We don’t want people to experience a notable difference between our old approach of buying from a member of staff at the entrance and buying online or at a kiosk and then entering the exhibition via the gate. We expect teething issues around the “behaviour” of this new approach, but not from the technology itself, which should be robust. The outcome we need is few or no complaints within the first two weeks, or until we find solutions for the teething problems.

Reduce cost per transaction – a typical paid exhibition costs approximately £7,000 to staff the ticket station. By moving to a one-off fee (plus an annual service charge) we’ll save money within 12 months, and then in year two this will return a large saving for this function.

Increase digital take-up – until now it wasn’t possible to buy exhibition tickets online or using your mobile device at the museum. This is a feature that the new system enables so we’ll spend the next 18 months actively encouraging the public to buy a ticket “digitally” as part of our move to being digital by default. An additional benefit of using our website to buy tickets is that hopefully a percentage of these visitors will discover other services and events we offer. I also do wonder if we need to get a self-service kiosk to reduce the impact on the reception.

Setting up the entry gates

The third party supplier obviously manufactured and installed the gates but there was still lots for our team to deal with. We needed input from a whole gang of people. Our operations duo worked on ensuring we had the correct physical location, power, security and fire systems integration. Via collective feedback our visitor assistants provided various customer journeys and likely pinch points. Our digital team then helped with the installation and software integration for buying tickets. Design and marketing then helped with messaging. Throughout I was charged with overseeing the project and site visits with the supplier.

The major components of the project are:

  • Physical barriers – two stainless steel coated gates with a bunch of sensors and glass doors
  • Software for the barrier
  • Web service to purchase tickets
  • Onsite EPOS to sell and print tickets, currently located at main reception

Initial observations

I was onsite for the launch and saw the first 50 or so visitors use the entry gates. My initial observations were that the gates didn’t negatively slow or concern visitors, and having asked a number of them it wasn’t a big deal. However, an obvious pinch point is that the barcode scanner doesn’t always read the barcode, leaving the visitor struggling. My hunch at this point is that our paper tickets are too thin and bendy, which means the scanner fails to recognise the barcode. In the coming week we’ll need to investigate whether the barcode or the scanner is the primary cause and find a fix.

When multiple visitors arrive at the barrier there can be some confusion about how “one at a time” actually works. I’m hopeful that clear messaging will iron this out.

A slight issue was that we couldn’t take online payments due to a gateway issue which we’ll have fixed Monday.

Overall I’m very happy with the introduction of the gates and once we deal with the aforementioned teething issues it should be on to the next location for these gates. This is one of those projects that can only really be tested once they go live with real visitors, and the team did a fantastic job!

Google Drive for Publishing to Digital Signage

Having taken an agile development approach to our digital screen technology, it has been interesting as the various elements emerge based on our current needs. Lately there has been the need for quick ways to push posters and images to the screens for private events and one-off occasions.

Due to the complexity of the various modes, and the intricacies of events-based data and automatic scheduling, it has been difficult to incorporate these needs into the system. Our solution was to use Google Drive as a means to override the screens with temporary content. This means our staff can manage content for private events using tablets and mobile devices, and watch the updates push through in real time.

The pathway of routes now looks like this


HOW?

There are two main elements to the override process – firstly, we are using BackboneJS as the application framework because this provides a routing structure that controls the various signage modes. We added a new route at the beginning of the process to check for content added to Google Drive – if there is no content the signs follow their normal modes of operation.

Google Drive Integration

Google provide a nice way to publish web services, hidden amongst the script editor inside Google Sheets. We created a script that loops through a Drive directory and publishes a list of its contents as JSON – you can see the result of that script here. By making the directory public, any images we load into the Drive are picked up by the script. The screens then check the script for new content regularly. The good thing about this is that we can add content to specially named folders – if the folder names match either the venue or the specific machine name, all targeted screens will start showing that content.
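In outline, the idea is an Apps Script web app along these lines – the folder ID is a placeholder, and the image URL format shown is just one way of linking to a publicly shared Drive file:

//driveListing.gs - sketch of the folder-to-JSON idea
function doGet() {
  var folder = DriveApp.getFolderById('FOLDER_ID_HERE');
  var venues = folder.getFolders(); // sub-folders named after venues or machine names
  var output = {};

  while (venues.hasNext()) {
    var venue = venues.next();
    var files = venue.getFiles();
    output[venue.getName()] = [];
    while (files.hasNext()) {
      var file = files.next();
      output[venue.getName()].push({
        name: file.getName(),
        url: 'https://drive.google.com/uc?id=' + file.getId()
      });
    }
  }

  // published as a web app, so the signage can poll this URL for new content
  return ContentService.createTextOutput(JSON.stringify(output))
    .setMimeType(ContentService.MimeType.JSON);
}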

Google drive integration

It seems that this form of web hosting will be deprecated in Google Drive at the end of August 2016. But the links we are using to get the image might still work. If not we can find a workaround – possibly by listing urls to content hosted elsewhere in the Google sheet and looking that up.

The main benefit of this solution is being able to override the normal mode of operation using Google Drive on a mobile device. This even works with video – we added some more overrides so that poster mode doesn’t move on to the next slide until the video has finished – video brings in several issues when considering timings for digital signage. One problem with hosting via Google Drive is that files over 25MB don’t work, due to Google’s antivirus checking warning which prevents the files being released.

We’ll wait to see if this new functionality gets used – and if it is reliable after August 2016. In fact, this mode might be usable on its own to manage other screens around the various venues which until now were not updatable. If successful it will vastly reduce the need to run around with memory sticks before private events – and hopefully let us spend more time generating the wonderful content that the technology is designed to publish for our visitors.

You can download the latest release and try it for yourself here.

 

 

 

 

100 days of using Shopify POS tills

We’ve just passed the 100-day mark since the introduction of the Shopify till system in our retail shops. In case you don’t intend on reading the whole post, I’ll tell you now that we’re still using Shopify and I think it’s safe to say it is a success.

In this post I want to cover us going live, what features we use at the moment and what our next steps are.

Choosing Shopify

Our previous system was never properly set up and as a team we didn’t take advantage of its potential. I could have stuck with it, but I saw this as an opportunity to explore using the latest shopping cart technology from the web. I’m a big fan of popular tools that I’ve seen ‘scale’ regardless of the sector. I had heard about lots of arts/museum sector-specific approaches which, quite frankly, scare me. As a sector we aren’t really all that ‘special’ when it comes to doing normal things like running a shop. So instead of looking at any of these potentially risky solutions, where the market is small and we can get tied to one small supplier, I went straight to looking at what local shops and market stalls were using, as I’m treating our retail as a small business – so what better place to look. All of these were using web services via tablet or phone. Having attended a Shopify workshop back in June 2014 run by Keir Whitaker, I felt that it had what the other systems had to offer, so why not use it – no long spec document, just a nose for good software and services.

Fast forward to launch

After an initial alpha use of Shopify using the free trial (tip: use the 7-day trial as your alpha test so you have no money to front), I felt happy to use Shopify with the public. Our fallback was to keep the old system plugged in, and as we use a separate card reader, we could easily take orders manually with that and a calculator if we really got jammed up.

We decided to launch in early May at Bristol Museum & Art Gallery. We decided to do one shop first and then, if all went well, go live at M Shed, followed by the tills used by the exhibition team.

As Shopify is pretty user friendly, we showed Helen how to add products, how to make a custom sale and how to cash up at the end of the day in less than 20 minutes. It turns out that Helen had never used an iPad before, let alone Shopify. But within minutes Helen was comfortable enough to plough on, with only a little arm twisting from me.

Rather than add hundreds of products to the inventory, we decided to use the ‘custom sale’ option on the first day and then add any purchased products to the inventory retrospectively. As a word of advice, I think this approach makes the most sense instead of committing many hours to adding products to the Shopify inventory which you may or may not run with. Instead, add as you go.

On the first day I made sure that both Zahid and I were available. I spent the first ‘live’ hour down in the shop. Within an hour it was clear that I wasn’t needed. By the end of the first day Helen and Zahid knew way more than I did – in this type of case I’m glad to be made redundant!

After two days Helen asked us to remove the old system as she was very happy with how things were progressing. We have a small retail team of four part-time staff and a small bank of casual staff. Within 2-3 weeks I was getting staff thanking me for introducing the new system. In my previous two years I’ve never had such positive feedback. After the third week we replaced our M Shed till too. In week six we also used it for our third till, which is used to buy tickets for our exhibition (William Hogarth: Painter and Printmaker).

Helpful documentation and support

One of the things I love about modern day web services is that they normally offer good documentation, and Shopify is no different. This not only helps us to learn how best to use the service but saves any of us having to write lengthy support documentation. I’ve since used their live online chat a few times when I’ve got stuck, and it’s 24/7 – a service I’m sure many of the museum POS vendors can only dream of offering. You can ring, live chat, email or use the forums, all of which help staff when none of us digital types are around, which is the way it should be.

Mobile app for the win

I have a great retail team led by Helen Lewis. In theory I just need to know our current financial position. The mobile app lets me see live sales income for the last 90 days. This alone is a leap forward for POS, and I get ‘POS envy’ from all other retail managers whenever I show them. Furthermore, I can see what level the product inventory is at any time, and I can scan barcodes to make sales if I really wanted to. I’m currently keeping an eye on our Hogarth mugs, scarves, a book by Louise Brown and drinks – all new products that I like to track.

Reporting sales

To keep the cost down there are a number of features which don’t come as standard, and reporting is one of them. We have paid for the reporting features, which we mostly use for splitting VAT/non-VAT and exhibition tickets at this point. In the next few months we will really get our heads around what reports we want.

A few problems and issues

We’re very happy so far with our Shopify service, but there have been a few teething problems worth mentioning.

We hit our first major technical snag – till drawer says No!

Our exhibition till has a unique challenge compared to our retail shops. We have over 50 visitor assistants, and on any day of the exhibition any one or more of them may be on the till. This poses a few challenges, mainly around processes and training. Some people had no problems but others really struggled with the idea of an iPad as a till. Nothing too bad. But then it happened. I got a call to say that the exhibition till wasn’t working. I went down and sure enough it was working – false alarm, right? Another call 30 minutes later. This time I could see the problem. Although we could use the Shopify app, the till cash drawer refused to open. Turn it off, turn it back on. Boom. Fixed… or so I thought. This kept happening, time after time. It turns out that although Shopify will run perfectly happily offline, the cash drawer NEEDS wifi to be triggered to open. A major problem that made lots of visitor assistants quite reluctant to use the till. The problem only occurred on one of the four tills. Zahid tracked the issue down to the router. Apparently there is a known issue with some routers – despite it working fine with the same router elsewhere. Zahid swapped out the router and the problem hasn’t come back. Luckily for us we could use the shop as a fallback till, but this wasn’t the best customer service period.

Costs of goods isn’t standard

By default there isn’t a feature to include the ‘cost of goods’ (COGS), which is essential for knowing the price you paid against the retail price, and thus your profit margin. How did I miss that in the alpha! Luckily, one of the reasons I chose Shopify was its adaptability. Shopify has a useful feature that allows them or third parties to make apps for beefing up the default service. One of these, deepmine, looks like it has COGS, so we’ll be trialling it very soon.

Not many hints and tricks yet

I haven’t found much information about using Shopify POS as it is still quite new. This means it hasn’t been super fast to find answers to some of our issues. One of the reasons I’m writing this is to increase that information pool. Oh, and there is no public roadmap for what’s coming, so follow the blog to stay in the loop.

What’s coming next

Now that we’re comfortable with Shopify we’re starting to turn our attention to the next phase of work.

  • Trial deepmine app to get COGs and deep reporting
  • Setup better custom reports to help staff
  • Offer group workshops on basic training and reporting
  • Add inventory levels to all products
  • Add photos to all products
  • Explore email upsell and sales offers

Get in touch

I’ve had several chats with other museums who spotted my last blog post, asking about Shopify. Please do get in touch by phone on 0117 922 3571 or zak.mensah@bristol.gov.uk if you want me to help you with anything around our use of Shopify. We’ll also be happy to be paid consultants to set up your service if you need a proper hand.

Anatomy of our Digital Signage Web App

At this stage in the development of our digital signage, we have a working release of the software in the live environment, and we are focussing on training, improvements to the design and data structure for the next version. This post is about the nuts and bolts of how the client-side app works, while it is still fresh.

Mode Schematic

Firstly, it is a single page web application – loaded by calling index.html from a web browser. Inside index.html are just the basics you’d expect. The magic is all controlled via a master JavaScript library called require.js, which is used to pull together all of the source code in the right order and make sure files don’t get loaded twice, etc. All of the content of the app is loaded and removed via a single content div in the body.

index.html 
(... some bits removed...check the GitHub page for the whole lot)


<html>
  <head><title>BMGA Digital Signage</title>
     <link rel="stylesheet" href="css/styles.css"> 
     <script data-main="js/main" src="js/libs/require/require.js"></script>    
  </head>
  <body class="nocursor">
   <div id="mainContent"> </div>
  </body>
</html>

The first JavaScript to load up is main.js. This simple file follows the RequireJS format, and is used to alias some of the code libraries which will get used the most, such as jQuery.

//main.js 

require.config({

 paths:{
     jquery:'libs/jquery/jquery-min',
     underscore:'libs/underscore/underscore-min',
     backbone:'libs/backbone/backbone-min', 
     templates: '../templates'
 }
 })

require([

"app"], function(App) {
App.initialize();
});

Next up is app.js. This loads up the code libraries required to start the app, and brings in our first global function – used to close each ‘view’. For a single page app it is really important to destroy any lingering event handlers and other bits which can take up memory and cause the app to go a bit crazy – something that Backbone apps have difficulties with, otherwise known as Zombie Views. Killing Zombies is important.

//app.js
define([
 'jquery', 
 'underscore', 
 'backbone',
 'router'

], function($, _, Backbone, Router){
var initialize = function(){

 
  Backbone.View.prototype.close = function () { //KILL ZOMBIE VIEWS!!!!
      this.undelegateEvents();
      this.$el.empty();
      this.unbind();
  };
 

   Router.initialize();
 };

 return { 
     initialize: initialize
 };
});

It gets a bit more fun next as we call the Backbone ‘router’ – and from now on I’ll only add snippets from the files; to see the lot, head to GitHub. The router is what drives navigation through each of the modes that the screens can display. Each route takes its parameters from the URL, which means we can control the modes by appending the text ‘sponsors’, ‘posters’ or ‘events’ to index.html in the browser.

In addition to the mode we can pass in parameters – which poster to display, which page of sponsors, which venue etc. This was a solution to the problem of how to remember which posters have not yet been shown. If you only wish the poster mode to last 40 seconds, but you’ve got lots of posters – you need to remember which posters come next in the sequence. Additionally as you loop through modes, you need to pass along each parameter until you are back on poster mode. This is why every route has all the parameters for venue and poster.

This slightly convoluted situation has arisen because we are using a page refresh to flip between modes, so without relying on local storage our variables only last as long as the page does.

//router.js 

 var AppRouter = Backbone.Router.extend({
 routes: { 
 'sponsors(/venue:venue)(/stick:stick)(/logo:page)(/poster:page)(/machine:machine)': 'sponsors', 
 'posters(/venue:venue)(/stick:stick)(/logo:page)(/poster:page)(/machine:machine)': 'posters', 
 'events(/venue:venue)(/stick:stick)(/logo:page)(/poster:page)(/machine:machine)(/date:all)':'events',

 }
 });

The code for a single route looks a bit like this and works as follows. We start off with an option to stick or move – this allows us to have a screen stay on a particular mode. Then we look at our settings.JSON file, which contains the machine-specific settings for all of the signs across each venue. The machine name is the only setting held locally on the system, and this is used to let each machine find its node of settings (loop times, etc.).
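For illustration, settings.JSON is shaped something like this, keyed by machine name – the machine names and timing values are made up, but the field names are the ones used in the snippets here:

settings.JSON (illustrative)

{
  "default": {
    "location": "BMAG",
    "eventTypes": "exhibition,poster",
    "posterMode_time": 40,
    "posterLoop_time": 8,
    "orientationSpecific": 2
  },
  "mshed-foyer-01": {
    "location": "M SHED",
    "eventTypes": "poster",
    "posterMode_time": 60,
    "posterLoop_time": 10,
    "orientationSpecific": 1
  }
}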

...
 app_router.on('route:posters', function(venue,stick,logoOffset,posterOffset,machine){
 
 
 var stick = stick || "move"
 var logoOffset=logoOffset||0
 var posterOffset=posterOffset||0;
 
 machineName=machine||'default'
 Allsettings=(JSON.parse(Settings))
 settings=Allsettings.machineName
 settings=('Allsettings',Allsettings[machineName])
 
 
 var venue = settings.location;
 
 if(Globals.curentView){
 Globals.curentView.close()
 }
 
 var venue = venue || "ALL"
 self.venue=venue
 
 var posterView = new PosterView({venue:self.venue,stick: stick,logoOffset:logoOffset,posterOffset:posterOffset,machine:machine,settings:settings,type: settings.eventTypes});
 
 posterView.addPostersFromLocaLFile();
 Globals.curentView=posterView
 
 

 }),
....

With all settings loaded, and filtered by machine name and the mode specified, we are ready to load up the view. This contains all of the application logic for a particular mode, brings in the HTML templates for displaying the content, and performs the data fetches and other database functions needed to display current events/posters… more on that in a bit.

Amongst the code here are some functions used to check which orientation the image supplied is, and then cross reference that with the screen dimensions, and then check if that particular machine is ‘allowed’ to display mismatched content. Some are and some aren’t, it kinda depends. When we push a landscape poster to a portrait screen, we have lots of dead space. A4 looks OK on both but anything squished looks silly. So in the dead space we can display a strapline, which is nice, until there is only a tiny bit of dead space. Oh yep, there is some code to make the font smaller for a bit if there is just enough for a caption..etc.   ….turns out poster mode wasn’t that easy after all!

//view.js
 
define([
 'jquery',
 'underscore',
 'backbone',
 'text!templates/posters/posterFullScreenTemplate_1080x1920.html',
 'text!templates/posters/posterFullScreenTemplate_1920x1080.html',
 'collections/posters/PostersCollection',
 'helpers/Globals',
], function($, _, Backbone, posterFullScreenTemplate ,posterFullScreenTemplateLandscape,PostersCollection,Globals){

 var PosterView = Backbone.View.extend({
 
 el: $("#eventsList"),
 
  addPostersFromLocaLFile: function(){ 
 
 var self = this;
 self.PostersCollection = new PostersCollection({parse:true}) 
 self.PostersCollection.fetch({ success : function(data){
 self.PostersCollection.reset(data.models[0].get('posters'))
 self.PostersCollection=(self.PostersCollection.byEventType(self.settings.eventTypes));
 self.PostersCollection=(self.PostersCollection.venueFilter(self.venue));
 self.renderPosters(self.PostersCollection)
 
 $( document ).ready(function() {
 
 setInterval(function(){ 
 
 self.renderPosters(self.PostersCollection)
 if(self.stick=="move"){ 
 setTimeout(function() { 
 self.goToNextView(self.posterOffset)
 }, settings.posterMode_time * 1000);
 }
 }, settings.posterLoop_time * 1000);
 })
 
 }, dataType: "json" });
 
 },
 
 renderPosters: function (response) { 

 if( self.posterOffset>= response.models.length){self.posterOffset=0}
 
 var width = (response.models[self.posterOffset].get('width'))
 var height = (response.models[self.posterOffset].get('height'))
 LANDSCAPE=(parseInt(width)>=parseInt(height))
 ImageProportion = width/height 
 
 if(self.orientationSpecific==1){
 
 //enforced orientation lock (portrait) - reconstructed here, see the GitHub page for the full function
 while(LANDSCAPE==true ){ 
 
 if( self.posterOffset>= response.models.length){self.posterOffset=0}
 
 var width = (response.models[self.posterOffset].get('width'))
 var height = (response.models[self.posterOffset].get('height'))
 LANDSCAPE=(parseInt(width)>=parseInt(height))
 if(LANDSCAPE==false){break;}
 self.posterOffset++ 
 }
 }
 
 if(self.orientationSpecific==2){
 
 //enforced orientation lock (landscape)
 while(LANDSCAPE==false ){ 
 
 if( self.posterOffset>= response.models.length){self.posterOffset=0}
 
 var width = (response.models[self.posterOffset].get('width'))
 var height = (response.models[self.posterOffset].get('height'))
 LANDSCAPE=(parseInt(width)>=parseInt(height))
 if(LANDSCAPE==true){break;}
 self.posterOffset++ 
 }
 }
 
 ImageProportion = width/height 
 if(ImageProportion<=0.7){miniFont='miniFont'}
 if(ImageProportion<=0.6){miniFont='microFont'}
 if(ImageProportion<=0.5){miniFont='hideFont'}
 if(ImageProportion>=1.4){miniFont='hideFont'}
 console.log('ImageProportion'+ImageProportion) 
 self.$el.html(self.PostertemplateLandscape({poster: response.models[self.posterOffset],displayCaption:displayCaption,miniFont:miniFont},offset=self.posterOffset,TemplateVarialbes=Globals.Globals)); 
 

 ....


return PosterView;
 
});

Referenced by the view is the file which acts as a database would, called the collection, and there is a collection for each data type. The poster collection looks like this, and its main function is to point at a data source – in this case a local file – and then allow us to perform operations on that data. We want to be able to filter on venue, and also on event type (each machine can be set to filter on different event types), and so below you see the functions which do this… and they cater for various misspellings of our venues, just in case 🙂

//postercollection.js 

define([
 'underscore',
 'backbone',
 'models/poster/posterModel'
], function(_, Backbone, SponsorModel){

 var PosterCollection = Backbone.Collection.extend({
 
 sort_key: 'startTime', // default sort key
 

 url : function() {
 var EventsAPI = 'data/posters.JSON'; 
 return EventsAPI
 },
 
 byEventType: function(typex) { 
 typex=typex.toUpperCase()
 filteredx = this.filter(function(box) {
 
 var venuetoTest = box.get("type")
 
 if( box.get("type")){
 venuetoTest = (box.get("type").toUpperCase())}
 
 
 return typex.indexOf(venuetoTest) !== -1;
 }); 
 return new PosterCollection(filteredx);
 },
 
 

 venueFilter: function(venue) { 

 if(venue.toUpperCase()=="M SHED"){venue = "M SHED"}
 if(venue.toUpperCase()=="BMAG"){venue = "BRISTOL MUSEUM AND ART GALLERY"}
 if(venue.toUpperCase()=="MSHED"){venue = "M SHED"}
 filteredx = this.filter(function(box) {
 var venuetoTest = box.get("venue")
 
 if( box.get("venue")){
 venuetoTest = (box.get("venue").toUpperCase())}
 
 return venuetoTest==venue ||box.get("venue")==null
 }); 
 return new PosterCollection(filteredx);
 
 },
 
 parse : function(data) { 
 return data 
 }

 
 });

 return PosterCollection;

});

Referenced by the collection is the model – this is where we define the data that each poster record will need. One thing to watch here is that the field names match exactly those in the data source. When Backbone loads in data from a JSON file or API, it looks for these field names in the source data and loads up the records accordingly (models, in Backbone speak). So once the source data is read, we populate our poster collection with models, and each model contains the data for a single poster.

//postermodel.js


 define([
 'underscore',
 'backbone'
], function(_, Backbone) {

 PosterModel = Backbone.Model.extend({

 defaults: {
 
 category: 'exhibition',
 irn: '123456' ,
 startDate: '01/01/2015' ,
 endDate: '01/01/2015' ,
 venue: 'MSHED' ,
 caption: 'caption' ,
 strapline: 'strapline' ,
 copyright: '© Bristol Museums Galleries and Archives' 
 

 },
 initialize: function(){
 //alert("Welcome to this world");
 },
 adopt: function( newChildsName ){
 // this.set({ child: newChildsName });
 }
 })

 return PosterModel;

});

With the collection loaded with data, and all the necessary venue and event filters applied, it is time to present the content – this is where the templates come in. A template is an HTML file, with a difference. The poster template contains the markup and styling needed to fill the screen, and uses the underscore library to insert text and images into the design.

/*posterFullScreenTemplate_1080x1920.html */

<style>

body{
    background-color:black;
    color: #BDBDBD;
}
  
#caption{
    position: relative;
    margin-top: 40px;
    width:100%;
   z-index:1;
  /*padding-left: 20px;*/
}

.captionText{
    font-weight: bold;
    font-size: 51.5px;
    line-height: 65px;
}

.miniFont{
   font-size:35px !important;
   line-height:1 !important;
}

...

</style>


<div id="sponsorCylcer"> 
 <% 
 var imageError= TemplateVarialbes.ImageRedirectURL+ poster.get('irn') + TemplateVarialbes.ImageSizePrefix
 %>
 <div id="poster_1" class="">
 <img onError="this.onerror=null;this.src='<% print(imageError) %>';" src="images/<%= poster.get('irn') %>.jpg" />
 <div id="imageCaption"> <%= poster.get('caption') %><br> <%= poster.get('copyright') %></div>
 </div>
 


 <% if (poster.get('type').indexOf("poster") !== -1 && displayCaption==true){ %>
 <div id="datesAndInfo">
 <h1>from <%= poster.get('startDate') %> till <%= poster.get('endDate') %></h1>
 </div>

 <%} else{ 
 if ( displayCaption==true){ 

 %>
 <div id="caption">
 <div class="captionText <% if( miniFont!=false){print(miniFont)} %>" > <%= poster.get('strapline').replace(/(?:\r\n|\r|\n)/g, '<br />') %> </div>
 <%} } %>
 </div>
</div>
 



Once the template is loaded, the poster displays, and that’s pretty much job done for that particular mode – except that we want posters to be displayed on a loop, so the view reloads the template every x seconds, depending on what has been set for that machine using the digital signage administration panel. A master timer controls how long the poster loop has been running for and moves to the next mode after that time. Additionally, a counter keeps note of the number of posters displayed and passes that number across to the next mode, so when poster mode comes back round, the next poster in the sequence is loaded.

Remarks

Folder structure

Using the Require + Backbone framework for the application has kept things tidy throughout the project and has meant that extending new modes and adding database fields is as hassle-free as possible. It is easy to navigate to the exact file to make the changes – which is pretty important once the app gets beyond a certain size. Another good thing is that bugs in one mode don’t break the app, and if there is no content for a mode the app flips to the next without complaining – this is important in the live environment where there are no keyboards in easy reach to ‘OK’ any error messages.

 

 

Furthermore, the app is robust – we have it running on Ubuntu, Windows 7 [in Chinese], and a Raspberry Pi, and it hasn’t crashed so far. Actually, if it does its job right, the application architecture won’t get noticed at all (which is why I am writing this blog) – and the content will shine through… one reason I have avoided any scrolling text or animations so far – posters look great just as they are, filling the screen.

Now that our content editors are getting to grips with the system, we are starting to gather consensus about which modes should be prominent, and in which places – after all, if you have different modes, not every visitor will see the same content – so is there any point in different modes? Let the testing commence!

 

Acknowledgements

Thanks to Thomas Davis for the helpful info at backbonetutorials.com and Andrew Henderson for help Killing Zombies.

 

 

 

Using Shopify to run an affordable museum shop till system (POS)

Photo of Shopify till - iPad, till and printer first use

Across the service we typically take payments for our two major retail shops and ‘paid for’ exhibitions at Bristol Museum & Art Gallery and M Shed. To date we have never set the tills up to give us useful reporting beyond “groups of products”, e.g. ‘books’ or ‘cards’, which is simply not good enough [no shots]. We need useful data to help us understand our business and improve our service. GDS refer to ‘designing with data‘ in their design principles and I see no reason not to do the same across the museums, especially with trading and IT retail systems.

During 2015-16 we will design our retail offer based on good usable data about our visitors, product ranges and sector trends.

Introducing Shopify Point of Sale (POS)

In the not too distant past I used to do freelance web projects and Shopify would regularly appear on my radar. It is an affordable (from $9 a month) web shop that recently introduced the ability to run as a till system, called Shopify POS. Due to its popularity with web folk I trust, our desire to get a move on, and its feature-set-to-cost ratio, I figure we have nothing to lose by trying it out – we have no historic data either, so anything is better than our current position.

Also, we’re an Arts Council England lead for digital so what better problem to solve than affordable till systems to kick off our 2015-18 partnership?

We will use Shopify POS to:

  • Take cash and card payments
  • Manage our products and stock level
  • Provide both retail and service management with regular performance reports
  • Act as a minimum viable service to help plan for the future
  • Dip our toe in the water with an online shop offer (both POS and web shop are interrelated, making it easy to do)

Getting started

I made a “management” decision to switch POS, so this is an enforced project for the retail team who, having understood my reasoning, are behind the project. I have said that we have nothing to lose, but this may not work and I’ll hold my hands up if we fail. We had a call with the Shopify team and knew we needed some new kit:

  • Two paid instances of Shopify POS – one for each retail shop. I am disappointed there is no way to have multiple shops from one account, even if it was a discounted upgrade. This will enable us to report accurately on each shop as its own business
  • iPad Air 2 with Shopify (use the 7-day trial first) with retail add-on and reporting ($59 per month)
  • Bluetooth barcode reader, till drawer and receipt printer from a UK reseller of POS hardware for approx. £250 ex VAT (turns out you can use any drawer though, as they are standard)
  • Reuse existing card reader (approx £20 per month)
  • iPad secure stand
  • Router to avoid public wifi and maintain security – fitted by IT services

First steps

  1. Test a proof of concept – Zahid and Tom did a stand-up job of getting the system to play nice with our infrastructure, and I can’t thank them enough, as this proved to be a pain for an unknown reason on our network.
  2. Once we had our test ‘alpha’ system working, we confirmed that IT were happy for us to proceed. They generally like projects that they don’t have to get involved in too much! As we’re using the existing corporate contract for our card payments, which never touch Shopify, there isn’t a security risk at that point AND it doesn’t touch our finance system. Essentially Shopify is “off” the network and at worst we expose our reporting and products – secure passwords for staff is the biggest challenge!
  3. Add our MANY products. Our retail and admin team are working on this at the moment
  4. ‘Beta’ test over the week of 27th April alongside the existing system with our retail manager Helen, who is critical to the success of the project
  5. Show the retail team how to use the system and get their feedback – after all, they need to use and champion the project and service

Next steps

Assuming staff are happy and we’re getting the data we need, I plan to put the service into ‘live’ starting 1st May so we can get 11 months of usable data. We’ll be sharing our progress on the blog. PLEASE get in touch if you have anything to help us make a better service or have any questions.

A full shop till system for less than £1000 a year… let’s see!