Hi! My name is Cameron Hill and I am currently working as a Digital Apprentice in the Bristol City Council Culture Team, where I’ll mainly be based at Bristol Museum, helping out with all things digital.
Before joining Bristol City Council, I studied Creative Media at SGS College for two years, as well as at school for GCSE. A huge interest of mine is social media. Whilst at college I worked with a friend, a fashion student who sold her own creations, to help her build more of a brand for herself. After she came up with the name, I created an Instagram page for the brand and started creating various types of content. Instagram stories were a great way to interact with followers: using features such as Q&A and polls, it was easy to see what customers liked. We also used stories to show the ‘behind the scenes’ – from picking the fabric, to making the item itself, to packing it up to be shipped.
As I am writing this it is my first day and so far it has been a lot to take in. One of my first tasks was to upload an image to a folder linked to the various screens around the museum.
Technology can be temperamental, but the first issue we came across was still unexpected….
I was asked to take an image on my iPhone to upload into the folder, but without my realising it the camera had ‘Live Photos’ turned on, meaning every picture taken would also create a small video clip. After waiting five minutes or so for the image to appear, we realised it had been saved in the High Efficiency Image File Format (HEIC). Not knowing what HEIC was, I did what anyone in the twenty-first century would do and took to Google.
After a little research, I came across an article in the technology magazine The Verge stating that this format, which Apple added in iOS 11, would be a problem for PC users. From reading various articles online it is clear that a lot of people have struggled to view and edit their files after transferring them to a PC. I am really looking forward to my future here as part of the Digital Team.
In Team Digital we like to make things look easy, and in doing so we hope to make life easier for people. A recent challenge has been how to recreate the ‘Top sales by product’ analysis from the Shopify web application in Google Sheets, so we can see how the top 10 selling products compare month by month. Creating a monthly breakdown of product sales had, until now, been a manual task: choosing from a date picker, exporting data, copying to Google Sheets, and so on.
Having already had some success pushing and pulling data to Google Sheets using Google Apps Script and our Culture Data platform, we decided to automate the process. The goal was to make getting the sales analysis into Google Sheets as easy as possible for the user – all they should need to do is select the month they wish to import.
We have developed a set of scripts for extracting data using the Shopify API, but needed to decide how to get the data into Google Sheets. Whilst there is a library for pushing data from a Node application into a worksheet, our trials found it to be slow and prone to issues when the sheet did not have enough rows, among other unforeseen circumstances. Instead, we performed our monthly analysis on the Node server and saved it to a local database. We then built an API for that database that could be queried by shop and by month.
The next step, using Google Apps Script, was to query the API, pull in a month’s worth of data and save it to a new sheet named after the month. This could then be added as a macro so that it was accessible from the toolbar – a familiar place for the user – at their command.
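As a flavour of the Apps Script side, here is a minimal sketch of that pull-and-save step. The endpoint URL, field names and menu wiring are illustrative placeholders rather than our production code:

    // Pull one month of pre-aggregated sales from the analysis API and write it to a new sheet.
    function importMonth(monthName) {
      // Hypothetical endpoint on our analysis API – replace with the real URL and auth.
      var url = 'https://culture-data.example/api/sales?month=' + encodeURIComponent(monthName);
      var rows = JSON.parse(UrlFetchApp.fetch(url).getContentText()); // e.g. [{product: '…', sold: 12, value: 120.0}, …]

      var ss = SpreadsheetApp.getActiveSpreadsheet();
      var sheet = ss.getSheetByName(monthName) || ss.insertSheet(monthName);
      sheet.clearContents();
      sheet.appendRow(['product', 'sold', 'value']);
      rows.forEach(function (r) {
        sheet.appendRow([r.product, r.sold, r.value]);
      });
    }

    function importLastMonth() {
      var d = new Date();
      d.setMonth(d.getMonth() - 1);
      importMonth(Utilities.formatDate(d, 'GMT', 'MMMM yyyy'));
    }

    // Puts the import within easy reach in the toolbar.
    function onOpen() {
      SpreadsheetApp.getUi()
        .createMenu('Sales reports')
        .addItem('Import last month', 'importLastMonth')
        .addToUi();
    }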
As the data is required on a monthly basis, we need to schedule the server-side analysis to save a new batch of data after each month – something we can easily achieve with a cron job. The diagram below shows roughly how the prototype works from the server side and the Google Sheets side. Interestingly, the figures don’t completely match up with the in-application analysis by Shopify, so we have some error checking to do; however, we now have the power to enhance the default analysis with our own calculations, for example incorporating the cost of goods into the equation to work out the overall profitability of each product line.
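For anyone unfamiliar with cron, the schedule is just one line in a crontab – the script path and log location below are placeholders:

    # Run the monthly sales analysis at 02:00 on the 1st of every month and log the output
    0 2 1 * * node /srv/culture-data/monthly-sales-analysis.js >> /var/log/sales-analysis.log 2>&1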
There are various versions of a common saying that ‘if you don’t measure it you can’t manage it’. See Zak Mensah’s (Head of Transformation at Bristol Culture) tweet below. As we’ll explain below we’re doing a good job of collecting a significant amount of Key Performance Indicator data; however, there remain areas of our service that don’t have KPIs and are not being ‘inspected’ (which usually means they’re not being celebrated). This blog is about our recent sprint to improve how we do KPI data collection and reporting.
The most public face of Bristol Culture is the five museums we run (including Bristol Museum & Art Gallery and M Shed), but the service is much more than its museums. Our teams include, among others: the arts and events team (who are responsible for the annual Harbour Festival as well as the Cultural Investment Programme, which funds over 100 local arts and cultural organisations in Bristol); Bristol Archives; the Modern Records Office; Bristol Film Office; and the Bristol Regional Environmental Recording Centre, which is responsible for wildlife and geological data for the region.
Like most organisations we have KPIs and other performance data that we need to collect every year in order to meet funding requirements e.g. the ACE NPO Annual Return. We also collect lots of performance data which goes beyond this, but we don’t necessarily have a joined up picture of how each team is performing and how we are performing as a whole service.
The first thing to say about KPIs is that they’re not a cynical tool to catch out teams for poor performance. The operative word in KPI is ‘indicator’; the data should be a litmus test of overall performance. The second thing is that KPIs should not be viewed in a vacuum. They make sense only in a given context; typically this means comparing KPIs month by month, quarter by quarter, etc. to track growth or to look for patterns over time, such as busy periods.
A great resource we’ve been using for a few years is the Service Manual produced by the Government Digital Service (GDS) https://www.gov.uk/service-manual. They provide really focused advice on performance data. Under the heading ‘what to measure’, the service manual specifies four mandatory metrics to understand how a service is performing:
cost per transaction – how much it costs … each time someone completes the task your service provides
user satisfaction – what percentage of users are satisfied with their experience of using your service
completion rate – what percentage of transactions users successfully complete
digital take-up – what percentage of users choose … digital services to complete their task
Added to this, the service manual advises that:
You must collect data for the 4 mandatory key performance indicators (KPIs), but you’ll also need your own KPIs to fully understand whether your service is working for users and communicate its performance to your organisation.
Up until this week we were collecting the data for the mandatory KPIs, but it was somewhat buried in very large Excel spreadsheets or scattered across different locations; our satisfaction data, for example, lives on a SurveyMonkey dashboard. Of course, spreadsheets have their place, but to get more of our colleagues in the service taking an interest in our KPI data we need to present it in a way they can understand more intuitively. Again, not wanting to reinvent the wheel, we turned to the GDS to see what they were doing. The service dashboard they publish online has two headline KPI figures, followed by a list of departments which you can click into to see KPIs at a department level.
Achieving a new KPI dashboard
As a general rule, we prefer to use open source and openly available tools to do our work, and this means not being locked into any single product. This also allows us to be more modular in our approach to data, giving us the ability to switch tools or upgrade various elements without affecting the whole system. When it comes to analysing data across platforms, the challenge is how to get the data from the point of data capture to the analysis and presentation tech – and when to automate vs doing manual data manipulations. Having spent the last year shifting away from using Excel as a data store and moving our main KPIs to an online database, we now have a system which can integrate with Google Sheets in various ways to extract and aggregate the raw data into meaningful metrics. Here’s a quick summary of the various integrations involved:
Data capture from staff using online forms: Staff across the service are required to log performance data, at their desks, and on the move via tablets over wifi. Our online performance data system provides customised data entry forms for specific figures such as exhibition visits. These forms also capture metadata around the figures such as who logged the figure and any comments about it – this is useful when we come to test and inspect any anomalies. We’ve also overcome the risk of saving raw data in spreadsheets, and the bottleneck often caused when two people need to log data at the same time on the same spreadsheet.
Data capture directly from visitors: A while back we moved to online, self-completed visitor surveys using SurveyMonkey, and these prompt visitors to rate their satisfaction. We wanted the daily percentage of satisfied feedback entries to make its way to our dashboard and to be aggregated (both combined with data across sites and then condensed into a single representative figure). This proved subtly challenging and had the whole team scratching their heads at various points, thinking about whether an average of averages actually meant something and, furthermore, how this could be filtered by a date range, if at all.
Google Analytics: Quietly ticking away in the background of all our websites.
Google Sheets as a place to join and validate data: It is a piece of cake to suck up data from Google Sheets into Data Studio, provided it’s in the right format. We needed a few tricks to bring data into Google Sheets in the first place, however, including Zapier, Google Apps Script and Sheets add-ons.
Zapier: gives us the power to integrate visitor satisfaction from SurveyMonkey into Google Sheets.
Google Apps Script: We use this to query the API on our data platform and then perform some extra calculations, such as working out conversion rates of exhibition visits vs museum visits. We also really like the record-macro feature, which we can use to automate any calculations after bringing in the data. Technically it is possible to either push or pull data into Google Sheets – we opted for a pull because this gives us control from within Google Sheets rather than waiting for a scheduled push from the data server.
Google Sheets formulae: We can join museum visits and exhibition visits in one sheet using the SUMIFS function, and then use this to work out a daily conversion rate (see the example formula after this list). This can then be aggregated in Data Studio to get an overall conversion rate, filtered by date.
Sheets Add-Ons: We found a nifty add-on for integrating sheets with Google Analytics. Whilst it’s fairly simple to connect Analytics to Data Studio, we wanted to combine the stats across our various websites, and so we needed a preliminary data ‘munging’ stage first.
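As an illustration of the SUMIFS join mentioned above (the sheet and column names here are made up for the example): if a sheet called Exhibitions holds a date in column A and a visit count in column B, and the combined sheet has the same date in A2 and that day’s museum visits in B2, then

    =SUMIFS(Exhibitions!B:B, Exhibitions!A:A, $A2)

pulls the exhibition visits for that day into, say, C2, and

    =C2/B2

gives the daily conversion rate, which Data Studio can then aggregate over any date range.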
Joining the dots…
1.) Zapier pushes the satisfaction score from SurveyMonkey to Sheets.
2.) A Google Sheets add-on pulls Google Analytics data into Sheets, combining figures across many websites in one place.
3.) Online data forms save data directly to a web database (MongoDB).
4.) The performance platform displays raw and aggregated data to staff using ChartJS.
5.) Google Apps Script pulls in performance data to Google Sheets.
6.) Google Data Studio brings in data from Google Sheets, and provides both aggregation and calculated fields.
7.) The dashboard can be embedded back into other websites including our performance platform via an iframe.
8.) Good old Excel and some VBA programming can harness data from the performance platform.
We’ve been testing out Google Data Studio over the last few months to get a feel for how it might work for us. It’s definitely the cleanest way to visualise our KPIs, even if what’s going on behind the scenes isn’t quite as simple as it looks on the outside.
There are a number of integrations for Data Studio, including lots of third party ones, but so far we’ve found Google’s own Sheets and Analytics integrations cover us for everything we need. Within Data Studio you’re somewhat limited to what you can do in terms of manipulating or ‘munging’ the data (there’s been a lot of munging talk this week), and we’re finding the balance between how much we want Sheets to do and how much we want Data Studio to do.
At the beginning of the sprint we set about looking at Bristol Culture’s structure and listing five KPIs each for 1.) the service as a whole; 2.) the 3 ‘departments’ (Collections, Engagement and Transformation) and 3.) each team underneath them. We then listed what the data for each of the KPIs for each team would be. Our five KPIs are:
Cost per transaction
Each team won’t necessarily have all five KPIs but actually the data we already collect covers most of these for all teams.
Using this structure we can then create a Data Studio report for each team, department and the service as a whole. So far we’ve cracked the service-wide dashboard and have made a start on department and team-level dashboards, which *should* mean we can roll out in a more seamless way. Although those could be famous last words, couldn’t they?
Any questions, let us know.
Darren Roberts (User Researcher), Mark Pajak (Head of Digital) & Fay Curtis (User Researcher)
Here at M Shed in Bristol, we have amazing views of the harbour from our lovely events suite. Here we hold all sorts of events, from large annual AGMs for corporations, to weddings, to some really great community events.
We have a fully automated, integrated audio visual system. With AMX and Crestron control systems, you can walk around the function rooms holding a smart touch-screen control panel and control just about everything! You can power up the projectors, lower the screens, open and shut the blinds, control volumes, select what to display from Sky TV, Blu-ray players and laptops – you can even change the lighting to any colour scheme you want.
It’s all pretty smart. Pretty smart, that is, apart from the dreaded Video Graphics Array connector – more commonly referred to as VGA – as the main interface. For all this advanced technology, presenters still have to connect their devices with a cable.
The VGA standard was introduced by IBM in 1987, and its dreaded 15-pin D-sub connector still refuses to go away to this day.
There’s something amiss when a presenter asks to use their nice, brand new iPad to run their presentation and you then have to use a Lightning-to-VGA adapter connected to a 10 metre VGA cable. These VGA connectors were designed for permanent installation, so when they are swapped between laptops and other devices several times a day the 15 tiny pins take a battering, and it only takes one bent pin for the screen to go pink, blue or stop working altogether.
Here comes the ingenious solution: take advantage of the Wi-Fi capabilities that are now standard on all devices.
The solution comes in the form of readily available, off-the-shelf technology, combined in such a way that a device’s screen can be transmitted to our projection system without any wires. We needed this to be added to our current system without affecting its existing capabilities. It is already a great integrated AV system; it just needs to be brought into the future without losing its ability to use the old VGA setup, which may be dated but works well as a last resort and backup.
Apple products long ago ditched VGA in favour of Mini DisplayPort and Lightning ports. A quick trip to any Apple store and an assistant will enthusiastically show how, with a flick of the device, a display can be “thrown” to another screen. It’s called AirPlay and is Apple’s secure version of Wi-Fi streaming.
Google, with their ever-innovative developments, have created a technology called Chromecast to the same effect, which is also based on Wi-Fi streaming.
With delegates at our events bringing Apple products, PCs and Android devices, we needed an all-in-one system, so we purchased products to enable this streaming. I ordered an Apple TV and a Chromecast, which both work by connecting to a Wi-Fi network and looking for compatible devices; between them they cover everything. The Chromecast is much cheaper than the Apple TV and can support Apple products too, but the ease of use and reliability of Apple on Apple devices seemed worth the extra investment. I calculated the cost of replacement VGA cables and, at the current rate we replace them, these new items will pay for themselves in just three years!
The main issue I faced in integrating these was how to patch them into a fully automated, closed AV system without affecting its capabilities. In essence: how to “retrofit” an Apple TV and Chromecast, and get them talking over M Shed’s Wi-Fi – a public network, effectively part of the council’s IT network and heavily locked down.
To solve the first issue, I had to literally climb into the AV racking system to find a suitable part that interfaced over HDMI (both the Chromecast and Apple TV use HDMI). I chose our Sky TV box and unplugged its HDMI cable. Onto this cable I placed an HDMI switcher, which allows four inputs to connect as one. The switcher is the sort of device you would buy if your TV at home only has one HDMI port and you have multiple devices to connect: a DVD player, games console and a Freeview box. I then connected the Sky box to the switcher along with the Apple TV and the Chromecast. After finding power outlets, whilst still inside the AV rack, I carefully slid the switcher unit so its control switch faced out of the front of the rack. A few cable ties and some Velcro later, the hardware was installed; all that was left to do was climb out and check it all worked.
Going back to the Crestron AV touch panel, I selected Sky TV and sure enough it appeared on the projection screen as it should. Then, using the controls on the switcher unit, I was able to toggle between Sky, the Apple TV and the Chromecast.
It then occurred to me that both the Apple and Google devices output their audio over HDMI too. However, the HDMI feeds the projector, which only projects the image, so the audio would be lost. Climbing back into the AV rack, I noticed that the Sky box was using analogue RCA connectors to send its audio to the integrated ceiling speaker system. Fortunately the switcher also had a 3.5mm TRS output (a headphone socket), so by setting the Sky box to output audio through its HDMI, all three devices were now feeding both audio and video to the switcher. Then, by connecting the Sky box’s old RCA lead to the switcher’s output via a TRS adapter, all three devices were feeding the ceiling speaker system. I climbed back out of the rack and started to create a new, independent Wi-Fi network for the devices to communicate over.
The new Wi-Fi network was actually the simpler part.
I purchased an ASUS RT-AC3200 Tri-Band Gigabit Wi-Fi router. This router is enormous, with six aerials, and looks like the Batmobile. I figured it would have to be reliable and able to cope with large amounts of data traffic, so I got the most powerful yet cost-effective router I could find.
The idea behind the router was to have all the devices (the Apple TV, the Chromecast and whichever device is streaming) on the same network – a network I could manage. Once on the same network, it was just a matter of connecting. The Apple system was really straightforward: you join the same Wi-Fi network as the Apple TV (I named the network “presentations”), choose the AirPlay option on the device, and as easy as that the screen is mirrored on the projector. The Chromecast setup was a little more involved. With an Android device, you have to install the Chromecast app; once installed it’s quite straightforward to pair with the Chromecast receiver, and then the screen can be mirrored on the projector. With a Windows laptop, I had to install the latest version of Chrome, which comes with the option to cast either just the browser tab you’re using or the whole desktop. This works well, but compared to the Apple TV there is a slight lag. In some instances you may also have to install the Cast extension for Chrome.
I also connected the Wi-Fi router to our open Wi-Fi system with an RJ45 cable, which allows people on the presentations Wi-Fi to still access the internet.
We are still trialling the system before we start to officially offer it as part of a package, but so far so good. It has been received very positively by users. We’ve had people walking around with iPads, controlling their presentation without being tied to the lectern by an old PC. We’ve even had the best man at a wedding wirelessly control the music playlist from his iPhone at the top table! PCs are still being used at the lectern as normal, but without the need to trail VGA cable everywhere. The only thing left to work out is wireless power… I suppose batteries will have to do for now.
Here at M Shed Bristol, we have some great working exhibits from the bygone era of Bristol Harbour’s industrial past: steam engines, steam boats, steam cranes and more. But the most recognisable and iconic are the four great towering electric cranes standing over 120 feet above the old docks.
As the Industrial Museum was being transformed into the present-day M Shed, two of the cranes would strike up conversations with each other, entertaining and informing passers-by about what they could look forward to seeing inside the new museum. However, due to renovations and the movement of the cranes, they fell silent again…
A few years later, due to popular demand I was tasked with bringing the cranes back to life!
To get these cranes talking was going to require rebuilding the whole audio and lighting system and recording new scripts. We were fortunate enough to have Alex Rankin, from our M Shed team, lend his penning abilities for the new scripts and Jacqui and Heather to voice the new crane characters.
To record the dialogue, we arranged to meet in a nice quiet corner of the L Shed store room. It’s a vast store, full of so many objects that there isn’t enough space to have them all on permanent display. With Jacqui and Heather sat at opposite ends of a table, I set up a pair of good quality condenser microphones, each plugged into its own channel on my external sound card – an Akai EIE four-channel USB sound card with great preamps and phantom power for the mics. This in turn was hooked up to my MacBook and a copy of Logic Pro. I recorded through each script a few times and was able to compile a seamless recording from the various takes. Once finished, I hard-panned the channels left and right so that on playback each voice would have its own speaker, left or right – crane 1 or crane 2.
To start building the new AV system, I searched around the vast L Shed stores and work rooms to find what was left of the old system, then decided what could be reused and what new equipment would be needed. I had been informed by our volunteer team for the working exhibits that everything had been removed from the cranes themselves; this meant starting from scratch.
The cranes themselves would need a loud speaker system for the voices and the crane cabs would need different coloured lights to flash in time with the talking as this helps to animate the cranes. That part was relatively easy. It meant scaling the cranes and bolting speakers to their underside and mounting lamps inside the cabs. I’ll be honest, I was helped by the Volunteer team and a huge mobile diesel powered cherry picker!
The hard part was how to feed the power and audio cables to the cranes. After some investigation it turned out that below the surface of the dockside is a network of underground pipes which lead to the base of each crane to feed their power. The great volunteer team once again worked miracles and fed over 600 metres of audio and lighting cable through for me, all leading back to the clean room in their ground floor workshop. With the cabling done, I just needed to build a lighting control and audio playback system.
My design solution, using what kit I could find and a few new bits, was to use a solid state compact flash media player, graphic equaliser, audio mixing desk and power amplifier for the audio. To have the light flash in time with the dialogue, I used a two light controller with a light to sound module, similar to what a DJ might use to have their disco lights flash to the music!
By having the audio go through the mixing desk, I was able to take an audio feed for each channel and direct it to the lighting controllers. Recording the two voices in stereo, with each voice on its own left or right channel, meant I only needed one media player and could easily control each channel on the sound desk. The graphic equaliser allowed me to tweak the speakers to acoustically fit their environment.
I looked at randomising the audio or having it triggered by people walking past, but with the number of people who pass outside M Shed, the cranes would be chatting away non-stop all day! Instead I created a long audio file of about three hours, with the different recorded scripts separated by random intervals of silence ranging from 5 to 20 minutes, so it always comes as a surprise when they start talking to each other.
The results are really effective. It is always fun to see people being caught by surprise as the cranes light up and start a conversation and to see them stop and listen in on what they have to say.
*explicit content warning* this post makes reference to APIs.
THE PROBLEM: Having set ourselves the challenge of improving the buying process, our task in Team Digital was to figure out where we can do things more efficiently and smartly. Thanks to our implementation of Shopify we have no shortage of data on sales to help with this; however, the process of gathering the information needed to place an order for more stock is time consuming – retail staff need to manually copy and paste machine-like product codes, look up supplier details and compile fresh order forms each time, all the while taking attention away from what really matters, i.e. which products are currently selling, and which are not.
In a nutshell, the problem can be addressed by creating a specific view of our shop data – one that combines the cost of goods with the inventory quantity (the amount of stock left), factors in a specific period of time, and can be combined with supplier information so we know who to order each top-selling product from, without having to look anything up. We were keen to get into the world of Shopify development, and thanks to the handy Shopify developer programme documentation and API help it was fairly painless to get a prototype up and running.
SETTING UP: We first had to understand the difference between public and private apps in Shopify. A private app can be hard-coded to speak to a specific shop, whereas a public app needs to be able to authenticate on the fly to any shop. With this in mind, we felt a private app was the way to go – at least until we know it works!
Following this and armed with the various passwords and keys needed to programmatically interact with our store, the next step was to find a way to develop a query to give us the data we need, and then to automate the process and present it in a meaningful way. By default Shopify provides its data as JSON, which is nice, if you are a computer.
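For the curious, a private app call is just an authenticated HTTPS request to the shop’s admin API – something like the following (the shop name and credentials are placeholders):

    https://{api-key}:{password}@our-shop.myshopify.com/admin/products.json?limit=250

…which returns JSON along these lines (heavily trimmed, and the product itself is made up):

    { "products": [ { "id": 123456789, "title": "Example mug", "vendor": "Example Supplier",
        "product_type": "Homeware",
        "variants": [ { "id": 987654321, "price": "9.99", "sku": "MUG-001", "inventory_quantity": 4 } ] } ] }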
TECHNICAL DETAILS: We set up a cron job on an AWS virtual machine running Node and MongoDB, using the MEAN stack framework and some open source libraries to integrate with Google Sheets and, notably, to handle asynchronous processes in a tidy way. If you’d like to explore the code – that’s all here. In addition to the scheduled tasks we also built an AngularJS web client which allows staff to run reports manually and to change some settings.
Which translates as: in order to process the data automatically, we needed a database and computer set-up that would allow us to talk to Shopify and Google Sheets, and to run at a set time each day without human intervention.
The way that Shopify works means we couldn’t develop a single query to do the job in one go, as you might in SQL (a traditional database language). There are also limits on how many times you can query the store. What emerged from our testing was a series of steps, and an algorithm which did multiple data extractions and recombinations, which I’ll attempt to describe here. P.S. do shout if there is an easier way to do this ;).
STEP 1: Get a list of all products in the store. We’ll need these to know which supplier each product comes from, and the product types might help in further analysis.
STEP 2: Combine results of step one with the cost of goods. This information lives in a separate app and needs to be imported from a csv file. We’ll need this when we come to build our supplier order form.
STEP 3: Get a list of all orders within a certain period. This bit is the crucial factor in understanding what is currently selling. Whilst we do this, we’ll add in the data from the steps above so we can generate a table with all the information we need to make an order.
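As a rough illustration of this step (the shop name, credentials and the 60-day window below are placeholders, not our production code), the orders can be pulled from the REST API like so:

    // Illustrative sketch only: fetch orders created in the last 60 days from the Shopify REST API.
    const fetch = require('node-fetch');

    const SHOP = 'our-shop.myshopify.com';                           // placeholder shop
    const auth = Buffer.from('apikey:password').toString('base64');  // private app credentials
    const since = new Date(Date.now() - 60 * 24 * 60 * 60 * 1000).toISOString();

    const url = 'https://' + SHOP + '/admin/orders.json' +
                '?status=any&limit=250&created_at_min=' + since;

    fetch(url, { headers: { Authorization: 'Basic ' + auth } })
      .then(function (res) { return res.json(); })
      .then(function (data) {
        // Each order contains line_items with product_id, variant_id, quantity and price,
        // which is what we join against the product and cost-of-goods data.
        console.log(data.orders.length + ' orders since ' + since);
      })
      .catch(console.error);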
STEP 4: Count how many sales of each product type have taken place. This converts our list of individual transactions into a list of products with a count of sales. It uses the MongoDB aggregation pipeline and is what turns our raw data into something more meaningful. It looks a bit like this (just so you know):
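(The snippet below is a simplified sketch rather than the exact pipeline we run – the collection and field names are illustrative, and startOfPeriod stands for the start date of the reporting window.)

    db.line_items.aggregate([
      { $match: { created_at: { $gte: startOfPeriod } } },               // only sales within the chosen period
      { $group: {
          _id: '$variant_id',                                            // one row per product variant
          amount_sold: { $sum: '$quantity' },                            // total units sold
          sales_value: { $sum: { $multiply: ['$price', '$quantity'] } }  // total takings
      } },
      { $sort: { amount_sold: -1 } }                                     // best sellers first
    ]);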
STEP 5: Add the data to a Google Sheet. What luck – there is some open source code which we can use to hook our Shopify data up to Google. A few steps are needed for the Google Sheet to talk to our data: we basically have our server act as a Google user and share editing access with him. Or her? And while we are beginning to personify this system, we are calling it ‘Stockify’, the latest member of Team Digital – although Zak prefers the lofty moniker Dave.
The result is a table of the top selling products in the last x number of days, with x being a variable we can control. The whole process takes quite a few minutes, especially if x > 60, due to limitations with each integration – you can only add a new line to a Google Sheet once per second, and there are over 500 lines. The great thing about our app is that he/she doesn’t mind working at night or early in the morning, on weekends, or at other times when retail managers probably shouldn’t be looking at sales stats but probably are. With Stockify/Dave scheduled for 7am each morning, we know that when staff look at the data to do the ordering it will be an up-to-date assessment of the last 60 days’ worth of sales.
We now have the following columns in our Google Sheet. Some come directly from their corresponding Shopify table, whereas others are calculated on the fly to give us a unique view of our data from which we can gain new insights.
product_type: (from the product table)
variant_id: (one product can have many variants)
price: (from the product table)
cost_of_goods: (imported from a csv)
order_cost: (cost_of_goods * amount sold)
sales_value: (price * amount sold)
name: (from the product table)
amount sold: (transaction table compared to product table / time)
inventory_quantity: (from the product table)
order_status: (if inventory_quantity < amount sold /time)
barcode: (from the product table)
sku: (from the product table)
vendor: (from the product table)
date_report_run: (so we know if the scheduled task failed)
TEST, ITERATE, REFINE: The first few iterations failed some basic sense checking – not enough data was coming through. This turned out to be because we were running queries faster than the Shopify API would supply the data, so transactions were missing. We fixed this with some loopy code, and now we are tweaking the period of time we wish to analyse – too short and we miss some important items; for example, if a popular book hasn’t sold in the last x days, it might not be picked up in the sales report. We also need to factor in things like half term, Christmas and other festivals such as Chinese New Year, which Stockify/Dave can’t predict. Yet.
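The ‘loopy code’ amounts to paging through the API one request at a time with a short pause between calls, rather than firing everything off at once. A simplified sketch (the fetchPage function and the 600ms pause are illustrative, chosen to stay under Shopify’s roughly two-calls-per-second limit):

    // Fetch every page of results sequentially, pausing between calls to respect the API rate limit.
    async function fetchAllPages(fetchPage) {
      let page = 1;
      let results = [];
      while (true) {
        const batch = await fetchPage(page);        // one API call per page of results
        if (batch.length === 0) break;              // no more data to fetch
        results = results.concat(batch);
        page += 1;
        await new Promise(function (resolve) { setTimeout(resolve, 600); }); // ~2 calls per second
      }
      return results;
    }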
AUTOMATIC ORDER FORMS: To help staff compile the order form, we used our latest Google Sheet-fu – a combination of pick lists, named ranges and the QUERY function – to look up all products tagged with a status of “Re-order”.
A list of suppliers appears on the order form template:
and then this formula looks up the products for the chosen supplier and populates the order table:
=QUERY(INDIRECT("last_60_days"&"!"&"11:685"), "select G where M='"&$B2&"' and J='re-order'")
The trick is for our app to check if the quantity sold in the last x days is less than the inventory quantity, in which case it goes on the order form.
NEXT STEPS: Oh, we’re not done yet! With each step into automation we take, another possibility appears on the horizon… here are some questions we’ll be asking our system in the coming weeks:
-How many products have not sold in the last x days?
-If the product type is books, can we order more if the inventory quantity goes below a certain threshold?
-Even if a particular product has not sold in the last 60 days, can we flag its product type anyway so it gets added to our automatic order form?
-While we are at it, do we need to look up supplier email addresses each time – can’t we just have them appear by magic?
…Furthermore, we need to integrate this data with our CRM… looks like we will be busy for a while longer.
Since October we have been working with Computer Science students from the University of Bristol to redesign the interface for our digital asset management system.
After we initially outlined what we wanted from the new design, there have been frequent meetings, and the students have now reached a stage where they can happily share their project so far.
Quick, appealing and easy to use, this potential new interface looks very promising!
With one of the main aims this year being to improve our online shop, Darren and I decided to improve and update some stock photos. We enrolled in a crash course from resident photographer David Emeney and, by the end of the session, thought we’d be able to do it, easy.
However, we came to find that photography is not as easy as it seems! First came the issue of space. Although David kindly allowed us to use his studio in the basement, with no computer nearby to check pictures and in fear of messing with any of his equipment, we thought it might be best to set up a studio a little closer by.
In true Blue Peter style, Darren and I set about creating our own in-office photography studio by collecting bits and pieces from around the museum to mirror the one in the basement. Cardboard tubes were stapled together to act as a rod holding the white background in place; this was held up by string wrapped multiple times around our window bars; counter tops were cleaned so as not to dirty the paper; and a staff noticeboard was even used behind the paper to block out any natural light. Of course, our office had to be rearranged first to fit such a project inside – a move which had me sneezing non-stop for a few days as the settled dust was disturbed!
After a while spent playing with the camera’s settings to find the right ones, we set to work photographing stock. With thanks to Debs for letting us borrow geology’s light, the products came out well and the online shop now looks a lot smarter for it. Having this type of light was key to taking a good image; the close proximity between the product and the light source, and changing the camera’s white balance when needed, added extra quality.
It was a really good experience getting to know the manual settings of a camera and how each product requires a slight adjustment, also to be up to date with what products we currently have in store. I look forward to doing more stock photo shoots in the future and hope, at some point, to have all products photographed like this to keep a consistent look for the online shop.
My name is Lacey Trotman and I am currently in the fifth week of my Digital Apprenticeship with Bristol Museums. Having left college this June after completing a two-year A level course in History, Art History, Sociology and Film Studies, I spent the summer searching for the right role for me. Despite college pushing for students to attend university – and many of my friends doing so – I felt the pressures of study and exams to degree level were not for me at this time. I chose instead to look at apprenticeships, as they gave me a chance to put my skills to practical use in a real-world setting.
Since starting on October 4th I have already begun to work on various projects broadening my range of skills and understanding: tackling the Discovery Pens, writing ‘How to’ guides, resizing images, composing surveys, working on the online shop, diving into the fast paced world of social media and editing blogs for the Museum website.
My first impression is that it’s an amazing place to work, with many opportunities to take on new things and progress. It’s also clear that there is a lot of work going into such an institution, with many more departments behind the scenes than I could possibly have imagined.
I have always loved visiting museums and galleries. As a proud Bristolian, I feel Bristol Museums provide some of the best in the country. Growing up, family holidays were full of excursions to castles and places of historical interest. Most recently, we visited St Michael’s Mount in Cornwall; our seaside cottage faced the historic site, making for picturesque views at all times. With Poldark-loving parents, we also visited the historic mines and ruins of workhouses on the Cornish coast. Cornwall was also home to the legendary artist Barbara Hepworth, one of the key artists to feature in the Art History exam I completed this year, so I was thrilled to see an original piece by her on our day trip to St Ives. Even better, a few weeks after I started this apprenticeship, Winged Figure was newly installed in the gallery – confirming this is definitely the best place to work!
Throughout my childhood I visited all the venues that come under the Bristol Museums canopy. My first trip to The Red Lodge Museum was with my primary school. I remember being asked by the staff if I wanted to dress as Queen Elizabeth I for the class picture, but afraid of the spotlight, I volunteered my best friend instead! Blaise Castle was always a childhood favourite of mine, and I can also remember visiting the old Industrial Museum with its variety of transport, planes and trucks. I was delighted when the new M Shed opened, offering fun and interactive features for free. I have still not got over missing the iconic Banksy vs Bristol Museum exhibition, or Dismaland, just 40 minutes away in Weston. With such strong links to Bristol, Banksy is a favourite artist of mine; recently he paid my old primary school a visit, leaving a large mural on their classroom wall.
The next two years fill me with excitement and expectation. A marketing qualification will add to my growing CV, and I hope to excel in my role, growing in both confidence and ability. I am keen to make the most of this experience and hope that all I have to offer will be seen as a positive addition to the hardworking Digital Team.
By David Wright (Digital Curating Intern, Bristol Culture)
Macauley Bridgman and I are now into week six of our internship as Digital Curating Assistants here at Bristol Culture (Bristol Museums). At this stage we have taken part in a wide array of projects which have provided us with invaluable experience as History and Heritage students (a discipline that combines the study of history with its digital interpretation) at the University of the West of England. We have now been on several different tours of the museum, both front of house and behind the scenes. Most notable was our store tour with Head of Collections Ray Barnett, which gave us an insight into issues facing curators nationwide, such as conservation techniques, museum pests, and the different methods of both utilising and presenting objects from across the museum’s collection.
We were also invited to a presentation by the International Training Programme, in which Bristol Museums is a partner alongside the British Museum. Presentations were given by Ntombovuyo Tywakadi, Collections Assistant at Ditsong Museum (South Africa), followed by Wanghuan Shi, Project Co-ordinator at Art Exhibitions China, and Ana Sverko, Research Associate at the Institute of Art History (Croatia). All three visitors discussed their roles within their respective institutions and provided us with a unique insight into curating around the world. We found these presentations insightful and thought-provoking, particularly the Q&A centred on the restrictions and limitations of historical presentation in different nations.
Alongside these experiences we have also taken on multiple projects for various departments around the museum, as part of our cross-disciplinary approach to digital curating.
Our first project involved working with Natural Sciences Collections Officer Bonnie Griffin to photograph, catalogue and conserve natural history specimens in the store. This was a privileged assignment, and perhaps the one we have found most enjoyable; the first-hand curating experience and close access to both highly experienced staff and noteworthy specimens were inspiring in relation to our future careers.
Following on from this, we undertook a project assigned by Lisa Graves, Curator for World Cultures, to digitise the outdated card index system for India. The digital outcome will hopefully see use in an exhibition next year to celebrate the seventieth anniversary of Indian independence during the UK-India Year of Culture. At times we found this work somewhat tedious and frustrating; however, upon completion we came to recognise the immense significance of digitising museum records, both for preserving information for future generations and for the increased potential such records provide for future use and accessibility.
We have now fully immersed ourselves in our main Bristol Parks project, which aims to explore processes by which the museum’s collections can be recorded and presented through geo-location technology. For the purposes of this project we have limited our exploration to well-known local parks, namely Clifton and Durdham Downs, with the aim of creating a comprehensive catalogue of records that have been geo-referenced to precise sites within the area. With the proliferation of online mapping tools, this is an important time for the museum to analyse how it records object provenance, and having mappable collections makes them suitable for inclusion in a variety of new and exciting platforms – watch this space! As part of this we have established standardised procedures for object georeferencing which can be replicated in future ventures and areas. Our previous projects for other departments provided the foundation for us to explore and critically analyse current processes and experiment with new ways to create links between objects within the museum’s collections.
As the saying goes, “time flies when you are having fun”, and this is certainly true of our experience to date. We are now in our final two weeks here at the museum and our focus is firmly on completing our Bristol Parks project.