One item on our requirements list is to weave interest-based navigation maps through our data site. Feedback from the recent SODU 2021 conference affirmed this:
I like the site’s tools and visualisations, but more needs to be done to help me navigate my path of interest through the prototype website.
In an exploratory step towards fulfilling that requirement, we have annotated some datapoints with explanations/narrative. The idea is that these annotations could become waymarks in navigation maps, guiding users between the datapoints which underpin data-based stories. We might even imagine how clicking a ‘next’ button on a waymark would visually ‘fly’ the user to the next datapoint in the story (which is, perhaps, on a different graph or a different page). But(!) back to the present…
Here’s how the annotations look in our very simple proof-of-concept implementation:
Each annotation is depicted by an emoji which is plotted beside a datapoint (on a graph, or in a table). When the user hovers over (or clicks on) an annotation’s emoji, a pop-up will display some informative text.
We want to code annotations just as we would any other dataset – as a straightforward CSV file. So we have built a data-driven annotation mechanism. This has allowed us to specify annotations, as data, in a CSV file like this:
Each annotation record contains datapoint coordinates which specify the datapoint against which the annotation is to be plotted. The datapoint coordinates include a record-type which specifies the dataset against which the annotation is to be plotted. (In this example, the specified dataset household-waste-derivation-generation is a derived dataset, based on the
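The mechanism described above can be sketched in a few lines. This is a minimal illustration only: the actual column names in our annotation CSV aren’t reproduced in this post, so the field names below (record-type, region, year, emoji, text) are assumptions chosen to match the record structure described above.

```python
import csv
import io

# A hypothetical annotation CSV. The record-type column names the dataset,
# the region/year columns are the datapoint coordinates, and emoji/text
# supply what gets plotted and what the pop-up displays.
# (Column names and values are illustrative, not taken from the real file.)
annotations_csv = """\
record-type,region,year,emoji,text
household-waste-derivation-generation,Stirling,2019,⚠️,Example note about this datapoint.
household-waste-derivation-generation,Glasgow City,2018,ℹ️,Another example note.
"""

# Annotations are parsed just like any other dataset: plain CSV records.
annotations = list(csv.DictReader(io.StringIO(annotations_csv)))

for a in annotations:
    print(a["record-type"], a["region"], a["year"], a["emoji"], a["text"])
```

Treating annotations as just another CSV dataset means the same loading code serves both the data and its waymarks, which is the point of the data-driven approach described above.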
This proof-of-concept, data-driven, annotation mechanism has been useful because it has:
given us a model with moving parts to learn from,
provided hints about how annotations can be used to help users understand and navigate the data,
shown us that we need more structure around the naming and storage of derived datasets (and their annotations), and
uncovered the difficulties of retrofitting an annotations mechanism into our prototype-6 website. (Annotations are displayed using off-the-shelf Vega-Lite tooltips and Bulma CSS dropdowns, but these don’t provide a satisfactory level of placement control or interactivity. More customised webpage components will be needed to provide a better user experience.)
What do households put into their bins and how appropriate are their disposal decisions?
To help answer that question, Zero Waste Scotland (ZWS) occasionally asks each of the 32 Scottish councils to sample their bin collections and to analyse their content. This compositional analysis uncovers the types and weights of the disposed-of materials, and assesses the appropriateness of the disposal decisions (i.e. was it put into the right bin?).
Laudably, ZWS is considering publishing this data as open data. Click on the image below to see a web page that is based on an anonymised subset of this data.
We have bought the domain name wastemattersscotland.org for the waste data website that we are developing.
At the time of writing, https://wastemattersscotland.org is being redirected to our latest prototype, prototype-6, as can be seen in the screenshot below.
Discover how many cars’ worth of CO2e is avoided each year because of this university-based reuse store
The Fair Share is a university-based reuse store. It accepts donations of second-hand books, clothes, kitchenware, electricals, etc. and sells these to students. It is run by the Student Union at the University of Stirling, and it meets the Revolve quality standard for second-hand stores.
The Fair Share is in the process of publishing its data as open data. Click on the image below to see a web page that is based on a draft of that work.
With Glasgow hosting the UN Climate Change conference (COP26) later this year, it was fitting that this year’s The Data Lab data analysis hackathon (held last week) had the theme “pollution reduction”.
Three organisations provided challenge projects for the hackathon teams: we provided a “waste management” project based on our easier-to-use datasets; Code the City provided an “air quality” project; and Scottish Power an “electric vehicle charging” project.
The hackathon was led by Filament, a young Scottish tech start-up. They have an interesting product that is, essentially, a sharable, cloud-hosted Jupyter notebook.
Each day a new cohort of teams would tackle the project challenges. We helped by answering their questions about our datasets, and by suggesting ideas for investigation.
At the end of each day the teams presented their findings.
It was informative to see how the teams (each with a mix of skills that included programming, data analysis and business acumen) organised themselves for group working, handled the data, and applied learned analysis techniques.
The teams had a relatively short amount of time to work on their projects, so having easy-to-use datasets was a deciding factor in how much they could achieve. One take-away is clear, and it helps substantiate an aim of our DCS project: open data needs to be easy to use, not just accessible. Making data easier for non-experts to use opens it to a much wider audience and to much more creativity.
Stirling Council set a precedent by being the first (and still the only) Scottish local authority to have published open data about its household waste bin collections.
The council is currently working on increasing the fidelity of this dataset, e.g. by adding spatial data to describe collection routes. However, several interesting pieces of information can still be squeezed from its current version. For details, visit the Stirling bin collection page on our website mockup.