Code For Europe (Scotland)

As I wrote in my previous post, Aberdeen City Council have joined Code for Europe 2014.

Code For Europe Badge

Following the international meeting in Barcelona, and ahead of the appointment of the code fellows, the four participating Scottish councils came together in Edinburgh this week to compare where we are and what our next steps will be.

As part of that meeting Katalin Gallyas of Amsterdam city municipality addressed the group on that city’s experiences. She spoke of the development of the apps mentioned in my previous post, but also covered a number of other issues, including the tools they use, such as Open Data Kit, and the choice of Open Data platform. “Beware the big corporates offering expensive, proprietary platforms”: Amsterdam chose CKAN, an open source one. This is a choice that the Scottish authorities need to make – and perhaps an opportunity for something on which we could collaborate.
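For anyone weighing that platform choice, CKAN exposes a simple JSON “Action API” for listing and querying a portal’s datasets, which is part of what makes it attractive over closed alternatives. A minimal Python sketch – the portal URL is a made-up placeholder, but the response shape matches CKAN’s documented `package_list` action:

```python
import json

# Hypothetical portal address -- substitute any real CKAN instance.
CKAN_BASE = "https://opendata.example.gov.uk"

def package_list_url(base):
    """Build the CKAN Action API endpoint that lists every dataset name."""
    return base.rstrip("/") + "/api/3/action/package_list"

def parse_package_list(raw_json):
    """Extract dataset names from a CKAN package_list response."""
    payload = json.loads(raw_json)
    if not payload.get("success"):
        raise RuntimeError("CKAN call reported failure")
    return payload["result"]

# A canned response of the shape CKAN returns:
sample = '{"success": true, "result": ["spending-2013", "planning-apps"]}'
print(parse_package_list(sample))   # ['spending-2013', 'planning-apps']
```

In practice you would fetch `package_list_url(CKAN_BASE)` over HTTP and feed the body to `parse_package_list`.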

She also covered the extensive and enviable Open Data ecosystem that exists in Amsterdam. This involves:

  • SMEs, hackers and coders
  • Business accelerators
  • Innovation intermediaries
  • EU projects and funding
  • An open-minded city government
  • Participatory citizens

In 2014 Amsterdam are seeking national and EU funding – and they have committed to having four full-time employees who, amongst other duties, will be collecting evidence and use-cases on reusable apps.

Badge with the slogan ‘I Amsterdam’

All of this activity and engagement is bringing the state of the art into city hall, shifting from the traditional in-house one-off developments that we see all over the place to the use of tools such as Tableau or GitHub, allied to the adoption of, and growing familiarity with, concepts such as commons and standards, civic coders, hackers and grassroots tech.

She suggested that in following such a model Scotland needs to be looking for sources of external funding – such as Horizon 2020, but also at growing entrepreneurship, and developing a local Open Data ecosystem.

We also need to look at Apps For Amsterdam, and Appsterdam, which has loads of activities and even its own ‘Mayor’, Mike Lee.

All of this activity acts as a real Open Data catalyst. It exposes the need for a vocabulary match between the policy makers and the civic app space, generates clear use cases for the data, and highlights the need for local authorities to adopt commons and open standards.

Suzanne leads a workshop

Katalin’s presentation was followed by some highly energetic road-mapping of the Scottish programme, led by Suzanne of the Waag Society, of whom I wrote in my previous post. This generated some positive Twitter comments.

I’ll return to this topic in future posts as the Scottish C4E programme gets underway.

UK Railways and internet connectivity

Over the last couple of years I have spent a large amount of time travelling by train; mainly between Aberdeen and Edinburgh, Stirling or Glasgow. And on each of those journeys I have experienced the same frustrating issues in trying to get an internet connection for my laptop or phone.

I have a Samsung S4 and an unlimited data plan which allows tethering of other devices, so on the face of it I should have no problems – but the quality of the Three network is so patchy as to be unusable for much of the journey. On common parts of these routes – between Aberdeen and Dundee, say – as much as 80% of the journey has no functioning 3G or HSDPA network signal. Even in Dundee station itself there is no signal – whereas there is a reasonable one in the centre of the Tay bridge! And where there is a signal, it frequently drops out after 5 minutes or less. This makes mobile working, or any other online activity, almost impossible.

Speaking to friends and colleagues, it is clear this is not unique to Three – the other operators suffer the same issues.

You might say that I should use the in-train WiFi. But my experience of this is mixed, and no train operator provides a reasonably priced service. ScotRail’s is free but frequently impossibly slow and often throttled; CrossCountry’s is paid-for and expensive on a per-journey basis; and East Coast’s offers only a limited 15-minute free service, after which it is very expensive.

Perhaps there needs to be a movement for free WiFi on trains, just as there are several for free WiFi in hotels.

But why don’t the UK network operators ensure better HSDPA coverage along our railways? Many of us pay over £30 per month for an unlimited service. Do we really want to pay an extra £5 or £10 per day or journey to fill the holes in Three’s or other operators’ service?

This raises the opportunity for the development of a cross-platform app that does the following, in a fashion similar to war-driving apps:

  • Notes the network operator to which the handset connects
  • Uses GPS to plot the journey
  • Monitors the signal strength of 3G / H data network along the journey, capturing location constantly
  • Notes black spots in the same way
  • Uploads the anonymised data to a central mapped portal
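A rough sketch of the on-device logic those steps imply – purely illustrative, with invented field names and a signal threshold chosen for the example rather than taken from any standard:

```python
from dataclasses import dataclass

# Threshold (in dBm) below which we treat a sample as a black spot --
# an illustrative value, not an official one.
NO_SERVICE_DBM = -110

@dataclass
class Sample:
    operator: str   # network the handset is connected to
    lat: float      # GPS position of the reading
    lon: float
    dbm: int        # measured 3G/HSDPA signal strength

def black_spots(samples):
    """Return the samples where there was effectively no data signal."""
    return [s for s in samples if s.dbm <= NO_SERVICE_DBM]

def coverage_fraction(samples):
    """Fraction of the journey's samples with a usable signal."""
    usable = sum(1 for s in samples if s.dbm > NO_SERVICE_DBM)
    return usable / len(samples)

# A toy journey: Aberdeen fine, Dundee station dead, Tay bridge reasonable.
journey = [
    Sample("Three", 57.14, -2.09, -85),
    Sample("Three", 56.46, -2.97, -120),
    Sample("Three", 56.44, -2.99, -95),
]
print(len(black_spots(journey)), coverage_fraction(journey))
```

The real app would capture these samples continuously and batch-upload them when a connection reappears.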

The portal would then highlight the various networks’ performance, show routes, compare network operators, and provide the data as Open Data for analysis. This could then be used to put pressure on network operators to improve their coverage.
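On the portal side, comparing operators could start as simply as aggregating the uploaded samples. A hedged sketch, with an invented record format of (operator, had_signal) pairs:

```python
from collections import defaultdict

def coverage_by_operator(records):
    """records: (operator, had_signal) pairs uploaded from handsets.

    Returns each operator's share of samples with a working signal --
    the headline figure the portal would map and publish as Open Data."""
    tally = defaultdict(lambda: [0, 0])   # operator -> [good, total]
    for operator, had_signal in records:
        tally[operator][0] += 1 if had_signal else 0
        tally[operator][1] += 1
    return {op: good / total for op, (good, total) in tally.items()}

uploads = [("Three", True), ("Three", False), ("Three", False),
           ("Vodafone", True), ("Vodafone", True)]
print(coverage_by_operator(uploads))   # Three ~0.33, Vodafone 1.0
```

A real portal would of course aggregate per route segment rather than overall, but the principle is the same.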

That’s one app I would use.

Now don’t get me started on the lack of power points on Scotrail trains…..

Facts Are Sacred – a review

Facts are Sacred is the new book by Simon Rogers, the award-winning editor of the Guardian’s Datablog and a news editor on the paper, working with the graphics team to bring figures to life on the page. He was closely involved with the Guardian‘s ground-breaking decision to crowdsource 450,000 MP expenses records, as well as the organisation’s coverage of the Afghanistan and Iraq ‘Wikileaks’ war logs.

The book, which is available in hard cover and a very inexpensive Kindle edition, describes the changing world of data journalism, touching on big data, open data and citizen hacktivism.

Simon describes the methods and approaches taken by his colleagues on the Guardian, and shows how everyone can get involved in creating, analysing and visualising data.

It is clear that in the last four years things have changed dramatically. Governments, their agencies and local authorities have all started to provide open data, with varying levels of commitment, standards and approaches; the fact that they have made these limited inroads is positive and significant, and we need to press to make open data and transparent government the norm. But it does not mean that the job is done. The challenge remains for journalists and citizens alike in contextualising the data, analysing it, checking accuracy and uncovering what the data tells us.

That exposes a need for more, not less, citizen involvement, which itself necessitates better skills in data analysis, understanding statistics and being able to paint a picture (or at least create an infographic) with the data exposed.

Using examples from the 2011 riots, Hurricane Sandy, MPs’ expenses, and more, Rogers tells an engaging story of how the Guardian, in particular, interacted with its readers, challenged the government, used open source tools, and broke new ground in not only using data to source new stories, but also in starting to lay the foundations for live data reporting and analysis. This is something that could not have been imagined only 10 years ago, nor when CP Scott, celebrating 50 years as the Guardian’s editor, wrote in 1921 “Comment is free but facts are sacred”, from which the title is drawn (let alone when the Guardian’s own first edition in 1821 carried some tabular data about Manchester schools).

This is an excellent book, which takes the pulse of data journalism as it stands in this early phase of open data. It offers us all a chance to develop our skills in data analysis and citizen journalism and reminds us that we all can hold authorities to account, and collaborate to develop further tools and crowd-sourced analysis.

Highly recommended!

Ian Watt, 6 May 2013

Aberdeen Cultural Hackathon 4th – 5th May

Aberdeen’s newest Hackathon is all set to go ahead! Mark your diaries for 4th and 5th May. Details of how to get involved in this coding weekend with a difference are below.

After several weeks of trying to agree dates with potential partners for a two-centre Open Data Hackathon in Aberdeen and Edinburgh, and stumbling over logistics, I finally had to abandon the original plan and alter my approach.

I changed the concept to a Culture Hack and pitched it to the bid team for the City of Culture 2017. Meeting with them, and Dr Bruce Scharlau of Aberdeen University, it was obvious from the start that we shared a commitment to get this off the ground, which pleased me greatly.

The following is taken from @aberdeen2017‘s latest newsletter:

We are hosting a Cultural Hackathon at Seventeen over the first weekend in May.

The hackathon will see computer programmers brought together with artists, musicians, designers and writers in a competition to create something around one of the key themes of the UK City of Culture bid.

We will release full details of our themes on the day of the bid submission.

We have no idea what will be created.  Previous hackathons have resulted in new apps, new software and new ways of doing things on the web.

This is the first cultural hackathon we’ve had in Aberdeen and we’re looking for some creative people willing to spend a weekend in Seventeen experimenting away. We’ll provide refreshments, internet access and some top-class prizes.

Programmers, writers, poets, designers or artists who might be interested in working on the hackathon should contact us at
Does anyone else have a burning desire to work on some machine-generated poetry based on the social media outpourings of local citizens? Why not join me there?

PLODS – but faster

The following article is an adaptation of one I wrote for my team’s internal blog at work.

Today I coined an acronym. PLODS – Public Linked Open Data Sets.

Anyone who does not understand why publishing open data is a good thing might benefit from watching this brief presentation: How to make the flowers bloom (or why Open Data is necessary but not enough to make a difference) before reading further.

You can also watch this 35min video from the data2.0 conference earlier this week: Why Open Data?

Last year Aberdeen City Council were the first Scottish authority to publish Open Data. Since then we’ve added further data sets. Being the first meant that we got positive attention, which led to the setting up of a pilot project with the University of Aberdeen and Swirrl IT. That is now up and running and there will be more news of it soon.

Towards the end of 2010 I proposed to the rest of SOCITM’s regional committee that we host an “open and linked data day” to explain what it is all about to the heads of ICT in Scotland. That day took place on 25th March in Stirling, where I introduced two speakers. The first was Stuart Harrison, the hands-on and innovative webmaster of Lichfield District Council. While many authorities in the south have started publishing their spending data online after a push from Central Government, Lichfield started earlier and has gone much further than most. Stuart described what they’ve done and why it proved to be good both for the council’s efficiency and for the citizen. He talked about using existing council data – and even using open data from other agencies – to enhance their website. He told the audience about his approach and how it relates to hack days and to engaging with a wider developer community. You can see Stuart’s slides on SlideShare.

The second speaker was Bill Roberts from Swirrl. Bill explained what Linked Data is and how it differs from Open Data. He made it clear that only two authorities in the UK are actively looking at Linked Data at the moment: Aberdeen and Lichfield. He painted a picture of the ongoing pilot project, believed to be the first of its kind for any UK authority, explained what we’re seeking to do, what the benefits will be and what we aim to get out of it. He also explained what the next stages could be – how the broader Scottish local authority IT community might build on this authority’s work.

The Stirling event was well attended by around 27 senior ICT people from the public sector in Scotland and the presentations were well received and followed by intelligent questioning. As a result it looks like SOCITM Scotland will be working with the Scottish Government and a representative of the Improvement Service on what will probably be a set of guidance or best practice for the public sector in Scotland for Open and Linked Data.

Unconnected to this, on 26th March I attended a National Hack the Government Day (#NHTG11) at Aberdeen University. You can read more about that event here. This echoed the national #NHTG11 day held in the Guardian’s HQ in London on the same day. The event was also hot on the heels of the Hacks and Hackers day in Glasgow, as reported in ScraperWiki’s blog, which looks like it was extremely good fun and very productive.

All of these events had one thing in common: coders and developers from around the country coming together to do creative things based on government datasets. While we in the public sector may be able to do similar things, we frequently find that we have neither the resources nor, often, the skills to do what we’d want to do. But as holders of data – to follow the analogy at the top of this page – we can at least provide the seeds. Incidentally, if you are interested in a first-hand account of the challenges presented by closed data, have a look at my recent experiences in this area.

Since the SOCITM day and #NHTG11 I’ve had a meeting with our neighbouring council. They, as we did last year, are about to unveil a number of data sets – and even more of them. We’ve agreed that there would be benefits in linking our open data plans and approaches. As we established at the #NHTG11 day, providing the same or similar data sets on both sides of the authority boundary would be advantageous. So we’ve agreed to collaborate on this:

  • We’re going to share lists of published data sets
  • We’re going to compare planned data releases
  • We’ll look at what we could release (and with what effort)
  • We’ll each look at links between open data and data held in our GIS systems, identifying what could be made available from there too,

all with a view to matching, where possible, the open data released by each authority, to the benefit of the community at large. We’ll also share best practice and put developers from the two authorities in touch with each other.
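Comparing the two councils’ lists of published and planned data sets is itself a small programming exercise. A sketch using plain Python sets – the dataset names are invented for illustration:

```python
def compare_catalogues(ours, theirs):
    """Show where two councils' open data catalogues overlap and diverge."""
    ours, theirs = set(ours), set(theirs)
    return {
        "matched": sorted(ours & theirs),      # same data both sides of the boundary
        "only_ours": sorted(ours - theirs),    # candidates for them to release
        "only_theirs": sorted(theirs - ours),  # candidates for us to release
    }

city = ["spending-2011", "polling-places", "public-toilets"]
shire = ["spending-2011", "core-paths"]
print(compare_catalogues(city, shire))
```

The “only” buckets become each authority’s to-do list for matching releases across the boundary.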

We’re also keen to engage with the local development community in the North East, and to ask them what they would like to see us release. We need to change from a “we’ll give you what we think you’d like” approach to one that is much more collaborative with the consumers of our data. So what, for example, would help them most?

By working with our neighbouring authority, feeding into national policy development, and engaging with the local extended developer community, we stand to deliver the most from this programme, with the benefits accruing not so much to the councils as to the local economy and to local citizens, through better uses of our data.

NHTG11 – a personal reflection

Yesterday I spent nine and a half hours in a room with a group of hard-core coders and not only lived to tell the tale, but thoroughly enjoyed the experience.

Firstly I should say thanks to Dr Bruce Scharlau of Aberdeen University for hosting the event, and for providing coffee and pizza. Given that fewer than expected participants turned up there certainly was no shortage of food!

From the outset it was clear that this was a free-form day, with an encouragement for all to collaborate and to experiment on whatever we liked.

Initially we split into three groups. I paired with @mr_urf and we agreed to use some data from Aberdeen City Council – avoiding the already open data in favour of scraping some data which was locked in HTML tables. We chose the Food Hygiene Inspection Data, and we chose to use ScraperWiki to do the scraping. While this is a potentially brittle solution, it does allow the user to be alerted if the scraped page changes.

The most significant challenge (other than the fact that neither of us had used ScraperWiki before) was that while the tutorial and sample code demonstrated scraping multiple pages by following Next links, this page proved more difficult, as the MS ASP-generated pages used JavaScript postback links to paginate the records. So Mr Urf had to be more creative in finding a way of overcoming the challenges this presented.
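For anyone facing the same obstacle: ASP.NET pages paginate by POSTing hidden form fields (__VIEWSTATE, __EVENTTARGET and friends) back to the same URL, so a scraper has to replay that form rather than follow a link. A hedged sketch of the payload-building step – the field values and control name are illustrative, and this is not Mr Urf’s actual code:

```python
def postback_payload(hidden_fields, event_target, event_argument=""):
    """Build the form data an ASP.NET 'postback' link would submit.

    hidden_fields: the page's hidden inputs (__VIEWSTATE etc.), scraped
    from the current page with your HTML parser of choice.
    event_target: the server-side control that 'fired' -- e.g. the
    Next-page link's name, found in its javascript:__doPostBack(...) href.
    """
    payload = dict(hidden_fields)
    payload["__EVENTTARGET"] = event_target
    payload["__EVENTARGUMENT"] = event_argument
    return payload

# Usage sketch: POST this payload to the same URL to fetch page 2.
data = postback_payload({"__VIEWSTATE": "...opaque blob..."}, "gridResults$next")
print(sorted(data))
```

Repeating that POST, re-reading the hidden fields from each response, walks through the paginated records.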

All of this took quite a bit of time, understandably, but eventually we had a scrape working that sucked the data in and saved it, allowing it to be accessed as open data.

While all of this was going on, as the non-coder, I spent my time preparing a presentation on our project and making suggestions for naming it. After much amusement we settled on “Shiny Spoon” as the opposite of Greasy Spoon.

I quickly registered the domain name, linked it to Mr_Urf’s hosting, and completed and uploaded the presentation.

By the end of the afternoon Mr Urf was able to display the data on his own page.

Despite Mr Urf’s best efforts it didn’t feel like an enormous success: we’d scraped data and re-presented it on a web page with less functionality than the original source. But this was only part of our proposed project and we agreed that we’d like to get back together to work on it further. The added value will come in combining the data with other sources.

And we shouldn’t forget that we did manage to open up data that was previously locked in an HTML page – allowing anyone else to re-purpose it in their own projects. Also, given that the Health and Safety inspection reports that we proposed to scrape are in a very similar system to the original, it should be reasonably easy to scrape them in exactly the same way, allowing the data sets to be merged.

Mr_Urf even found time to upload the source to GitHub. If I get the address I’ll link to it here. I also wrote it up hurriedly for Rewired State.

At the end of the day we each gave a short presentation on what we’d done. The other guys’ presentations showed work that was more conceptual: while both projects appeared to have real potential, they’d need some work to put them into production. One was a system to allow school pupils to work out what exams they need to pass in order to get into a university for a particular degree, with an end job in mind. This required data from a large number of sources, including LinkedIn, the SQA, UCAS and individual university sites.

The other was a concept to allow users to define their own linked data requirements as bundles and then to generate a structure, and a means of hosting it, as well-structured linked data. What then became of the data would be up to the user community. This seemed an intriguing proposition and would benefit from being worked up further.

Tomorrow I go back to my day job. As I do so, I’ll be bearing in mind the first-hand experience of how difficult it can be to get data out of council websites.

To date we’ve agreed that there is a need to provide data openly, and we’ve dipped our toe in the water, picking off simple sets of data and making these available as standards-compliant open data.

My aim is to work up a strategy for how we provide open and linked data in the future.

  • Should we now try to follow the lead of Enschede and make all data open by default, with only valid exceptions held in closed formats?
  • Should we also attempt to retrofit the website’s existing data-driven systems to pull the data out of them for the ease of end users,  or
  • Should we encourage the user community to assist us, using ScraperWiki and other tools?

Expect another blog post on this topic in the near future!


Two great new Google projects

Two new Google projects crossed my radar today – and very interesting they both are, too.

The first is Google For Non-profits. It offers the chance to reach and engage supporters, raise awareness of campaigns and improve operations – plus a mass of other tools to use.

As the current secretary and occasional web-master of a local arts group this looks very interesting indeed!

The second thing I noticed was Google’s new book / magazine Think Quarterly.

At Google, we often think that speed is the forgotten ‘killer application’ – the ingredient that can differentiate winners from the rest. We know that the faster we deliver results, the more useful people find our service.

But in a world of accelerating change, we all need time to reflect. Think Quarterly is a breathing space in a busy world. It’s a place to take time out and consider what’s happening and why it matters.

Our first issue is dedicated to Data – amongst a morass of information, how can you find the magic metrics that will help transform your business? We hope that you find inspiration, insights, and more, in Think Quarterly.

Matt Brittin
Managing Director, UK & Ireland Operations, Google

I’m particularly interested in Nigel Shadbolt’s piece “Open for Business”.

One drawback, though: it appears that the book is not downloadable (e.g. for Kindle). Pity.

Hack The Government Day – how will I participate?

This Saturday sees events to mark the third National Hack The Government Day.

It will also be marked across the UK by events such as the one being run at Aberdeen University by Dr Bruce Scharlau.

I’ve committed to going to it too. As a keen supporter of the principles of Open Data and the benefits that it can deliver I could hardly not go.

Then I started thinking – what will I do there? It is five years or more since I wrote any real code, and my team won’t let me near any live systems any more. Sure, I can play with Yahoo Pipes and Excel spreadsheets, but we need developers there – and more than are presently signed up – to make it a really worthwhile day.

That said, I’m not alone in my doubts about what I can contribute.

So I’ll go along with an open mind. If nothing else, I make good tea and coffee.

Meantime, in addition to the useful links that we’ve collected on the page here I’ve found a few more.

The Open Data search engine looks useful. You can also read the article about it. That said, the filtering down to Scotland level includes much English data, and a search for Aberdeen returns no results.

Where no Open Data exists, ScraperWiki looks interesting for grabbing new sources. Their blog has some useful examples of what’s been done with it.

If you have any suggested data sources for Aberdeen and the North East please post them below.