12 Nov 2014

Last month, I was fortunate enough to fly off to Austin with a group of colleagues for a week-long IBM Design Thinking camp. It was an opportunity to get away from the day job, with laptops all but banned, and have a deep-dive into what IBM Design is about and how it can be applied.

As a relatively new effort within the company, IBM Design sets out to bring a focus back to where it should be: the human experience of our products and services. This isn’t just about making pretty user interfaces; it is about the entire experience of our products.

As an engineer, the temptation is always there to create shiny new features. But no matter how shiny it is, if it isn’t what a user needs, then it’s a waste of effort. The focus has to be on what the user wants to be able to do. This is something I’ve always tried to do with Node-RED; we often get suggestions for features that, once you start picking at them, are really solutions looking for a problem. Once you work back and identify the problem, we’re often able to identify alternative solutions that are even better.


It’s often just a matter of asking the right question. At Designcamp, the very first exercise we were asked to do was to draw a new type of vase. Everyone drew something that looked vaguely vase-like. Then (spoilers…) we were asked to draw a better way to display flowers. At this point we got lots of decidedly un-vase-like ideas that were much more imaginative. It’s the difference between asking for a feature and asking for an idea. The former presupposes a lot about the nature of the answer, the latter is focused on not just the what, but also the why.

This relentless focus on the user isn’t a new idea. GDS, who are doing incredible things with government services, have it as their very first Design Principle. But it is refreshing to see this focus being brought to bear within a transformation of how the entire company operates.

Oh, and of course, being in Austin, we got to screen-print our own IBM Designcamp T-shirts to commemorate the visit.

Go to Designcamp, screen print your own t-shirt. Obvs.

Lots more photos from the week over on flickr.

18 Apr 2014

Having recently added a Travis build for the Node-RED repository, we get a nice green, or sometimes nasty red, merge button on pull-requests, along with a comment from Travis thanks to GitHub’s Commit Status API.

From a process point of view, the other thing we require before merging a pull-request is to ensure the submitter has completed a Contributor License Agreement, or CLA. This has been a manual check against a list we maintain, but with GitHub’s recent addition of the Combined Status API, I decided we ought to automate this check.

And of course, why not implement it in a Node-RED flow.

There are a few simple steps to take:

  1. know when a pull-request has been submitted, or that we want to manually trigger a check
  2. check the submitter against the list
  3. update the status accordingly

Rather than poll the API to see when a new pull-request has arrived, GitHub allows you to register a webhook to get an HTTP POST when certain events occur. For this to work, we need an HTTP-In node to accept the request. As with any request, we need to make sure it is responded to properly, as well as keeping a log – handy for debugging as you go.
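In practice, that just means wiring the HTTP-In node through to an HTTP Response node, with a function node in between to set the reply. A minimal sketch of that function node – the exact status and body are an assumption, as GitHub only needs a 2xx back:

// Acknowledge the webhook delivery
msg.statusCode = 200;
msg.payload = "OK";
return msg;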

nr-gh-flow-1

With that in place, we can register the webhook on the repositories we want to run the checks on.

  1. From the repository page, find your way to the webhook page via Settings ➝ WebHooks & Services ➝ Add webhook
  2. Enter the URL of the HTTP-In node
  3. Select ‘Let me select individual events.’ and then pick, as a minimum, ‘Pull Request’ and ‘Issue comment’
  4. Ensure the ‘Active’ option is ticked and add the webhook

When the POST arrives from GitHub, the X-GitHub-Event HTTP header identifies the type of event that triggered the hook. As we’ve got a couple of different event types registered for the hook, we can use a Switch node to route the flow depending on what happened. We also need a function node to access the HTTP header, as the Switch node can’t do that by itself.

// GitHub Event Type
msg.eventType = msg.req.get('X-GitHub-Event');
return msg;

nr-gh-flow-2

Skipping ahead a bit, updating the status of the commit requires an HTTP POST to the GitHub API. This is easily achieved with an HTTP Request node, but it does need to be configured to point at the right pull-request. Pulling that information out of the ‘pull_request’ webhook event is easy, as it’s all in there.

// PR Opened
msg.login = msg.payload.pull_request.user.login;
msg.commitSha = msg.payload.pull_request.head.sha;
msg.repos = msg.payload.pull_request.base.repo.full_name;
return msg;

It’s a bit more involved with the ‘issue_comment’ event, which fires for all comments made in the repository. We only want to process comments against pull-requests that contain the magic text that triggers a CLA check.

Even after identifying the comments we’re interested in, there’s more work to be done, as they don’t contain the information needed to update the status. Before we can do that, we must first make another HTTP request to the GitHub API to get the pull-request details.

// Filter PR Comments
msg.repos = msg.payload.repository.full_name;
if (msg.payload.issue.pull_request &&
       msg.payload.comment.body.match(/node-red-gitbot/i)) {
    msg.url = "https://api.github.com:443/repos/"+msg.repos+
              "/pulls/"+msg.payload.issue.number+
              "?access_token=XXX";
    msg.headers = {
        "user-agent": "node-red",
        "accept": "application/vnd.github.v3"
    };
    return msg;
}
return null;

Taking the result of the request, we can extract the information needed.

// Extract PR details
var payload = JSON.parse(msg.payload);
msg.login = payload.user.login;
msg.commitSha = payload.head.sha;
msg.repos = payload.base.repo.full_name;
return msg;

nr-gh-flow-4

Now, regardless of which event has been triggered, we have everything we need to check the CLA status: if msg.login is in the list of users who have completed a CLA, send a ‘success’ status; otherwise send a ‘failure’.

// Update CLA Status
var approved = ['list','of','users'];

var login = msg.login;
var commitSha = msg.commitSha;
var repos = msg.repos;

msg.headers = {
    "user-agent": "node-red",
    "accept": "application/vnd.github.she-hulk-preview+json"
};

msg.url = "https://api.github.com:443/repos/"+repos+
          "/statuses/"+commitSha+
          "?access_token=XXX";

msg.payload = {
    state: "failure",
    description: "No CLA on record",
    context: "node-red-gitbot/cla",
    target_url: "https://github.com/node-red/node-red/blob/master/CONTRIBUTING.md#contributor-license-aggreement"
};

if (approved.indexOf(login) > -1) {
    msg.payload.state = "success";
    msg.payload.description = "CLA check passed";
}
return msg;

nr-gh-flow-3

And that’s all it takes.

One of the things I like about Node-RED is the way it forces you to break a problem down into discrete logical steps. It isn’t suitable for everything, but when you want to produce event-driven logic that interacts with on-line services, it makes life easy.

You want to be alerted when a pull-request has been raised by someone without a CLA? Drag on a Twitter node and you’re there.

There are some improvements to be made.

Currently the node has the list of users who have completed the CLA hard-coded in – this is okay for now, but will need to be pulled out into a separate lookup in the future.
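One possible shape for that lookup – purely a sketch of one option, not something in the flow today – is to have an Inject node and an HTTP Request node fetch the list from wherever it is maintained, and cache it in the function node’s global context. The cla-users.json name is hypothetical:

// Cache the CLA list (hypothetical function node, fed by an Inject
// node wired through an HTTP Request node that fetches cla-users.json)
context.global.claUsers = JSON.parse(msg.payload);
return null;

The hard-coded array in the ‘Update CLA Status’ node would then become:

var approved = context.global.claUsers || [];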

It also assumes the PR only contains commits by the user who raised it – if a team has collaborated on the PR, this won’t spot that. That’s something I’ll have to keep an eye on, refining the flow if it becomes a real problem.

Some of the flow can already be simplified since we added template support for the HTTP Request node’s URL property.
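For example, assuming the message properties set earlier in the flow, the node’s URL field could be set to a mustache template along these lines, removing the need to build msg.url in a function node:

https://api.github.com/repos/{{{repos}}}/statuses/{{{commitSha}}}

The triple braces stop the / in the repository name being escaped.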

The final hiccup with this whole idea is the way GitHub currently presents commit statuses on the pull-request page. They only show the most recent status update, not the combined status. If the CLA check updates first with a failure, then the Travis build passes a couple of minutes later, the PR will go green thanks to the Travis success. The only way to spot that the Combined Status is currently ‘failure’ is to dive into the API. I’ve written a little script to give me a summary of all PRs against the repositories, which will do for now.
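The script isn’t anything clever; at its heart it just hits the combined status endpoint for each PR’s head commit. A rough sketch of the idea – the repository and commit SHA are placeholders, the token is elided as before, and error handling is left out:

// Check the combined status of a commit
var https = require('https');

var repos = "node-red/node-red";        // placeholder
var commitSha = "HEAD_SHA_OF_THE_PR";   // placeholder

https.get({
    hostname: "api.github.com",
    path: "/repos/"+repos+"/commits/"+commitSha+"/status?access_token=XXX",
    headers: {
        "user-agent": "node-red",
        "accept": "application/vnd.github.she-hulk-preview+json"
    }
}, function(res) {
    var body = "";
    res.on('data', function(chunk) { body += chunk; });
    res.on('end', function() {
        // state is the combined result: 'success', 'failure' or 'pending'
        console.log(JSON.parse(body).state);
    });
});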

Given the Combined Status API was only added as a preview last month, hopefully once it gets added to the stable API they will update how the status is presented to use it.

A final note: I’ve chosen not to share the full flow JSON here – hopefully there’s enough above to let you recreate it, but if you really want it, ping me.

26 Jan 2014

I’m a big fan of the Ghost blogging platform. Having heard Hannah Wolfe talk about it a couple of times, I was keen to try it out. In fact I purposefully held off setting up a blog for Node-RED until Ghost was available.

It was painless to set up and get running on my WebFaction-hosted server – even though at that point WebFaction hadn’t created their one-click installer for it.

Ghost released their second stable version a couple of weeks ago, which added, amongst many other new features, the ability for a post’s URL to contain its date. This was a feature I hadn’t realised I was missing until I wrote a post on the blog called ‘Community News’. The intent was that this would be a commonly used blog title as we post things we find people doing with Node-RED. However, without a date in the URL, the posts would end up at .../community-news-1, .../community-news-2 and so on. Perfectly functional, but not to my taste.

So I clicked to enable dated permalinks on the blog, and promptly found I had broken all of the existing post URLs. I had half-hoped that, on enabling the option, any already-published posts would not be affected – but that wasn’t the case; they all changed.

To fix this, I needed a way to redirect anyone going to one of the old URLs to the corresponding new one. I don’t have access to the proxy running in front of my web host, so that wasn’t an option.

Instead, I added a bit of code to my installation of Ghost that deals with it.

In the top level directory of Ghost is a file called index.js. This is what gets invoked to run the platform and is itself a very simple file:

var ghost = require('./core');

ghost();

What this doesn’t reveal is that the call to ghost() accepts as an argument an instance of Express, the web application framework that Ghost uses. This allows you to pass in an instance of Express that you’ve tweaked for your own needs – such as one that knows about the old URLs and what they should redirect to:

var ghost = require('./core');
var express = require("express");

var app = express();

ghost(app);

var redirects = express();

var urlMap = [
   {from:'/version-0-2-0-released/', to:'/2013/10/16/version-0-2-0-released/'},
   {from:'/internet-of-things-messaging-hangout/', to:'/2013/10/21/internet-of-things-messaging-hangout/'},
   {from:'/version-0-3-0-released/', to:'/2013/10/31/version-0-3-0-released/'},
   {from:'/version-0-4-0-released/', to:'/2013/11/14/version-0-4-0-released/'},
   {from:'/version-0-5-0-released/', to:'/2013/12/21/version-0-5-0-released/'},
   {from:'/community-news-14012014/', to:'/2014/01/22/community-news/'}
];

// Register a redirect for each old URL. Using forEach gives each
// callback its own copy of the mapping - with a plain for loop and
// var, every callback would share, and redirect to, the last entry.
urlMap.forEach(function(mapping) {
   redirects.all(mapping.from, function(req,res) {
      res.redirect(301, mapping.to);
   });
});

app.use(redirects);

It’s all a bit hard-coded and I don’t know how compatible this will be with future releases of Ghost, but at least it keeps the old URLs alive. After all, cool URIs don’t change.

5 Dec 2013

I spoke this week at the ThingMonk conference. Unlike other talks I’ve given, I actually wrote this one down, rather than my normal approach of throwing some slides together at the last minute.

That has the added benefit of giving me a coherent(ish) written version I can post here. Below is the talk as I roughly planned it, albeit certainly not a word-for-word record of what I actually said.


node-red-thingmonk-slide-00

node-red-thingmonk-slide-01

Andy Stanford-Clark, inventor of MQTT and sat here with us today, lives on the south coast of the Isle of Wight.

The nature of his job means he has to head to London for meetings fairly regularly. Getting to London from the south coast of the Isle of Wight in time for a morning meeting is no mean feat. To catch the 7.30 train from Southampton, he has to catch the 6.45am ferry from Cowes, which means leaving home at 6am.

On one particular morning, he left home in the early morning sunshine, only to arrive at the ferry port in a thick blanket of fog. And the ferries frequently get delayed in the fog.

Helpfully, there was a phone line you could call to find out if the ferries were sailing – an answering machine message telling you the state of play. Unfortunately, the guy who updates the message doesn’t arrive until 9, so it still said the ferries were running fine.

Sat in his car, with the extra hour in bed he could have had on his mind, Andy did what he often does and started thinking.

Surely there must be some way to know if the ferries are sailing.

node-red-thingmonk-slide-02

Firing up his laptop and plugging in his 3G modem, he soon found this site – which shows, in real time, all of the boats sailing in the Solent. This is based on the AIS radio transponders they all carry that broadcast their GPS position and identification.

Being a frequent user of the Isle of Wight ferries, Andy was soon able to pick out all of the ferries he used and started scraping their positions.

A few lines of Perl later – even great minds have their flaws – and he had created a geofence – a virtual boundary – around the Cowes and Southampton ferry terminals. So whenever a ferry arrived or left a terminal, he could get a notification.

node-red-thingmonk-slide-03

The obvious next step was to give each of the ferries their own Twitter account – allowing the ferries to tweet what they were doing.

So a quick glance at his twitter stream would tell him if the ferries were sailing as expected – and give him back his hour in bed if they weren’t.

As is often the case with innovation, Andy, sat there in the fog-bound car park in Cowes, solved a problem personal to him and, in doing so, created a Thing of use to a wider audience. So much so that, a few months later, looking at the Red Funnel website to check times for a friend, he spotted a new section of the page listing the live arrival and departure times of each ferry.

node-red-thingmonk-slide-04

Curious, he clicked through and it took him to the Twitter accounts he had set up – someone at Red Funnel had found what he’d done and, rather than get him to shut it down, integrated it into their own site.

Now, Andy being Andy, and it being April 1st when he spotted this, he logged into one of the accounts and sent a ferry to land-locked Milton Keynes.

node-red-thingmonk-slide-05

A short while, and a couple of phone conversations, later, Red Funnel formally adopted the system; ‘get into Social Media’ had been on their todo list for ages, but they hadn’t figured out what to do.


node-red-thingmonk-slide-06

node-red-thingmonk-slide-07

The Internet of Things is not a single amorphous blob that you can stick a label on. Everyone has their own definition or angle on the subject.

What I find fascinating is when an individual is able to bring together different sources of data or events and combine them in some new way. To create a new system or emergent behaviour that wasn’t part of the original intent.

The AIS transponders are there for maritime safety, but now, indirectly, allow Red Funnel to provide a service they didn’t know they could.


node-red-thingmonk-slide-08

As a long-time Linux user, I have a physical fight with every projector I have to plug into my laptop. It also means I have a tendency to see any new connected device as something of a challenge: will it work with my laptop, or does it depend on an iPhone app to do anything interesting?

From a consumer point of view, the Apple Store iPhone Accessories category is full of high-end, beautifully designed connected devices. And that’s how IoT is beginning to enter many people’s lives on a more conscious level, albeit at the premium that the Apple experience elicits.

But how open are any of these devices to being used beyond the purposes they were designed for?

node-red-thingmonk-slide-09

Take, for example, as I’m sure will have been invoked already, the Nest thermostat and their recently announced Smoke & CO detector – taking one of the more mundane, invisible pieces of electronics in the home and making it a fully connected, intelligent device. On one side, I’m fascinated by what it takes to produce appealing connected devices. On the other, I want to know how much I can tinker with it.

So far, the Nest is a complete black box – albeit a round, shiny black box. But they have announced a developer API to come in the New Year.

node-red-thingmonk-slide-10

By comparison, there are the WeMo sockets – the remote-controllable sockets made by Belkin. They don’t have a formal API, but the developer community has found ways in and there are open-source libraries out there already for them.

node-red-thingmonk-slide-11

But of course, Belkin have announced that, for the robustness and security of their system, they may have to ‘secure’ the open protocols that are currently being used.


node-red-thingmonk-slide-12

The common theme here is how you can empower the individual developer to do something interesting. Open APIs and standards-based protocols – something I’m sure Ian will talk about later today.

One of the ways that openness manifests itself is within the Node.js ecosystem.

node-red-thingmonk-slide-13

One of the many strengths of Node.js, the server-side JavaScript environment, is NPM – a repository of almost 50,000 modules created by developers around the world to add all sorts of functionality to the environment.

node-red-thingmonk-slide-14

A quick search in the repository finds at least 3 different modules written to control WeMo sockets; there’s even one for the Nest that can query basic information from the thermostat.

Whatever the device, as time passes, the probability of there being a relevant module within NPM approaches 1.

This can pose its own challenges – there is no arbiter of quality for getting a module into the repository; of those three WeMo modules, two are still at version 0.0.2 and appear to have been orphaned – so you do have to take some care picking the right modules to use.


node-red-thingmonk-slide-15

The days of hardware hacking being the preserve of the highly skilled have long since gone.

The Arduino, mbed and BeagleBone Black all exist to make it easier for anyone to start wiring things together; to cross the physical/digital divide and make Things.

When you look at the Nest as the current pinnacle of connected thermostats, you also have to consider things like this:

node-red-thingmonk-slide-16

Built an ‘annoying home thermometer’ that plays Arduino ‘Twinkle Twinkle Little Star’ example on loop while room temp is above 25 degrees.
@danlockton – https://twitter.com/danlockton/status/392422819297894400

It’s that freedom to play, to create something that joins A & B, to experiment with what works for you as an individual.

node-red-thingmonk-slide-17

As part of the Homesense research project, Russell Davies, along with Tom Taylor who spoke earlier, created this bike map. Centred on his flat in London, it has LEDs at each of the local Boris bike stations. Whenever there are more than 5 bikes available at a particular station, the light comes on. It means when he leaves in the morning, with a simple glance he knows whether to turn left or right to find the nearest available bike. There’s no screen here, no mobile app, no intrusive notification. A quick glance and he’s on his way.

The ability for individuals to create things for themselves; to solve problems personal to them.

node-red-thingmonk-slide-18

This is Bubblino – Adrian McEwen’s connected bubble machine that spews out bubbles when people mention it on Twitter. Silly, delightful, playful.

node-red-thingmonk-slide-19

A couple of years ago, Andy Huntington called this stage of IoT the GeoCities of Things. GeoCities being that early space on the internet where many people cut their teeth creating X-Files fan pages or unwitting tributes to the creator of the animated under-construction gif.

It was a space where you could create webpages without really knowing what you were doing. Clay Shirky famously thought sites like GeoCities would never take off, but later came to realise that:

node-red-thingmonk-slide-20

Creating something personal, even of moderate quality, has a different kind of appeal than consuming something made by others, even something of high quality.

GeoCities allowed people to play, to experiment. To create sites that were beautiful in their eyes – if not anyone else’s. It’s where a generation learnt the art, or otherwise, of web design.

And with IoT today, there is an explosion in platforms, products and protocols that help solve some of the underlying hard technical challenges and provide a space for people to play.

node-red-thingmonk-slide-21

But that explosion brings its own challenges. Each platform has its own set of APIs. Each protocol has its own set of clients and servers to learn about. Every technology that makes something easier to do brings another choice that a developer has to make.

Just like the WeMo Node.js modules I mentioned.

There is never going to be a single, one-size-fits-all solution. Nor should there be.

node-red-thingmonk-slide-22

The challenge of integration is the diversity of skills and knowledge needed to get things talking.

Take, for example, the challenge of Andy’s Twittering Ferries. It required someone with knowledge of the AIS transponders to create the site that plotted the positions for Andy to find. It required the ability to scrape the data from the page, parse out each individual ferry’s position and compare it to the geofences. It required the ability to get through Twitter’s OAuth authentication flows to get the tweet posted.

For all the security benefits brought by OAuth in replacing the exchange of usernames and passwords, it has probably been the root cause of more than its fair share of headaches as developers try to figure out which token goes where and is signed with what and when.

And that’s a common problem. As a developer, you spend more time on HOW to integrate with something than you do on WHAT you want to do with it.


At which point I introduce Node-RED, and demo what can be done with it.

26 Sep 2013

Node-RED is the visual tool for wiring the Internet of Things that I have been working on through the year. What started as an educational exercise to get to grips with technologies such as Node.js and d3, soon turned into a tool that provided real value in client engagements.

One of our goals all along was to open-source the tool; to build a community of users and contributors. Following a whirlwind pass through the appropriate internal process, we got all the approvals we needed at the start of September and have released Node-RED as an Apache v2 licensed open-source project on GitHub.

You can find us here: nodered.org.

Following on from that, I have had the opportunity to speak at a couple of external events.

First up was the Wuthering Bytes technology festival in sunny/rainy Hebden Bridge. I did a completely unprepared, but well-received, 5-minute lightning talk at the end of the first day, which generated a lot of interest. On the second day, there was a series of workshops, one of which was based around MQTT on Arduino – using my PubSubClient library. The workshop was organised by a contact of mine from the IoTLondon community, who was happy to see me there to help out. We ended up using Node-RED as well in the workshop, as it makes it easy to visualise topics and move messages around. It was a great feeling to glance over the room and see people I’d never met before using Node-RED.

The second event was last night, at the monthly London Node.js User Group (LNUG) meetup. This time I had a 30-minute slot to fill, so went prepared with a presentation and demo. Again, it was well received and generated lots of good comments on Twitter and afterwards in person. The talk was recorded and should be embedded below – it starts at 29 mins.

We’ve got a couple more talks lined up so far. Next up is the November meetup of IoTLondon, exact date tbc, and then ThingMonk in December.