11 Aug 2017

TJBot is an open source DIY kit from IBM Watson to create a Raspberry Pi powered robot backed by the IBM Watson cognitive services. First published late last year, the project provides all the design files needed to create the body of the robot as either laser-cut cardboard or 3D printed parts.

It includes space for an RGB LED on top of its head, an arm driven by a servo that can wave, a microphone for capturing voice commands, a speaker so it can talk back and a camera to capture images.

There is an ever-growing set of tutorials and how-tos for building the bot and getting it hooked up to the IBM Watson services. All in all, it’s a lot of fun to put together. Having played with one at Node Summit a couple of weeks ago, I finally decided to get one of my own. So this week I’ve mostly been monopolising the 3D printer in the Emerging Technologies lab.

After about 20 hours of printing, my TJBot was coming together nicely, but I knew I wanted to customise mine a bit. In part this was for me to figure out the whole process of designing something in Blender and getting it printed, but also because I wanted my TJBot to be a bit different.

First up was a second arm to give him a more balanced appearance. In order to keep it simple, I decided to make it a static arm rather than try to fit in a second servo. I took the design of the existing arm, mirrored it, removed the holes for the servo and extended it to clip onto the TJBot jaw at a nice angle.

The second customisation was a pair of glasses because, well, why not?

I designed them with a peg on their bridge which would push into a hole drilled into TJBot’s head. I also created a pair of ‘ears’ to push out from the inside of the head for the arms of the glasses to hook onto. I decided to do this rather than remodel the TJBot head piece because it was quicker to carefully drill three 6mm holes in the head than it was to wait another 12 hours for a new head to print.

In fact, as I write this, I’ve only drilled the hole between the eyes as there appears to be enough friction for the glasses to hold with just that one fixing point.

There are some other customisations I’d like to play with in the future, but this was enough for me to remember how to drive Blender without crying too much.

Why call him ‘Bleep’? Because that’s what happens when you let your 3- and 7-year-old kids come up with a name.

I’ve created a repository on GitHub with the designs of the custom parts and where I’ll share any useful code and how-tos I come up with.

18 Apr 2014

Having recently added a Travis build for the Node-RED repository, we get a nice green, or sometimes nasty red, merge button on pull-requests, along with a comment from Travis thanks to GitHub’s Commit Status API.

From a process point of view, the other thing we require before merging a pull-request is to ensure the submitter has completed a Contributor License Agreement, or CLA. This has been a manual check against a list we maintain, but with GitHub’s recent addition of the Combined Status API I decided we ought to automate this check.

And of course, why not implement it in a Node-RED flow.

There are a few simple steps to take:

  1. know when a pull-request has been submitted, or when we want to manually trigger a check
  2. check the submitter against the list
  3. update the status accordingly

Rather than poll the API to see when a new pull-request has arrived, GitHub allows you to register a webhook to get an HTTP POST when certain events occur. For this to work, we need an HTTP In node to accept the request. As with any request, we need to make sure it is responded to properly, as well as keeping a log – handy for debugging as you go.

[flow screenshot: nr-gh-flow-1]
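
For illustration, the acknowledgement side needs little more than a Function node between the HTTP In and HTTP Response nodes that gives the response a body to send back – something along these lines (a sketch, not necessarily the exact contents of the flow above):

// note the delivery id so the debug log has something useful in it,
// then give the HTTP Response node a body to send back to GitHub
msg.deliveryId = msg.req.get('X-GitHub-Delivery');
msg.payload = "OK";
return msg;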

With that in place, we can register the webhook on the repositories we want to run the checks on.

  1. From the repository page, find your way to the webhook page via Settings ➝ WebHooks & Services ➝ Add webhook
  2. Enter the URL of the HTTP In node
  3. Select ‘Let me select individual events.’ and then pick, as a minimum, ‘Pull Request’ and ‘Issue comment’
  4. Ensure the ‘Active’ option is ticked and add the webhook

When the POST arrives from GitHub, the X-GitHub-Event HTTP header identifies the type of event that triggered the hook. As we’ve got a couple of different event types registered for the hook, we can use a Switch node to route the flow depending on what happened. We also need a Function node to access the HTTP header, as the Switch node can’t do that by itself.

// GitHub Event Type
msg.eventType = msg.req.get('X-GitHub-Event');
return msg;

[flow screenshot: nr-gh-flow-2]

Skipping ahead a bit, updating the status of the commit requires an HTTP POST to the GitHub API. This is easily achieved with an HTTP Request node, but it does need to be configured to point at the right pull-request. Pulling that information out of the ‘pull-request’ webhook event is easy, as it’s all in there.

// PR Opened
msg.login = msg.payload.pull_request.user.login;
msg.commitSha = msg.payload.pull_request.head.sha;
msg.repos = msg.payload.pull_request.base.repo.full_name;
return msg;

It’s a bit more involved with the ‘issue-comment’ event, which fires for every comment made in the repository. We only want to process comments against pull-requests that contain the magic text that triggers a CLA check.

Even after identifying the comments we’re interested in, there’s more work to be done, as they don’t contain the information needed to update the status. Before we can do that, we must first make another HTTP request to the GitHub API to get the pull-request details.

// Filter PR Comments
msg.repos = msg.payload.repository.full_name;
if (msg.payload.issue.pull_request &&
       msg.payload.comment.body.match(/node-red-gitbot/i)) {
    msg.url = "https://api.github.com:443/repos/"+msg.repos+
              "/pulls/"+msg.payload.issue.number+
              "?access_token=XXX";
    msg.headers = {
        "user-agent": "node-red",
	    "accept": "application/vnd.github.v3"
    };
    return msg;
}
return null;

Taking the result of the request, we can extract the information needed.

// Extract PR details
var payload = JSON.parse(msg.payload);
msg.login = payload.user.login;
msg.commitSha = payload.head.sha;
msg.repos = payload.base.repo.full_name;
return msg;

[flow screenshot: nr-gh-flow-4]

Now, regardless of which event has been triggered, we have everything we need to check the CLA status: if msg.login is in the list of users who have completed a CLA, send a ‘success’ status, otherwise send a ‘failure’.

// Update CLA Status
var approved = ['list','of','users'];

var login = msg.login;
var commitSha = msg.commitSha;
var repos = msg.repos;

msg.headers = {
    "user-agent": "node-red",
    "accept": "application/vnd.github.she-hulk-preview+json"
};

msg.url = "https://api.github.com:443/repos/"+repos+
          "/statuses/"+commitSha+
          "?access_token=XXX";

msg.payload = {
    state:"failure",
    description:"No CLA on record",
    context:"node-red-gitbot/cla",
    target_url:"https://github.com/node-red/node-red/blob/master/CONTRIBUTING.md#contributor-license-aggreement"
};

if (approved.indexOf(login) > -1) {
    msg.payload.state = "success";
    msg.payload.description = "CLA check passed";
}
return msg;

[flow screenshot: nr-gh-flow-3]

And that’s all it takes.

One of the things I like about Node-RED is the way it forces you to break a problem down into discrete logical steps. It isn’t suitable for everything, but when you want to produce event-driven logic that interacts with online services it makes life easy.

You want to be alerted that a pull-request has been raised by someone without a CLA? Drag on a Twitter node and you’re there.

There are some improvements to be made.

Currently the node has the list of users who have completed the CLA hard-coded in – this is okay for now, but will need to be pulled out into a separate lookup in the future.

It also assumes the PR only contains commits by the user who raised it – if a team has collaborated on the PR, this won’t spot that. That’s something I’ll have to keep an eye on, refining the flow if it becomes a real problem.

Some of the flow can already be simplified since we added template support for the HTTP Request node’s URL property.
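
For example, the URL field of the HTTP Request node can now reference message properties directly with mustache syntax, so the Function node no longer needs to build the URL by hand. Something along these lines (illustrative only – not what the flow above uses):

https://api.github.com:443/repos/{{{repos}}}/pulls/{{{payload.issue.number}}}?access_token=XXX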

The final hiccup with this whole idea is the way GitHub currently present commit statuses on the pull-request page. They only show the most recent status update, not the combined status. If the CLA check updates first with a failure, then the Travis build passes a couple of minutes later, the PR will go green thanks to the Travis success. The only way to spot that the combined status is actually ‘failure’ is to dive into the API. I’ve written a little script to give me a summary of all PRs against the repositories, which will do for now.
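
I haven’t included that script here, but a rough sketch of the idea – list the open pull-requests, then fetch the combined status of each head commit – might look like this (not the script I actually use; the repository list and token are placeholders):

// rough sketch: summarise the combined status of each open pull-request
var https = require("https");

var token = "XXX";                        // personal access token
var repositories = ["node-red/node-red"]; // repositories to check

function getJSON(path, callback) {
    https.get({
        host: "api.github.com",
        path: path + "?access_token=" + token,
        headers: {
            "user-agent": "node-red",
            "accept": "application/vnd.github.she-hulk-preview+json"
        }
    }, function(res) {
        var body = "";
        res.on("data", function(chunk) { body += chunk; });
        res.on("end", function() { callback(JSON.parse(body)); });
    });
}

repositories.forEach(function(repo) {
    // list the open pull-requests, then fetch the combined status of each head commit
    getJSON("/repos/" + repo + "/pulls", function(prs) {
        prs.forEach(function(pr) {
            getJSON("/repos/" + repo + "/commits/" + pr.head.sha + "/status", function(status) {
                console.log(repo + " #" + pr.number + " (" + pr.user.login + "): " + status.state);
            });
        });
    });
});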

Given the Combined Status API was only added as a preview last month, hopefully once it gets added to the stable API they will update how the status is presented to use it.

A final note, I’ve chosen not to share the full flow json here – hopefully there’s enough above to let you recreate it, but if you really want it, ping me.

26 Jan 2014

I’m a big fan of the Ghost blogging platform. Having heard Hannah Wolfe talk about it a couple of times, I was keen to try it out. In fact I purposefully held off setting up a blog for Node-RED until Ghost was available.

It was painless to set up and get running on my WebFaction hosted server – even though at that point WebFaction hadn’t created their one-click installer for it.

Ghost released their second stable version a couple of weeks ago, which added, amongst many other new features, the ability for a post’s url to contain its date. This was a feature I hadn’t realised I was missing until I wrote a post on the blog called ‘Community News’. The intent was for this to become a commonly used title as we post things we find people doing with Node-RED. However, without a date in the url, the posts would end up at .../community-news-1, .../community-news-2 and so on. Perfectly functional, but not to my taste.

So I clicked to enable dated permalinks on the blog, and promptly found I had broken all of the existing post urls. I had half-hoped that on enabling the option, any already-published posts would not be affected – but that wasn’t the case; they all changed.

To fix this, I needed a way to redirect anyone going to one of the old urls to the corresponding new one. I don’t have access to the proxy running in front of my web host, so that wasn’t an option.

Instead, I added a bit of code to my installation of Ghost that deals with it.

In the top level directory of Ghost is a file called index.js. This is what gets invoked to run the platform and is itself a very simple file:

var ghost = require('./core');

ghost();

What this doesn’t reveal is that the call to ghost() accepts, as an argument, an instance of Express – the web application framework that Ghost uses. This allows you to pass in an instance of Express that you’ve tweaked for your own needs – such as one that knows about the old urls and what they should redirect to:

var ghost = require('./core');
var express = require("express");

var app = express();

ghost(app);

var redirects = express();

var urlMap = [
   {from:'/version-0-2-0-released/', to:'/2013/10/16/version-0-2-0-released/'},
   {from:'/internet-of-things-messaging-hangout/', to:'/2013/10/21/internet-of-things-messaging-hangout/'},
   {from:'/version-0-3-0-released/', to:'/2013/10/31/version-0-3-0-released/'},
   {from:'/version-0-4-0-released/', to:'/2013/11/14/version-0-4-0-released/'},
   {from:'/version-0-5-0-released/', to:'/2013/12/21/version-0-5-0-released/'},
   {from:'/community-news-14012014/', to:'/2014/01/22/community-news/'}
];

// register a redirect for each mapping; forEach gives each handler its
// own 'mapping', avoiding the shared loop-variable problem with var
urlMap.forEach(function(mapping) {
   redirects.all(mapping.from, function(req,res) {
      res.redirect(301, mapping.to);
   });
});

app.use(redirects);

It’s all a bit hard-coded and I don’t know how compatible this will be with future releases of Ghost, but at least it keeps the old urls alive. After all, cool uris don’t change.

26 Sep 2013

Node-RED is the visual tool for wiring the Internet of Things that I have been working on through the year. What started as an educational exercise to get to grips with technologies such as Node.js and d3, soon turned into a tool that provided real value in client engagements.

One of our goals all along was to open-source the tool; to build a community of users and contributors. Following a whirlwind pass through the appropriate internal process, we got all the approvals we needed at the start of September and have released Node-RED as an Apache v2 licensed open source project on GitHub.

You can find us here: nodered.org.

Following on from that, I have had the opportunity to speak at a couple of external events.

First up was the Wuthering Bytes technology festival in sunny/rainy Hebden Bridge. I did a completely unprepared, but well received, five-minute lightning talk at the end of the first day which generated a lot of interest. On the second day, there was a series of workshops, one of which was based around MQTT on Arduino – using my PubSubClient library. The workshop was organised by a contact of mine from the IoTLondon community, who was happy to see me there to help out. We ended up using Node-RED in the workshop as well, as it makes it easy to visualise topics and move messages around. It was a great feeling to glance over the room and see people I’d never met before using Node-RED.

The second event was last night, at the monthly London Node.js User Group (LNUG) Meetup. This time I had a 30-minute slot to fill, so went prepared with a presentation and demo. Again, it was well received and generated lots of good comments on Twitter and afterwards in person. The talk was recorded and should be embedded below – it starts at 29 minutes.

We’ve got a couple more talks lined up so far. Next up is the November meetup of IoTLondon, exact date tbc, and then ThingMonk in December.

23 Feb 2012

Another post in the series about things I’ve done with this site rather than write actual content.

I’ve been thinking about how to make things more friendly to mobile browsers around here. The easy option was to just install one of the established plugins, such as WPTouch, to do all the heavy lifting. Which is exactly what I did a while ago, but I was never completely happy with the result. I didn’t like the idea of having to create a mobile-specific theme, but without one the site would become yet another blog using WPTouch’s default theme.

Going back to basics, there were essentially three things that needed looking at: layout, navigation and bandwidth.

Here’s what the site looks like today:

A post’s meta-information floats over on the left, outside the column of the page. That’s fine if you’ve got the screen width to accommodate it, but on a mobile screen, it pushes the real content off the screen to the right.

To fix that, CSS media queries come to the rescue. They allow you to specify CSS rules that should only be applied if certain conditions are met. In this instance, the conditions relate to the width of the screen displaying the page.

Here’s an excerpt from the stylesheet used to move the meta-information in-line with the main content:

@media screen and (max-width: 1000px) {
   .meta {
      background: #efefef;
      float: none;
      width: 780px;
      margin-top: 0px;
      margin-left: 10px;
      border-bottom: 1px solid #999;
      padding: 3px;
   }
}

Which results in something like this:

There are a number of other tweaks in the CSS to adapt the main column width depending on the screen size. For example, the footer content that is normally presented in three columns drops to two columns below a certain width, and down to a single column at the smallest (i.e. mobile) size.

The nice thing with this technique is that it isn’t specific to mobile; try resizing your desktop browser window to see it in action.

The next thing to tackle was navigation – in particular, keeping the links at the top accessible. A little bit of CSS and theme tweaking later, the site now has a static mini-header at the top that reveals itself as you scroll down the page. It is a simple effect, but one I find quite pleasing.
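
The gist of it (a rough sketch, not the theme’s actual code – the element id and scroll threshold are made up) is a fixed-position bar that gets shown once the page has scrolled past the full-size header:

// reveal the fixed mini-header once the page has scrolled past the
// full-size header; 'mini-header' and the 150px threshold are placeholders
window.addEventListener('scroll', function() {
   var miniHeader = document.getElementById('mini-header');
   if (window.pageYOffset > 150) {
      miniHeader.className = 'visible';
   } else {
      miniHeader.className = '';
   }
});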

All of the things described so far are handled in the browser. But some work was needed on the server side to reduce the size of the page in terms of bandwidth. The first step was to detect whether the browser accessing the page is a mobile one – easily done by checking the User-Agent header for the tell-tale signs. The only slight difficulty is determining what a mobile user-agent looks like. Of course there are a number of open-source plugins for doing this sort of thing, but for now I’ve rolled my own. It won’t detect every mobile browser under the sun, but it’ll hit the vast majority of them.

With that done, there are two things this site will do if it finds it is being served to a mobile. First, the ‘recent things’ column in the footer is omitted. This column comes from FriendFeed and likely contains a lot of images from Flickr, YouTube and Vimeo – depending on what I’ve been up to. This has the added benefit of reducing the length of the page, which gets quite excessive once the footer’s three columns are stacked up.

The second is a filter applied to the main content of the page that replaces any Flickr-sourced image with a smaller version. This is possible thanks to the predictable nature of Flickr URLs.

I normally use the 500px wide images, which, as per the flickr docs, have urls of the form:

http://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}.jpg

which just needs an _m adding after the {secret} to get the 240px version.
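
For example, with made-up ids, a url like:

http://farm1.staticflickr.com/2/12345_abcdef.jpg

becomes:

http://farm1.staticflickr.com/2/12345_abcdef_m.jpg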

Here’s the function that does just that:

function wp_knolleary_shrink_flickr_images($content) {
   if (wp_knolleary_is_mobile()) {
      return preg_replace('/(static\.?flickr.com\/.*?\/[^_]*?_[^_]*?)(\.jpg)/',"$1_m$2",$content);
   } else {
      return $content;
   }
}
add_filter( 'the_content', 'wp_knolleary_shrink_flickr_images');

Admittedly, I wrote that regex late last night, so it could probably be simplified. But it works.

And that’s where I’ve got to so far. I’m sure there is a lot more I could do on this topic – any feedback would be welcome. I find it much more satisfying to figure these things out, rather than just activate a plugin that does it all for me.

On a related note, I’ve put the theme and plugin that this site uses onto github. More as a backup than in expectation of anyone forking it.