26 Jan 2014

I’m a big fan of the Ghost blogging platform. Having heard Hannah Wolfe talk about it a couple of times, I was keen to try it out. In fact I purposefully held off setting up a blog for Node-RED until Ghost was available.

It was painless to set up and get running on my WebFaction hosted server – even though at that point WebFaction hadn’t created their one-click installer for it.

Ghost released their second stable version a couple of weeks ago, which added, amongst many other new features, the ability for a post’s url to contain its date. This was a feature I hadn’t realised I was missing until I wrote a post on the blog called ‘Community News’. The intent was that this would become a regularly used title as we post things we find people doing with Node-RED. However, without a date in the url, those posts would end up at .../community-news-1, .../community-news-2 and so on. Perfectly functional, but not to my taste.

So I clicked to enable dated permalinks on the blog, and promptly found I had broken all of the existing post urls. I had half-hoped that enabling the option would leave already-published posts unaffected – but that wasn’t the case; they all changed.

To fix this, I needed a way to redirect anyone going to one of the old urls to the corresponding new one. I don’t have access to the proxy running in front of my web host, so that wasn’t an option.

Instead, I added a bit of code to my installation of Ghost that deals with it.

In the top level directory of Ghost is a file called index.js. This is what gets invoked to run the platform and is itself a very simple file:

var ghost = require('./core');

ghost();

What this doesn’t reveal is that ghost() accepts, as an argument, an instance of Express – the web application framework that Ghost uses. This allows you to pass in an instance you’ve tweaked for your own needs, such as one that knows about the old urls and what they should redirect to:

var ghost = require('./core');
var express = require("express");

var app = express();

ghost(app);

var redirects = express();

var urlMap = [
   {from:'/version-0-2-0-released/', to:'/2013/10/16/version-0-2-0-released/'},
   {from:'/internet-of-things-messaging-hangout/', to:'/2013/10/21/internet-of-things-messaging-hangout/'},
   {from:'/version-0-3-0-released/', to:'/2013/10/31/version-0-3-0-released/'},
   {from:'/version-0-4-0-released/', to:'/2013/11/14/version-0-4-0-released/'},
   {from:'/version-0-5-0-released/', to:'/2013/12/21/version-0-5-0-released/'},
   {from:'/community-news-14012014/', to:'/2014/01/22/community-news/'}
];

// register a 301 redirect for each old url
urlMap.forEach(function(entry) {
   redirects.all(entry.from, function(req,res) {
      res.redirect(301, entry.to);
   });
});

app.use(redirects);

It’s all a bit hard-coded and I don’t know how compatible this will be with future releases of Ghost, but at least it keeps the old urls alive. After all, cool uris don’t change.

26 Sep 2013

Node-RED is the visual tool for wiring the Internet of Things that I have been working on through the year. What started as an educational exercise to get to grips with technologies such as Node.js and d3 soon turned into a tool that provided real value in client engagements.

One of our goals all along was to open-source the tool; to build a community of users and contributors. Following a whirlwind pass through the appropriate internal process, we got all the approvals we needed at the start of September and have released Node-RED as an Apache v2 licensed open source project on github.

You can find us here: nodered.org.

Following on from that, I have had the opportunity to speak at a couple of external events.

First up was the Wuthering Bytes technology festival in sunny/rainy Hebden Bridge. I did a completely unprepared, but well received, 5-minute lightning talk at the end of the first day, which generated a lot of interest. On the second day, there was a series of workshops, one of which was based around MQTT on Arduino – using my PubSubClient library. The workshop was organised by a contact of mine from the IoTLondon community, who was happy to see me there to help out. We ended up using Node-RED in the workshop as well, as it makes it easy to visualise topics and move messages around. It was a great feeling to glance around the room and see people I’d never met before using Node-RED.

The second event was last night, at the monthly London Node.js User Group (LNUG) Meetup. This time I had a 30-minute slot to fill, so I went prepared with a presentation and demo. Again, it was well received and generated lots of good comments on twitter and afterwards in person. The talk was recorded and should be embedded below – it starts at 29 minutes in.

We’ve got a couple more talks lined up so far. Next up is the November meetup of IoTLondon, exact date tbc, and then ThingMonk in December.

23 Feb 2012

Another post in the series about things I’ve done with this site rather than writing actual content for it.

I’ve been thinking about how to make things around here more friendly to mobile browsers. The easy option was to just install one of the established plugins, such as WPTouch, to do all the heavy lifting – which is exactly what I did a while ago, but I was never completely happy with the result. I didn’t like the idea of having to create a mobile-specific theme, but without one the site would become yet another blog using WPTouch’s default theme.

Going back to basics, there were essentially three things that needed looking at: layout, navigation and bandwidth.

Here’s what the site looks like today:

A post’s meta-information floats over on the left, outside the column of the page. That’s fine if you’ve got the screen width to accommodate it, but on a mobile screen, it pushes the real content off the screen to the right.

To fix that, CSS media queries come to the rescue. They allow you to specify CSS rules that should only be applied if certain conditions are met. In this instance, the conditions relate to the width of the screen displaying the page.

Here’s an excerpt from the stylesheet used to move the meta-information in-line with the main content:

@media screen and (max-width: 1000px) {
   .meta {
      background: #efefef;
      float: none;
      width: 780px;
      margin-top: 0px;
      margin-left: 10px;
      border-bottom: 1px solid #999;
      padding: 3px;
   }
}

Which results in something like this:

There are a number of other tweaks in the CSS to adapt the main column width depending on the screen size. For example, the footer content that is normally presented in three columns drops to two columns below a certain width and down to a single column at the smallest (ie mobile) size.

The nice thing with this technique is that it isn’t specific to mobile; try resizing your desktop browser window to see it in action.

The next thing to tackle was navigation – in particular, keeping the links at the top accessible. A little bit of CSS and theme tweaking later, the site now has a static mini-header at the top that reveals itself as you scroll down the page. It is a simple effect, but one I find quite pleasing.
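
As a rough illustration of how such a reveal-on-scroll header can be driven (this is a sketch rather than the code in this theme, and the element id, class name and header height are all made up), a few lines of JavaScript can toggle a class once the page has scrolled past the full-size header:

// Illustrative sketch only: assumes a fixed-position element with the id
// 'mini-header' that is styled to stay hidden unless it has the 'visible'
// class, and a full-size header roughly 150px tall.
var MAIN_HEADER_HEIGHT = 150; // assumption - match the real header height
window.addEventListener('scroll', function() {
   var mini = document.getElementById('mini-header');
   if (window.pageYOffset > MAIN_HEADER_HEIGHT) {
      mini.className = 'visible';
   } else {
      mini.className = '';
   }
});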

All of the things described so far are handled in the browser. But some work was needed on the server side to reduce the amount of bandwidth the page uses. The first step was to detect whether the browser accessing the page is a mobile one – easily done by checking the User-Agent header for signs of a mobile browser. The only slight difficulty is determining what a mobile user-agent looks like. Of course there are a number of open-source plugins for doing this sort of thing, but for now I’ve rolled my own. It won’t detect every mobile browser under the sun, but it’ll hit the vast majority of them.

With that done, there are two things this site will do if it finds it is being served to a mobile. First, the ‘recent things’ column in the footer is omitted. This column comes from FriendFeed and likely contains a lot of images from Flickr, YouTube and Vimeo – depending on what I’ve been up to. This has the added benefit of reducing the length of the page, which gets quite excessive once the footer’s three columns are stacked up.

The second thing is a filter applied to the main content of the page to replace any Flickr-sourced image with a smaller version. This can be done thanks to the predictable nature of Flickr URLs.

I normally use the 500px wide images, which, as per the flickr docs, have urls of the form:

http://farm{farm-id}.staticflickr.com/{server-id}/{id}_{secret}.jpg

which just needs an _m adding after the {secret} to get the 240px version.
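
For example (with made-up ids), an image at:

http://farm1.staticflickr.com/1234/567890_abcdef.jpg

becomes:

http://farm1.staticflickr.com/1234/567890_abcdef_m.jpg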

Here’s the function that does just that:

function wp_knolleary_shrink_flickr_images($content) {
   // For mobile visitors, rewrite any flickr image urls in the post content
   // to their 240px '_m' variants; otherwise leave the content untouched.
   if (wp_knolleary_is_mobile()) {
      return preg_replace('/(static\.?flickr.com\/.*?\/[^_]*?_[^_]*?)(\.jpg)/',"$1_m$2",$content);
   } else {
      return $content;
   }
}
add_filter( 'the_content', 'wp_knolleary_shrink_flickr_images');

Admittedly, I wrote that regex late last night, so it could probably be simplified. But it works.

And that’s where I’ve got to so far. I’m sure there is a lot more I could do on this topic – any feedback would be welcome. I find it much more satisfying to figure these things out, rather than just activate a plugin that does it all for me.

On a related note, I’ve put the theme and plugin that this site uses onto github. More as a backup than in expectation of anyone forking it.

7 Apr 2011

I have been thinking about how MQTT could be integrated into Pachube, as a service that utilises their public API. With their Hackathon happening tomorrow, which I’m unable to attend, it felt like a good time to write up what I’ve done.

Basic Pachube concepts

The way Pachube structures data lends itself well to a topic hierarchy:

  • An Environment is a logical collection of datastreams. It can be used to represent a physical location, such as my house.
  • A Datastream represents a single sensor/device within a particular environment, such as living room temperature.
  • A Datapoint is a point-in-time value on a datastream.

Data is sent to Pachube using HTTP POST requests. It can be retrieved either by polling a datastream’s value, or by configuring a trigger. A trigger is a push notification that fires when a defined condition occurs. The notification takes the form of an HTTP POST from Pachube to the URL configured as part of the trigger, with a json payload containing the details of the notification. There are some limitations on data-rates around triggers; the minimum interval between firings of the same trigger is 5 seconds.

All access to the API is controlled using API Keys. Each request to the API must include a key, either in the X-PachubeApiKey HTTP header or as a request parameter. Each user has a master API key which gives full access to their account. Additional API keys can be created for a user with different levels of access.

The following scheme uses the API key as part of the topic strings. This relies on the broker having been configured to prevent any client from subscribing to the wild-carded topic “pb/#”. Beyond that, no further security (ie username/password, SSL etc) is assumed, but it could of course be added.

There are two components to this: an MQTT-subscribing application that bridges into Pachube, and an HTTP-listening application that bridges from Pachube triggers back into MQTT.

Publishing to Pachube

A datapoint value is published to the topic:

pb/<api key>/<environment id>/<datastream id>

which is picked up by the subscribing application and turned into an HTTP PUT of the datapoint to the Pachube API:

http://api.pachube.com/v2/feeds/<environment id>/datastreams/<datastream id>.csv

with header property:

X-PachubeApiKey: <api key>

No pre-configuration is needed here – it relies on the provided API key having the correct permission to update the datastream. There are open questions about how to handle failed updates (permission denied, service unavailable etc), as there is no mechanism to report failures back to the originating publisher.
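
As a rough sketch of what that subscribing application can look like (an illustration rather than the exact code I’m running, using the mqtt npm module and assuming the broker is on localhost):

// Illustrative sketch of the MQTT-to-Pachube bridge described above.
// Assumes the mqtt npm module and a broker running on localhost.
var mqtt = require('mqtt');
var http = require('http');

var client = mqtt.connect('mqtt://localhost');

client.on('connect', function() {
   // four-level topics are datapoint publishes; the five-level .../trigger
   // topics are handled separately
   client.subscribe('pb/+/+/+');
});

client.on('message', function(topic, payload) {
   var parts = topic.split('/');   // ['pb', api key, environment id, datastream id]
   var options = {
      host: 'api.pachube.com',
      path: '/v2/feeds/' + parts[2] + '/datastreams/' + parts[3] + '.csv',
      method: 'PUT',
      headers: { 'X-PachubeApiKey': parts[1] }
   };
   var req = http.request(options, function(res) {
      if (res.statusCode >= 400) {
         // nowhere to report this back to the original publisher - just log it
         console.log('Pachube update failed:', res.statusCode, topic);
      }
      res.resume();
   });
   req.on('error', function(err) {
      console.log('Pachube request failed:', err.message);
   });
   req.end(payload.toString());
});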

Subscribing to Pachube

Subscribing to a datastream is slightly more involved, as it requires an HTTP intermediary for Pachube to push its notifications to, which can then be forwarded on to the broker. There is also no way to automatically create the trigger when a client subscribes to a topic on the broker, so an additional configuration step is required of the client. It is also possible to configure multiple triggers for a single datastream, so we cannot use a simple one-to-one mapping of datastream to topic for the outbound flow. This causes an asymmetry between publishing and subscribing, but I think it is necessary.

A trigger is configured by publishing to the topic:

pb/<api key>/<environment id>/<datastream id>/trigger

with a payload that defines the trigger in json, for example:

{
   "topic":"lt15",
   // As defined in the pachube API: 
   "url":"http://example.com/pachubeBridge/callback",
   "threshold_value":"15.0",
   "trigger_type":"lt"
}

This causes the appropriate HTTP PUT to the Pachube API to create the trigger.

The trigger is configured with the url of the HTTP intermediary, which reacts to the HTTP POSTs sent by Pachube by publishing the value to the corresponding MQTT topic. This means clients can subscribe to:

pb/<api key>/<topic>

to receive the updates as they happen.
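
A minimal sketch of that HTTP intermediary follows (again an illustration rather than the exact code; for simplicity it assumes the callback url configured on the trigger carries the api key and target topic, whereas a real implementation could instead keep a map from trigger id to topic):

// Illustrative sketch of the HTTP-listening side of the bridge. Assumes the
// mqtt npm module, a broker on localhost, and callback urls of the form
// /pachubeBridge/callback/<api key>/<topic> (an assumption for this sketch).
var http = require('http');
var mqtt = require('mqtt');

var client = mqtt.connect('mqtt://localhost');

http.createServer(function(req, res) {
   var parts = req.url.split('/');   // ['', 'pachubeBridge', 'callback', api key, topic]
   if (req.method === 'POST' && parts.length === 5 &&
       parts[1] === 'pachubeBridge' && parts[2] === 'callback') {
      var body = '';
      req.on('data', function(chunk) { body += chunk; });
      req.on('end', function() {
         // republish the raw notification payload; subscribers can pull the
         // fields they care about out of the json
         client.publish('pb/' + parts[3] + '/' + parts[4], body);
         res.writeHead(200);
         res.end();
      });
   } else {
      res.writeHead(404);
      res.end();
   }
}).listen(8080);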

As an aside, there could be an option to make the trigger feed ‘public’ on the broker, so a user’s API key doesn’t need to be shared. This would mean the trigger would get published to something like:

pb/public/<topic>

To unsubscribe, the trigger needs to be deleted on Pachube. This is done by publishing to the same topic as was used to create the trigger, but with a payload that includes a ‘delete’ command.

The bit at the end

I have implemented pretty much all of what I describe above and it works, although there are some drawbacks with the approach:

  • The rate-limiting that Pachube applies to triggers (at least 5 seconds between firings) defines the minimum granularity for getting data out to MQTT – so potentially values could be missed.
  • Subscribing to a datastream is not as simple as subscribing to a topic.
  • No consideration is made of the additional meta-data that is provided with the HTTP requests – this is very much just about the raw data points.

In an ideal world, the MQTT interface would sit much closer to the centre of Pachube and be more of a first-class API citizen; but then, I want to MQTT-enable everything.

Unfortunately I don’t have the wherewithal to make it public-facing for others to play with at the moment, which is a shame. If you’re really interested in seeing it in action, get in touch and I’ll see what can be done.

6 Apr 2011

I often spend time tweaking the theme of this site rather than writing content for it, but I thought the latest tweak was worth a few words of mention.

I have tried to keep the theme quite clean – with all of the traditional sidebar content pushed right down at the bottom of the page. Previously, there was a simple division between the bottom content and the rest of the page; nothing more than a border-top CSS declaration:

Inspired by the new look Planet GNOME, I wanted to do something a bit more interesting down there – some sort of ragged edge, but I didn’t want to spend an age trying to draw it by hand.

Coincidentally, earlier today I was looking for an excuse to play with Processing and this seemed like just the thing. I very quickly got something producing a very regular pattern:

But that felt too boring – it needed some randomness.

These were way too severe. To emulate the grass-like effect, I moved to using curves, rather than straight lines.

After a couple of attempts where I didn’t have my Bezier control points under control, I finally hit upon something I liked:

And that’s what you’ll find at the bottom of the page. The nice thing with this approach is that I can easily tweak the script to produce variations and I don’t have to start from scratch each time.

For the record – and so I can find it when I want to tweak it – here’s the processing sketch that produced the final result:

update: stuck the code in a gist on github

void setup() {
  size(2000,50);
  background(240);      // page background grey
  smooth();
  strokeJoin(ROUND);

  // solid white band across the top of the strip
  stroke(255);
  fill(255);
  rect(0,0,2000,40);

  // the ragged edge, drawn as one long shape of bezier curves
  stroke(153);
  fill(240);
  beginShape();
  vertex(0,42);
  int x = 0;
  for (int i=0;i<200;i++) {
    // each pass adds one 'blade': a curve up to a random height...
    int y = (int)random(15,30);
    bezierVertex(x,37,
                 x,30,
                 x+random(-4,4),y);

    // ...a curve back down to the baseline...
    bezierVertex(x+2,30,
                 x+2,37,
                 x+random(8,9),40);

    // ...and a shallow dip of random width before the next blade
    x+=10;
    int w = (int)random(5,15);
    bezierVertex(x+(w/3),42,
                 x+(2*w/3),42,
                 x+w,40);
    x += w;
  }
  vertex(2000,42);
  endShape();

  save("footerGrass.png");
}