5 Jun 2018

In the first part of this series I showed how to create a deployment pipeline from Node-RED running locally to Node-RED running in IBM Cloud.

That put the basic pieces in place to let you deploy a version-controlled Node-RED application to the cloud. The next task is to connect some other IBM Cloud services to the application.

IBM Cloud-aware nodes

The existing Node-RED boilerplate comes with some extra nodes that are IBM Cloud-aware. They are able to automatically detect available instances of their respective services using the VCAP_SERVICES environment variable that Cloud Foundry provides.
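
Here's a rough sketch of what that detection involves, taking Cloudant – the service we're about to use – as the example. Treat it as illustrative rather than the nodes' actual implementation; cloudantNoSQLDB is the service label Cloudant uses in VCAP_SERVICES, and the field names are from memory:

// Sketch: how a Cloud-aware node can discover bound service instances.
// VCAP_SERVICES is a JSON map of service label -> array of instances.
var services = JSON.parse(process.env.VCAP_SERVICES || "{}");
var cloudantInstances = services.cloudantNoSQLDB || [];
cloudantInstances.forEach(function (instance) {
    // instance.name is what appears in a node's service dropdown;
    // instance.credentials holds host, username and password.
    console.log("Found Cloudant service:", instance.name);
});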

One such collection of nodes is for the Cloudant database service, which we’re going to add to our Node-RED application.

The challenge is how to develop against those nodes when running locally – outside of the IBM Cloud environment.


Setting up Cloudant

Create a Cloudant service instance

Open up the IBM Cloud catalog and select the ‘Cloudant NoSQL DB’ service. Create a new instance, making sure you select the same region as your Node-RED application.

Bind Cloudant to your Node-RED application

Go to the dashboard page for your Node-RED application and select the ‘Connections’ tab. Find your newly created Cloudant service in the list and click ‘connect’.

It will prompt you to restage the application, which will take a couple of minutes to complete.

Once that’s done, go back to the ‘Runtime’ tab on the IBM Cloud dashboard and find the environment variables section. You will see a section for VCAP_SERVICES – this is the environment variable the platform uses to pass the application all of the details it needs to access the connected services. You should see an entry for our newly created Cloudant instance – if you don’t, make sure the restage has completed and reload the page.

Beneath the credentials is an ‘export’ button – clicking that will download a copy to a file called <your-app-name>_vcap.json.

Copy that file into your Node-RED user directory, ~/.node-red – do not put it under version control.

Edit your local settings.js file – this is the one in ~/.node-red not the one in your project directory.

Add the following just above the module.exports line and then restart Node-RED. Make sure to replace <your-app-name>_vcap.json with the actual name of the file you downloaded.

var fs = require("fs");
var path = require("path");

// Load and export IBM Cloud service credentials
process.env.VCAP_SERVICES = fs.readFileSync(path.join(__dirname,"<your-app-name>_vcap.json"));

Your local Node-RED now has access to your service credentials in the same way as your Node-RED in IBM Cloud does.

Install the IBM Cloud-enabled Cloudant nodes

Open up the Palette Manager from the drop-down menu in Node-RED. Go to the ‘Install’ tab, search for node-red-node-cf-cloudant and click install.

Once installed, you’ll have a new pair of Cloudant nodes in the ‘storage’ section of the palette. Drag one into your workspace and double-click to edit it. The ‘Service’ property should have the name of your Cloudant service listed. If it doesn’t, check you’ve followed the steps to get your VCAP_SERVICES set up correctly.

Close the edit dialog but do not delete the node – we’ll come back to this a bit later.

Add the Cloudant nodes to the project

Having installed the nodes locally, we need to add them to our project’s package.json file so they also get installed when deploying to the cloud. We can do this within Node-RED by going to the ‘information’ sidebar tab and clicking the button next to the project name. This opens up the Project Settings dialog.

Go to the ‘Dependencies’ tab where you’ll see a list of the modules our project depends on. This is a combination of modules already listed in package.json and modules which provide nodes we have in our flow. At this point you should have two entries: node-red and node-red-node-cf-cloudant.

Ignore the offer to remove node-red from the project as we need that, but do click the ‘add to project’ button next to the Cloudant module.

Commit changes

If you switch back to the ‘history’ tab you should now have two entries in the ‘Local files’ section – manifest.yml and package.json. If you click on either filename it will show you a diff of what has changed in the file. Check the changes look correct then click the ‘+ all’ button to prepare both files for committing and then commit them. Switch to the ‘Commit History’ tab and push the changes up to GitHub.

Wait for the Travis build to redeploy your application and then reload it in your browser. You should now have the Cloudant nodes available in the palette and, as before, when you add one to your workspace and edit it, your Cloudant service will be selected.

Separating development and production

At this point, an application built locally will use the same Cloudant instance as the one running in IBM Cloud.

If we consider the local Node-RED as a development environment and the IBM Cloud instance as the production environment, then they really should use separate instances.

This can be achieved by creating a second Cloudant instance to treat as the development instance. Rather than connect it to your Node-RED application, you can generate a set of credentials from the instance dashboard page.

Update the <your-app-name>_vcap.json file with the new credentials and, after restarting Node-RED, your local instance will use the separate development instance.
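
If you switch between the two sets of credentials often, one option – not something we set up above, and using a hypothetical environment variable and file names – is to keep both exported files and pick one in the settings.js snippet from earlier:

var fs = require("fs");
var path = require("path");

// NODE_RED_ENV and both file names are hypothetical – adjust them to
// match the files you exported for each instance.
var vcapFile = process.env.NODE_RED_ENV === "production" ?
    "<your-app-name>_vcap.json" :       // production credentials
    "<your-app-name>_vcap.dev.json";    // development credentials
process.env.VCAP_SERVICES = fs.readFileSync(path.join(__dirname, vcapFile));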


Next Steps

This post has shown how to connect IBM Cloud services to your Node-RED application with separate development and production instances. It’s another important step to creating production-ready applications with Node-RED in IBM Cloud.

In the next part of this series, we’ll look at how to start building a simple application using this setup. That’s what I said in the previous post, but I really mean it this time.

1 Jun 2018

Node-RED has been available in the IBM Cloud catalog since the early days of what was then known as IBM Bluemix. Provided as a boilerplate application, it gives a really quick and easy way to get started with both Node-RED and the wide range of services available in the catalog.

The boilerplate is optimised for ease of use and getting started. Applications running in the Cloud Foundry part of IBM Cloud cannot treat their local file system as persistent storage – any time the application is restarted the file system is wiped back to its original state. This is why the Node-RED boilerplate comes with an instance of the Cloudant database service; giving it somewhere to store its flows.

It also means that any nodes that are installed using the Palette Manager in the editor have to be dynamically reinstalled whenever the application restarts. This is not ideal as it takes longer to restart, exposes the application to random network/npm failures and also risks memory issues as Node-RED tries to reinstall multiple things at once.

The better solution is to enable the Continuous Deployment feature and edit the application’s package.json file to explicitly add any additional modules. That’s also good as it means your application is version controlled and can be easily restored.

Except that isn’t entirely true. The underlying Node-RED application might be version controlled, but the most valuable part, the flows, are still held in Cloudant.

In an ideal world, you’d have all of your application assets under a single source of version control. It should be possible to deploy that application to separate development, test and production environments. It should all fit in with more traditional developer workflows.

This is the first in a series of posts that will show how you can create just such a workflow.

You’ll be able to develop a Node-RED application on a local machine, pushing changes to a GitHub repository and have them deploy automatically to IBM Cloud using Travis CI.


Getting started

Before we begin, you’ll need:

  • A GitHub account – it’s free!
  • A Travis CI account – sign-up using your GitHub account – it’s free!
  • An IBM Cloud account – sign-up for a Lite Account; it’s free, doesn’t require a credit card, never expires and gives you enough resources to get started

You’ll also need Node-RED installed locally.


Create a new Node-RED project

Node-RED introduced the Projects feature in the 0.18 release. It allows you to manage your flows in a git repository along with all the other pieces you need to create a redistributable Node-RED application.

Enabling the Node-RED projects feature

In the 0.18 release, the Projects feature needs to be enabled. Edit your settings.js file and update the editorTheme setting to change the projects.enabled flag to true. If you don’t have an editorTheme setting, add one in:

    editorTheme: {
        projects: {
            enabled: true
        }
    }

You can see how it should look in our default settings file – although the default is set to disable the feature, so if you copy it, make sure you change it to true.

When you restart Node-RED, you’ll be shown a welcome screen that introduces the projects feature.

Create a new GitHub repository

To create our Node-RED project, we’re going to first create a new repository on GitHub for the project.

Login to GitHub, click the New repository option under the + menu in the header.

Give your repository a name and leave all of the options as they are – in particular, do not tick the Initialize this repository with a README option. Then click ‘Create repository’.

On the repository page, copy the git url to your clipboard as we’ll need it in the next step.

Clone the repository into a new project

Back in Node-RED, select the option to create a new project by cloning a repository. When prompted, paste in the git url from the previous step.

Once you create the project, you’ll get a message saying it’s empty and it will offer to create a default set of project files – an offer you should accept.

It will then prompt you for the name of the flow file to use – we’ll use flow.json. Next it will ask about encrypting your flow credentials – something you must enable as you will be publishing your flow to GitHub. Provide an encryption key and make a note of it for later on.

With that done, you’ll now have your project ready to start wiring up your flows.


Modify the project to run on IBM Cloud

In order to deploy your Node-RED project as a Cloud Foundry application on IBM Cloud we need to add some extra files and update an existing one. These changes need to be made outside of Node-RED in a text editor of your choice.

First we need to find the project files. Node-RED stores them in a directory under the runtime user directory. By default, that will be ~/.node-red/projects/<name-of-project>.

Update package.json

The project already has a default package.json file that needs some updates:

  • add node-red in the dependencies section – this will ensure Node-RED gets installed when the application is deployed.
  • add a scripts section to define a start command – this is how IBM Cloud will run the application. We’ll look at this in a bit more detail in a moment.
  • add an engines section to define what version of Node.js we want to run with. You could leave this out and just get whatever the current Node.js buildpack defaults to, but it is better to be explicit.
{
    "name": "node-red-demo-1",
    "description": "A Node-RED Project",
    "version": "0.0.1",
    "dependencies": {
        "node-red": "0.18.*"
    },
    "node-red": {
        "settings": {
            "flowFile": "flow.json",
            "credentialsFile": "flow_cred.json"
        }
    },
    "scripts": {
        "start": "node --max-old-space-size=160 ./node_modules/node-red/red.js --userDir . --settings ./settings.js flow.json"
    },
    "engines": {
        "node": "8.x"
    }
}

Let’s take a closer look at the start command:

node 
    --max-old-space-size=160         (1)
    ./node_modules/node-red/red.js   (2)
    --userDir .                      (3)
    --settings ./settings.js         (4)
    flow.json                        (5)
  1. As we’re running with a fixed memory limit, this argument is used to tell node when it should start garbage collecting.
  2. With node-red listed as an npm dependency of the project, we know exactly where it will get installed and where the red.js main entry point is.
  3. We want Node-RED to use the current directory as its user directory.
  4. Just to be sure, we point at the settings file it should use – something we’ll add in the next step.
  5. Finally we specify the flow file to use.

With the current version of Node-RED, 0.18, you should restart Node-RED after editing this file – it doesn’t know the file has changed and may overwrite any changes you’ve made if you modify the project within the editor later.

Add a settings file

We need a settings file to configure Node-RED for the IBM Cloud environment. Create a file called settings.js in the project directory and copy in the following:

module.exports = {
    uiPort: process.env.PORT,
    credentialSecret: process.env.NODE_RED_CREDENTIAL_SECRET,
    adminAuth: {
        type: "credentials",
        users: [],
        default: {
            permissions: "read"
        }
    }
}

This tells Node-RED to listen on the port IBM Cloud gives us from the PORT environment variable. It also sets the key used to decrypt the credentials – this time coming from the NODE_RED_CREDENTIAL_SECRET environment variable. That lets us provide the key to the application without having to hardcode it in the version controlled files. We’ll sort that out in a later step of this post.

Finally it configures the editor to be in read-only mode. In a future post we’ll turn off the editor entirely, but leaving it running is useful at this stage to help verify your application is running.
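
For reference, here’s a sketch of what that future step might look like – don’t apply it yet. Setting httpAdminRoot to false disables the editor and admin API entirely:

module.exports = {
    uiPort: process.env.PORT,
    credentialSecret: process.env.NODE_RED_CREDENTIAL_SECRET,
    // Disables the editor and admin API entirely – the flows still run,
    // but there is no UI to reach.
    httpAdminRoot: false
}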

Add a manifest file

The next file we need is the manifest.yml file used to deploy the application. Here’s a minimal file to start with. Make sure you change the name field to something unique for your project – nr-demo is already used and will cause your deploy to fail if you don’t change it.

applications:
- name: nr-demo
  memory: 256MB
  instances: 1

Configure Travis

Next we’re going to get Travis to watch our GitHub repository and trigger a build whenever we push changes to it.

Enable Travis for your repository

Sign in to Travis and connect it to your GitHub account. Go to your profile page and enable Travis for your new repository. You may have to click the ‘Sync account’ button for it to show up.

Add a .travis.yml file

The project needs a file called .travis.yml to tell Travis what to do when it runs a build. A build consists of three phases: install, script and deploy. For the purposes of this exercise, we’re going to skip the install and script phases – they can be used in the future to run automated tests against the application.

For the deploy phase we can use an integration Travis already has with IBM Cloud – albeit under the old brand name: Bluemix CloudFoundry.

With all that in mind, copy the following into your .travis.yml file:

language: node_js
node_js:
    - "node"
install: true
script: echo "Skipping build"
deploy:
  edge: true
  provider: bluemixcloudfoundry
  username: apikey
  organization: [email protected]
  space: dev
  manifest: manifest.yml

You’ll need to set the organization and space fields to match your own account details. The username must be set to apikey and the next step is to get a password we can use.

Generate an IBM Cloud API key

We need to generate an API key in our IBM Cloud account which we can use for the Travis deploy.

Log in to the IBM Cloud dashboard and select Manage -> Security -> Platform API Keys from the menu in the header.

Click the Create button, enter a sensible name for the key and click Create. The key will be generated and in the next dialog it will let you copy it to your clipboard. Make sure you copy it – once you close the dialog you will not be able to see it again and you’ll need to generate a new one.

Add the encrypted api key to your .travis.yml

Rather than paste this key into your .travis.yml file directly, Travis provides a way to encrypt the key first so it can be added safely.

To do this, you must first install the Travis CLI. How exactly you do that will depend on your OS, whether you have ruby installed, whether your PATH is set up correctly and lots of other things that may trip you up along the way. Suffice to say, if you have ruby installed, it should be a simple case of running:

gem install travis

You can then run:

travis encrypt --add deploy.password

It will prompt you to paste in your api key, hit enter then ctrl-d. If you look in your .travis.yml file you should see a password/secure section added under the deploy section.

If you get errors such as travis: command not found then you may need to update your PATH to include wherever gem installed the package.


Commit and push changes

At this point, all of the necessary changes have been made to the project files. You can commit the changes and push them up to GitHub. You can do this either via the git command-line, or from within Node-RED.

Committing via the command-line

From within the project directory, ~/.node-red/projects/<name-of-project>, run the commands:

git add package.json .travis.yml manifest.yml settings.js
git commit -m "Update project files"
git push

Committing via Node-RED

Within Node-RED, open up the history sidebar tab. You should see the four changed files in the ‘Local files’ section. If you don’t, click the refresh button to update the view. When you hover over each file a + button will appear on the right – click that button to move the file down to the ‘Changes to commit’ section.

Once all four are staged, click the ‘commit’ button, enter a commit message and confirm.

Switch to the Commit History section and you should see two commits in the list – the initial Create project files commit and the commit you’ve just done.

Click the remote button – the one with up/down arrows in – and click push in the dialog. This will send the changes up to GitHub.


Watch your build

If you go back to Travis, you should see the commit trigger a new build against your repository. If all is well, two to three minutes later the build should pass and you should be able to open http://<name-of-app>.mybluemix.net and be welcomed by the Node-RED editor.

If it fails, check the build log to see what went wrong.


Tell your application your credential secret

Now that your application has been created on the IBM Cloud, one final step is to tell your application the key it should use to decrypt your credentials file.

Go to the IBM Cloud dashboard page for your newly deployed application. On the ‘runtime’ page, go to the ‘environment variable’ section and add a variable called NODE_RED_CREDENTIAL_SECRET set to whatever credential key you set when you created your Node-RED project right at the start of this whole exercise.

Click the ‘save’ button and your application will be restarted, now with this variable set.


Next steps

If you’ve got this far, well done. You now have a pipeline going from Node-RED on your local machine to Node-RED running in the IBM Cloud. Whenever you make changes locally, commit and push them to GitHub, your application on IBM Cloud will be restaged thanks to Travis.

In the next part of this series, we’ll look at how to start building a simple application using this setup.

11 Aug 2017

TJBot is an open source DIY kit from IBM Watson to create a Raspberry Pi powered robot backed by the IBM Watson cognitive services. First published late last year, the project provides all the design files needed to create the body of the robot as either laser-cut cardboard or 3D printed parts.

It includes space for an RGB LED on top of its head, an arm driven by a servo that can wave, a microphone for capturing voice commands, a speaker so it can talk back and a camera to capture images.

There is an ever-growing set of tutorials and how-tos for building the bot and getting it hooked up to the IBM Watson services. All in all, it’s a lot of fun to put together. Having played with one at Node Summit a couple of weeks ago, I finally decided to get one of my own. So this week I’ve mostly been monopolising the 3D printer in the Emerging Technologies lab.

After about 20 hours of printing, my TJBot was coming together nicely, but I knew I wanted to customise mine a bit. In part this was for me to figure out the whole process of designing something in Blender and getting it printed, but also because I wanted my TJBot to be a bit different.

First up was a second arm to give him a more balanced appearance. In order to keep it simple, I decided to make it a static arm rather than try to fit in a second servo. I took the design of the existing arm, mirrored it, removed the holes for the servo and extended it to clip onto the TJBot jaw at a nice angle.

The second customisation was a pair of glasses because, well, why not?

I designed them with a peg on their bridge which would push into a hole drilled into TJBot’s head. I also created a pair of ‘ears’ to push out from the inside of the head for the arms of the glasses to hook onto. I decided to do this rather than remodel the TJBot head piece because it was quicker to carefully drill three 6mm holes in the head than it was to wait another 12 hours for a new head to print.

In fact, as I write this, I’ve only drilled the hole between the eyes as there appears to be enough friction for the glasses to hold with just that one fixing point.

There are some other customisations I’d like to play with in the future; this was enough for me to remember how to drive Blender without crying too much.

Why call him ‘Bleep’? Because that’s what happens when you let your 3 and 7 year old kids come up with a name.

I’ve created a repository on GitHub with the designs of the custom parts and where I’ll share any useful code and how-tos I come up with.

18 Apr 2014

Having recently added a Travis build for the Node-RED repository, we get a nice green, or sometimes nasty red, merge button on pull-requests, along with a comment from Travis thanks to GitHub’s Commit Status API.

From a process point of view, the other thing we require before merging a pull-request is to ensure the submitter has completed a Contributor License Agreement, or CLA. This has been a manual check against a list we maintain. But with GitHub’s recent addition of the Combined Status API, I decided we ought to automate this check.

And of course, why not implement it in a Node-RED flow.

There are a few simple steps to take:

  1. know there has been a pull-request submitted, or that we want to manually trigger a check
  2. check the submitter against the list
  3. update the status accordingly

Rather than poll the API to see when a new pull-request has arrived, GitHub allows you to register a Webhook to get an http POST when certain events occur. For this to work, we need an HTTP-In node to accept the request. As with any request, we need to make sure it is responded to properly, as well as keeping a log – handy for debugging as you go.

[Flow screenshot: nr-gh-flow-1]
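
For the ‘responded to properly’ part, here’s a minimal sketch of a function node wired between the HTTP-In node and an HTTP Response node – the response node picks up msg.statusCode if it is set:

// Acknowledge the webhook so GitHub records the delivery as successful.
msg.statusCode = 200;
msg.payload = "OK";
return msg;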

With that in place, we can register the webhook on the repositories we want to run the checks on.

  1. From the repository page, find your way to the webhook page via Settings ➝ WebHooks & Services ➝ Add webhook
  2. Enter the url of the http-in node
  3. Select ‘Let me select individual events.’ and then pick, as a minimum, ‘Pull Request’ and ‘Issue comment’
  4. Ensure the ‘Active’ option is ticked and add the webhook

When the POST arrives from GitHub, the X-GitHub-Event http header identifies the type of event that triggered the hook. As we’ve got a couple of different event types registered for the hook, we can use a Switch node to route the flow depending on what happened. We also need a function node to access the http header, as the Switch node can’t do that by itself.

// GitHub Event Type
msg.eventType = msg.req.get('X-GitHub-Event');
return msg;

[Flow screenshot: nr-gh-flow-2]

Skipping ahead a bit, to update the status of the commit requires an http POST to the GitHub API. This is easily achieved with an HTTP Request node, but it does need to be configured to point to the right pull-request. Pulling that information out of the ‘pull-request’ webhook event is easy as it’s all in there.

// PR Opened
msg.login = msg.payload.pull_request.user.login;
msg.commitSha = msg.payload.pull_request.head.sha;
msg.repos = msg.payload.pull_request.base.repo.full_name;
return msg;

It’s a bit more involved with the ‘issue-comment’ event, which fires for all comments made in the repository. We only want to process comments against pull-requests that contain the magic text to trigger a CLA check to run.

Even after identifying the comments we’re interested in, there’s more work to be done as they don’t contain the information needed to update the status. Before we can do that, we must first make another http request to the GitHub API to get the pull-request details.

// Filter PR Comments
msg.repos = msg.payload.repository.full_name;
if (msg.payload.issue.pull_request &&
       msg.payload.comment.body.match(/node-red-gitbot/i)) {
    msg.url = "https://api.github.com:443/repos/"+msg.repos+
              "/pulls/"+msg.payload.issue.number+
              "?access_token=XXX";
    msg.headers = {
        "user-agent": "node-red",
	    "accept": "application/vnd.github.v3"
    };
    return msg;
}
return null;

Taking the result of the request, we can extract the information needed.

// Extract PR details
var payload = JSON.parse(msg.payload);
msg.login = payload.user.login;
msg.commitSha = payload.head.sha;
msg.repos = payload.base.repo.full_name;
return msg;

[Flow screenshot: nr-gh-flow-4]

Now, regardless of which event has been triggered we have everything we need to check the CLA status; if msg.login is in the list of users who have completed a CLA, send a ‘success’ status, otherwise send a ‘failure’.

// Update CLA Status
var approved = ['list','of','users'];

var login = msg.login;
var commitSha = msg.commitSha;
var repos = msg.repos;

msg.headers = {
    "user-agent": "node-red",
    "accept": "application/vnd.github.she-hulk-preview+json"
};

msg.url = "https://api.github.com:443/repos/"+repos+
          "/statuses/"+commitSha+
          "?access_token=XXX";

msg.payload = {
    state:"failure",
    description:"No CLA on record",
    context:"node-red-gitbot/cla",
    target_url:"https://github.com/node-red/node-red/blob/master/CONTRIBUTING.md#contributor-license-aggreement"
}

if (approved.indexOf(login) > -1) {
    msg.payload.state = "success";
    msg.payload.description = "CLA check passed";
}
return msg;

[Flow screenshot: nr-gh-flow-3]

And that’s all it takes.

One of the things I like about Node-RED is the way it forces you to break a problem down into discrete logical steps. It isn’t suitable for everything, but when you want to produce event-driven logic that interacts with on-line services, it makes life easy.

You want to be alerted that a pull-request has been raised by someone without a CLA? Drag on a twitter node and you’re there.

There are some improvements to be made.

Currently the node has the list of users who have completed the CLA hard-coded in – this is okay for now, but will need to be pulled out into a separate lookup in the future.

It also assumes the PR only contains commits by the user who raised it – if a team has collaborated on the PR, this won’t spot that. Something I’ll have to keep an eye on and refine the flow if it becomes a real problem.

Some of the flow can already be simplified since we added template support for the HTTP Request node’s URL property.

The final hiccup with this whole idea is the way GitHub currently present commit statuses on the pull-request page. They only show the most recent status update, not the combined status. If the CLA check updates first with a failure, then the Travis build passes a couple of minutes later, the PR will go green thanks to the Travis success. The only way to spot that the Combined Status is currently ‘failure’ is to dive into the API. I’ve written a little script to give me a summary of all PRs against the repositories, which will do for now.
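
The heart of a script like that is a single call to the combined status endpoint – a sketch, with the repository and commit sha as placeholders:

var https = require("https");

// Placeholders – substitute a real repository and commit sha.
var repos = "node-red/node-red";
var commitSha = "0000000000000000000000000000000000000000";

https.get({
    hostname: "api.github.com",
    path: "/repos/" + repos + "/commits/" + commitSha + "/status",
    headers: {
        "user-agent": "node-red",
        "accept": "application/vnd.github.she-hulk-preview+json"
    }
}, function(res) {
    var body = "";
    res.on("data", function(chunk) { body += chunk; });
    res.on("end", function() {
        // 'state' is the combined result: success, failure or pending.
        console.log(JSON.parse(body).state);
    });
});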

Given the Combined Status API was only added as a preview last month, hopefully once it gets added to the stable API they will update how the status is presented to use it.

A final note, I’ve chosen not to share the full flow json here – hopefully there’s enough above to let you recreate it, but if you really want it, ping me.

26 Jan 2014

I’m a big fan of the Ghost blogging platform. Having heard Hannah Wolfe talk about it a couple of times, I was keen to try it out. In fact I purposefully held off setting up a blog for Node-RED until Ghost was available.

It was painless to set up and get running on my Web Faction hosted server – even though at that point Web Faction hadn’t created their one-click installer for it.

Ghost released their second stable version a couple of weeks ago, which added, amongst many other new features, the ability for a post’s url to contain its date. This was a feature I hadn’t realised I was missing until I wrote a post on the blog called ‘Community News’. The intent was that this would be a commonly used blog title as we post things we find people doing with Node-RED. However, without a date in the url, the posts would end up at .../community-news-1, .../community-news-2 and so on. Perfectly functional, but not to my taste.

So I clicked to enable dated permalinks on the blog, and promptly found I had broken all of the existing post urls. I had half-hoped that on enabling the option, any already-published posts would not be affected – but that wasn’t the case; they all changed.

To fix this, I needed a way to redirect anyone going to one of the old urls to the corresponding new one. I don’t have access to the proxy running in front of my web host, so that wasn’t an option.

Instead, I added a bit of code to my installation of Ghost that deals with it.

In the top level directory of Ghost is a file called index.js. This is what gets invoked to run the platform and is itself a very simple file:

var ghost = require('./core');

ghost();

What this doesn’t reveal is that the call to ghost() accepts as an argument an instance of express, the web application framework that Ghost uses. This allows you to pass in an instance of Express that you’ve tweaked for your own needs – such as one that knows about the old urls and what they should redirect to:

var ghost = require('./core');
var express = require("express");

var app = express();

ghost(app);

var redirects = express();

var urlMap = [
   {from:'/version-0-2-0-released/', to:'/2013/10/16/version-0-2-0-released/'},
   {from:'/internet-of-things-messaging-hangout/', to:'/2013/10/21/internet-of-things-messaging-hangout/'},
   {from:'/version-0-3-0-released/', to:'/2013/10/31/version-0-3-0-released/'},
   {from:'/version-0-4-0-released/', to:'/2013/11/14/version-0-4-0-released/'},
   {from:'/version-0-5-0-released/', to:'/2013/12/21/version-0-5-0-released/'},
   {from:'/community-news-14012014/', to:'/2014/01/22/community-news/'}
];

// forEach gives each handler its own 'mapping' – a var in a plain for
// loop would be shared, sending every redirect to the last url.
urlMap.forEach(function(mapping) {
   redirects.all(mapping.from, function(req,res) {
      res.redirect(301, mapping.to);
   });
});

app.use(redirects);

It’s all a bit hard-coded and I don’t know how compatible this will be with future releases of Ghost, but at least it keeps the old urls alive. After all, cool uris don’t change.