26 Sep 2018

My last couple of posts have focused on creating a managed Node-RED deployment pipeline to IBM Cloud. There’s still more to do in that series, but for this one, I’m taking a bit of a detour to the edge of the network.

One of the strengths of Node-RED is that it runs on devices just as happily as it does in cloud environments. This post looks at how we can replicate the deployment pipeline model from the first post, but this time target devices running in remote locations.

To do this, we’re going to use resin.io – a platform for managing fleets of connected devices that makes updating them as easy as doing a git push. I’ve been meaning to play with Resin.io for ages and, having got this working in no time today, I’m a tiny bit in love.

Getting started

Before we begin, you’ll need:

  • A Resin.io account – their free plan lets you create one application with up to 10 devices
  • A pair of Raspberry Pis

For this guide, I’m going to use two Raspberry Pis; one as the ‘development’ machine and one as a target device to push updates to via resin.io. You could just as easily use your laptop as the development machine to get started.

Create a new Node-RED project

As with the previous post, you’ll need to enable the Projects feature in Node-RED.

On the ‘development’ Pi, edit your settings.js file to set editorTheme.projects.enabled to true. You may find there’s already an editorTheme entry in the file with a menu property – you’ll need to add in the projects property alongside that:

    editorTheme: {
        projects: {
            enabled: true
        },
        menu: { ... }
    },

Then restart Node-RED using node-red-stop && node-red-start. If you aren’t using a Pi, you’ll need to use whatever platform-appropriate means you have to restart.

At this point you can either follow the previous post to create a GitHub repository for your project and clone it locally, or you can create an entirely local project. This guide doesn’t make use of GitHub, but the option is there if you want.

The most important thing is to make a note of the key you choose to encrypt your credentials file with – you’ll need that later.

Turn the project into a deployable application

Once again, we need to edit some of the project files so it can be deployed as a standalone application.

The package.json file is updated as before – make sure to leave the node-red section alone if you’ve picked different flow file names when creating the project:

    "name": "node-red-demo-1",
    "description": "A Node-RED Project",
    "version": "0.0.1",
    "dependencies": {
        "node-red": "0.19.*",
        "node-red-node-pi-sense-hat": ">0.0.18"
    "node-red": {
        "settings": {
            "flowFile": "flow.json",
            "credentialsFile": "flow_cred.json"
    "scripts": {
        "start": "node --max-old-space-size=160 ./node_modules/node-red/red.js --userDir . --settings ./settings.js flow.json"

I’ve included the node-red-node-pi-sense-hat nodes as the demo I’m building beyond this guide uses that particular accessory.

Add a settings file

The settings.js file can be created with the following:

module.exports = {
    credentialSecret: process.env.NODE_RED_CREDENTIAL_SECRET,
    httpAdminRoot: false
}

By setting httpAdminRoot to false, the editor and admin APIs will be disabled.

Add a Dockerfile

Resin.io uses docker images as the unit of deployment. To that end, we need to add a Dockerfile to build our application:

FROM resin/raspberrypi3-node:8-slim

# use apt-get if you need to install dependencies,
# for instance if you need ALSA sound utils, just uncomment the lines below.
#RUN apt-get update && apt-get install -yq \
#    alsa-utils libasound2-dev && \
#    apt-get clean && rm -rf /var/lib/apt/lists/*

RUN apt-get update && apt-get install -yq \
      python3=3.4.2-2 sense-hat raspberrypi-bootloader i2c-tools build-essential \
      libssl-dev libffi-dev libyaml-dev python3-dev python3-pip python-rpi.gpio && \
    pip3 install sense-hat rtimulib pillow 

# Defines our working directory in container
WORKDIR /usr/src/app

# Copies the package.json first for better cache on later pushes
COPY package.json package.json

# This installs the npm dependencies on the resin.io build server,
# making sure to clean up the artifacts it creates in order to reduce the image size.
RUN JOBS=MAX npm install --unsafe-perm && npm cache clean --force && rm -rf /tmp/*

RUN apt-get remove build-essential libssl-dev libffi-dev libyaml-dev python3-dev python3-pip \
    && apt-get autoremove && apt-get clean && rm -rf /var/lib/apt/lists/*

# This will copy all files in our root to the working directory in the container
COPY . ./

# Enable systemd init system in container
ENV INITSYSTEM on

# npm start will run when the container starts up on the device
CMD ["npm", "start"]

I won’t go into all the details of the Dockerfile. It took a bit of trial and error to get the right dependencies installed for the SenseHAT.

Commit and push changes

At this point, all of the necessary changes have been made to the project files and you should commit the changes. You can do this either via the git command-line, or from within Node-RED.

Committing via the command-line

From within the project directory, ~/.node-red/projects/<name-of-project>, run the commands:

git add package.json Dockerfile settings.js
git commit -m "Update project files"

Committing via Node-RED

Within Node-RED, open up the history sidebar tab. You should see the changed files in the ‘Local files’ section. If you don’t, click the refresh button to update the view. When you hover over each file a + button will appear on the right – click that button to move the file down to the ‘Changes to commit’ section.

Once they are all staged, click the ‘commit’ button, enter a commit message and confirm.

Switch to the Commit History section and you should see two commits in the list – the initial Create project files commit and the commit you’ve just done.

Setting up resin.io

Next we’re going to get the second Pi setup as a managed device. Resin.io provide a great getting started tutorial that you should follow up to and including the ‘Provision your device’ step.

That should get you to the point where the Pi shows as connected in your resin.io dashboard.

Deploying your application

Before we can deploy the application, we need to:

  • set up some SSH keys so our development Pi can push changes to resin.io,
  • tell Node-RED where to push the application,
  • configure your device with the credential key.

Generate SSH keys

Open up the Node-RED settings dialog (from the main menu) and switch to the ‘Git config’ tab. Click the ‘add key’ button, give it a name, and optionally a passphrase, then click ‘generate key’. After a few seconds you’ll be shown your new public key, which you should copy to your clipboard.

Over in your resin.io preferences you can then add that public key to your account.
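
If you plan to push to resin.io from the command line rather than from the Node-RED editor, you can generate the key pair with the standard ssh-keygen tool instead – a quick sketch, where the file name is just an example:

# generate a new key pair on the development Pi
ssh-keygen -t rsa -b 4096 -C "resin-deploy-key" -f ~/.ssh/resin_rsa
# print the public key so it can be pasted into your resin.io preferences
cat ~/.ssh/resin_rsa.pub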

Configure your git remote

Resin.io provides a remote git repository you push your application to in order to trigger a deployment. In your application’s dashboard page you should see a text input with a git remote command in it. The full command will look like:

git remote add resin <user>@git.resin.io:<user>/<app>.git

You can either run that command in the directory ~/.node-red/projects/<name-of-project>, or you can add it via the Node-RED editor. To do it via the editor, open up the Project settings dialog by clicking the ... button next to the project name in the Info sidebar tab and then switch to the Settings tab. Click ‘add remote’, give it a name of resin and copy in the url from the command.

Tell your device its credential key

The final step is to give your device the key to decrypt its credentials file. We already set up the settings.js file to look for the NODE_RED_CREDENTIAL_SECRET environment variable. In the resin.io application dashboard you should add a new Environment Variable with that name and the value you gave when generating your project.

Push your application

At last we can push the project to resin. In the Commit History section of the Node-RED Project History sidebar tab, click the button with two arrows. Click the ‘Remote: none’ button to pick which remote to push to. It may prompt you to pick an SSH key at this point – pick the one you generated earlier. If you originally cloned the project from GitHub, you’ll need to pick the ‘resin’ remote. Finally click ‘push’.

If all goes well, your project will be pushed to resin.io and its docker image built.

Now, confession time. In putting this guide together, I’ve hit a couple of usability issues with the git integration in Node-RED. When pushing to a remote git repository, you don’t get any feedback – just a spinning animation. That’s not normally much of an issue as it is fairly quick, but in the resin.io case, the push runs the build and provides the full build log in return. That’s invaluable in figuring out what you got wrong in the Dockerfile – but Node-RED doesn’t show you any of it. It can also take a minute or two – longer the first time you do it, but quicker on subsequent pushes.

I did find myself resorting to pushing from the command line, using git push resin master in the project directory so I could see that output. Something for us to improve in the future.

Assuming your push worked, you should see the update arrive in the resin.io dashboard and be able to track its deployment to your device. The dashboard also shows you the application logs and lets you ssh into your device from the browser. It really is a delightful experience.

All being well, you should see Node-RED startup in the device logs.

Developer workflow

With everything in place, you should now have a developer workflow that consists of:

  1. developing and testing on a local Raspberry Pi
  2. committing changes, giving you full version control
  3. optionally pushing those changes to GitHub, or any other hosted git service
  4. deploying the application to your managed devices, with a simple push to the resin.io remote.

One thing worth looking at is enabling the ‘Delta updates’ feature on resin.io. This keeps the size of any update to a bare minimum – which, if it’s mostly just your flow file that has changed, can be a huge saving compared to pushing a complete docker layer. I’m not entirely sure why this feature isn’t enabled by default, but you can find out how to turn it on here.

Next steps

With this basic workflow in place, we can start thinking about how it would work with multiple devices. The great thing with resin.io is that it’ll push your application to all of the connected devices. That does pose some challenges for us. For example, if we want each device to connect to Watson IoT Platform with its own set of credentials, we need a mechanism to give each device its own details and be able to make use of that in the Node-RED flow configuration. Thankfully you can go a long way with a few environment variables. But that’s for another post.

5 Jun 2018

In the first part of this series I showed how to create a deployment pipeline from Node-RED running locally to Node-RED running in IBM Cloud.

It got the basic pieces into place to let you deploy a version controlled Node-RED application to the cloud. The next task is to connect some other IBM Cloud services to the application.

IBM Cloud-aware nodes

The existing Node-RED boilerplate comes with some extra nodes that are IBM Cloud-aware. They are able to automatically detect available instances of their respective services using the VCAP_SERVICES environment variable that Cloud Foundry provides.

One such collection of nodes is for the Cloudant database service, which we’re going to add to our Node-RED application.

The challenge is how to develop against those nodes when running locally – outside of the IBM Cloud environment.

Setting up Cloudant

Create a Cloudant service instance

Open up the IBM Cloud catalog and select the ‘Cloudant NoSQL DB’ service. Create a new instance, making sure you select the same region as your Node-RED application.

Bind Cloudant to your Node-RED application

Go to the dashboard page for your Node-RED application and select the ‘Connections’ tab. Find your newly created Cloudant service in the list and click ‘connect’.

It will prompt you to restage the application which will take a couple of minutes to complete.

Once that’s done, go back to the ‘Runtime’ tab on the IBM Cloud dashboard and find the environment variables section. You will see a section for VCAP_SERVICES – this is the environment variable the platform uses to pass the application all of the details it needs to access the connected services. You should see an entry for our newly created Cloudant instance – if you don’t, make sure the restage has completed and reload the page.

Beneath the credentials is an ‘export’ button – clicking that will download a copy to a file called <your-app-name>_vcap.json.
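
The exact contents depend on your service instance, but the downloaded file is a JSON document along these lines – every value below is an illustrative placeholder rather than a real credential, and you should check your own file for the exact service label and fields:

{
    "cloudantNoSQLDB": [
        {
            "name": "my-cloudant-instance",
            "label": "cloudantNoSQLDB",
            "credentials": {
                "username": "example-user",
                "password": "example-password",
                "host": "example-host.cloudant.com",
                "port": 443,
                "url": "https://example-user:example-password@example-host.cloudant.com"
            }
        }
    ]
}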

Copy that file into your Node-RED user directory, ~/.node-red – do not put it under version control.

Edit your local settings.js file – this is the one in ~/.node-red not the one in your project directory.

Add the following just above the module.exports line and then restart Node-RED. Make sure to replace <your-app-name>_vcap.json with the actual name of the file you downloaded.

var fs = require("fs");
var path = require("path");

// Load and export IBM Cloud service credentials
process.env.VCAP_SERVICES = fs.readFileSync(path.join(__dirname,"<your-app-name>_vcap.json"));

Your local Node-RED now has access to your service credentials in the same way as your Node-RED in IBM Cloud does.

Install the IBM Cloud-enabled Cloudant nodes

Open up the Palette Manager from the drop-down menu in Node-RED. Go to the ‘Install’ tab, search for node-red-node-cf-cloudant and click install.
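
If you prefer the command line, the same module can be installed into your user directory with npm – restart Node-RED afterwards so it picks the new nodes up:

cd ~/.node-red
npm install node-red-node-cf-cloudant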

Once installed, you’ll have a new pair of Cloudant nodes in the ‘storage’ section of the palette. Drag one into your workspace and double-click to edit it. The ‘Service’ property should have the name of your Cloudant service listed. If it doesn’t, check you’ve followed the steps to get your VCAP_SERVICES set up correctly.

Close the edit dialog but do not delete the node – we’ll come back to this a bit later.

Add the Cloudant nodes to the project

Having installed the nodes locally, we need to add them to our project’s package.json file so they also get installed when deploying to the cloud. We can do this within Node-RED by going to the ‘information’ sidebar tab and clicking the button next to the project name. This opens up the Project Settings dialog.

Go to the ‘Dependencies’ tab where you’ll see a list of the modules our project depends on. This is a combination of modules already listed in package.json and modules that provide nodes used in the flow. At this point you should have two entries: node-red and node-red-node-cf-cloudant.

Ignore the offer to remove node-red from the project as we need that, but do click the ‘add to project’ button next to the Cloudant module.
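
Either way, the end result is an extra entry in the dependencies section of the project’s package.json, something like the following – the version shown here is only illustrative; the dialog will record whichever version you have installed locally:

    "dependencies": {
        "node-red": "0.18.*",
        "node-red-node-cf-cloudant": "0.x"
    },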

Commit changes

If you switch back to the ‘history’ tab you should now have two entries in the ‘Local files’ section – manifest.yml and package.json. If you click on either filename it will show you a diff of what has changed in the file. Check the changes look correct then click the ‘+ all’ button to prepare both files for committing and then commit them. Switch to the ‘Commit History’ tab and push the changes up to GitHub.

Wait for the Travis build to redeploy your application and then reload it in your browser. You should now have the Cloudant nodes available in the palette and, as before, when you add one to your workspace and edit it, your Cloudant service will be selected.

Separating development and production

At this point, an application built locally will use the same Cloudant instance as the one running in IBM Cloud.

If we consider the local Node-RED as a development environment and the IBM Cloud instance as the production environment, then they really should use separate instances.

This can be achieved by creating a second Cloudant instance to treat as the development instance. Rather than connect it to your Node-RED application, you can generate a set of credentials from the instance dashboard page.

Update the <your-app-name>_vcap.json file with the new credentials and after restarting Node-RED, your local instance will now be accessing the separate instance.

Next Steps

This post has shown how to connect IBM Cloud services to your Node-RED application, with separate development and production instances. It’s another important step towards creating production-ready applications with Node-RED in IBM Cloud.

In the next part of this series, we’ll look at how to start building a simple application using this setup. That’s what I said in the previous post, but I really mean it this time.

1 Jun 2018

Node-RED has been available in the IBM Cloud catalog since the early days of what was then known as IBM Bluemix. Provided as a boilerplate application, it gives a really quick and easy way to get started with both Node-RED and the wide range of services available in the catalog.

The boilerplate is optimised for ease of use and getting started. Applications running in the Cloud Foundry part of IBM Cloud cannot treat their local file system as persistent storage – any time the application is restarted the file system is wiped back to its original state. This is why the Node-RED boilerplate comes with an instance of the Cloudant database service; giving it somewhere to store its flows.

It also means that any nodes that are installed using the Palette Manager in the editor have to be dynamically reinstalled whenever the application restarts. This is not ideal as it takes longer to restart, exposes the application to random network/npm failures and also risks memory issues as Node-RED tries to reinstall multiple things at once.

The better solution is to enable the Continuous Deployment feature and edit the application’s package.json file to explicitly add any additional modules. That’s also good as it means your application is version controlled and can be easily restored.

Except that isn’t entirely true. The underlying Node-RED application might be version controlled, but the most valuable part, the flows, are still held in Cloudant.

In an ideal world, you’d have all of your application assets under a single source of version control. It should be possible to deploy that application to separate development, test and production environments. It should all fit in with more traditional developer workflows.

This is the first in a series of posts that will show how you can create just such a workflow.

You’ll be able to develop a Node-RED application on a local machine, pushing changes to a GitHub repository and have them deploy automatically to IBM Cloud using Travis CI.

Getting started

Before we begin, you’ll need:

  • A GitHub account – it’s free!
  • A Travis CI account – sign-up using your GitHub account – it’s free!
  • An IBM Cloud account – sign-up for a Lite Account; it’s free, doesn’t require a credit card, never expires and gives you enough resources to get started

You’ll also need Node-RED installed locally.

Create a new Node-RED project

Node-RED introduced the Projects feature in the 0.18 release. It allows you to manage your flows in a git repository along with all the other pieces you need to create a redistributable Node-RED application.

Enabling the Node-RED projects feature

In the 0.18 release, the Projects feature needs to be enabled. Edit your settings.js file and update the editorTheme setting to change the projects.enabled flag to true. If you don’t have an editorTheme setting, add one in:

    editorTheme: {
        projects: {
            enabled: true
        }
    },

You can see how it should look in our default settings file – although the default is set to disable the feature, so if you copy it, make sure you change it to true.

When you restart Node-RED, you’ll be shown a welcome screen that introduces the projects feature.

Create a new GitHub repository

To create our Node-RED project, we’re going to first create a new repository on GitHub for the project.

Login to GitHub, click the New repository option under the + menu in the header.

Give your repository a name and leave all of the options as they are – in particular, do not tick the Initialize this repository with a README option. Then click ‘Create repository’.

On the repository page, copy the git url to your clipboard as we’ll need it in the next step.

Clone the repository into a new project

Back in Node-RED, select the option to create a new project by cloning a repository. When prompted, paste in the git url from the previous step.

Once you create the project, you’ll get a message saying it’s empty and it will offer to create a default set of project files – an offer you should accept.

It will then prompt you for the name of the flow file to use – we’ll use flow.json. Next it will ask about encrypting your flow credentials – something you must enable as you will be publishing your flow to GitHub. Provide an encryption key and make a note of it for later on.

With that done, you’ll now have your project ready to start wiring up your flows.

Modify the project to run on IBM Cloud

In order to deploy your Node-RED project as a Cloud Foundry application on IBM Cloud we need to add some extra files and update an existing one. These changes need to be made outside of Node-RED in a text editor of your choice.

First we need to find the project files. Node-RED stores them in a directory under the runtime user directory. By default, that will be ~/.node-red/projects/<name-of-project>.

Update package.json

The project already has a default package.json file that needs some updates:

  • add node-red in the dependencies section – this will ensure Node-RED gets installed when the application is deployed.
  • add a scripts section to define a start command – this is how IBM Cloud will run the application. We’ll look at this in a bit more detail in a moment.
  • add an engines section to define what version of Node.js we want to run with. You could leave this out and just get whatever the current Node.js buildpack defaults to, but it is better to be explicit.
    "name": "node-red-demo-1",
    "description": "A Node-RED Project",
    "version": "0.0.1",
    "dependencies": {
        "node-red": "0.18.*"
    "node-red": {
        "settings": {
            "flowFile": "flow.json",
            "credentialsFile": "flow_cred.json"
    "scripts": {
        "start": "node --max-old-space-size=160 ./node_modules/node-red/red.js --userDir . --settings ./settings.js flow.json"
    "engines": {
        "node": "8.x"

Let’s take a closer look at the start command:

    --max-old-space-size=160         (1)
    ./node_modules/node-red/red.js   (2)
    --userDir .                      (3)
    --settings ./settings.js         (4)
    flow.json                        (5)
  1. As we’re running with a fixed memory limit, this argument is used to tell node when it should start garbage collecting.
  2. With node-red listed as an npm dependency of the project, we know exactly where it will get installed and where the red.js main entry point is.
  3. We want Node-RED to use the current directory as its user directory
  4. Just to be sure, we point at the settings file it should use – something we’ll add in the next step
  5. Finally we specify the flow file to use.

With the current version of Node-RED, 0.18, you should restart Node-RED after editing this file – it doesn’t know the file has changed and may overwrite your changes if you modify the project within the editor later.

Add a settings file

We need a settings file to configure Node-RED for the IBM Cloud environment. Create a file called settings.js in the project directory and copy in the following:

module.exports = {
    uiPort: process.env.PORT,
    credentialSecret: process.env.NODE_RED_CREDENTIAL_SECRET,
    adminAuth: {
        type: "credentials",
        users: [],
        default: {
            permissions: "read"
        }
    }
}
This tells Node-RED to listen on the port IBM Cloud gives us from the PORT environment variable. It also sets the key used to decrypt the credentials – this time coming from the NODE_RED_CREDENTIAL_SECRET environment variable. That lets us provide the key to the application without having to hardcode it in the version controlled files. We’ll sort that out in a later step of this post.

Finally it configures the editor to be in read-only mode. In a future post we’ll turn off the editor entirely, but leaving it running is useful at this stage to help verify your application is running.

Add a manifest file

The next file we need is the manifest.yml file used to deploy the application. Here’s a minimal file to start with. Make sure you change the name field to something unique for your project – nr-demo is already used and will cause your deploy to fail if you don’t change it.

applications:
- name: nr-demo
  memory: 256MB
  instances: 1

Configure Travis

Next we’re going to get Travis to watch our GitHub repository and trigger a build whenever we push changes to it.

Enable Travis for your repository

Sign in to Travis and connect it to your GitHub account. Go to your profile page and enable Travis for your new repository. You may have to click the ‘Sync account’ button for it to show up.

Add a .travis.yml file

The project needs a file called .travis.yml to tell Travis what to do when it runs a build. A build consists of three phases: install, script and deploy. For the purposes of this exercise, we’re going to skip the install and script phases – they can be used in the future to run automated tests against the application.

For the deploy phase we can use an integration Travis already has with IBM Cloud – albeit under the old brand name: Bluemix CloudFoundry.

With all that in mind, copy the following into your .travis.yml file:

language: node_js
node_js:
    - "node"
install: true
script: echo "Skipping build"
deploy:
  edge: true
  provider: bluemixcloudfoundry
  username: apikey
  organization: <your-ibm-cloud-email>
  space: dev
  manifest: manifest.yml

You’ll need to set the organization and space fields to match your own account details. The username must be set to apikey and the next step is to get a password we can use.

Generate an IBM Cloud API key

We need to generate an API key in our IBM Cloud account which we can use for the Travis deploy.

Log in to the IBM Cloud dashboard and select Manage -> Security -> Platform API Keys from the menu in the header.

Click the Create button, enter a sensible name for the key and click Create. The key will be generated and in the next dialog it will let you copy it to your clipboard. Make sure you copy it – once you close the dialog you will not be able to see it again and you’ll need to generate a new one.

Add the encrypted api key to your .travis.yml

Rather than paste this key into your .travis.yml file directly, Travis provides a way to encrypt the key first so it can be added safely.

To do this, you must first install the Travis CLI. How exactly you do that will depend on your OS, whether you have Ruby installed, whether your PATH is set up correctly, and lots of other things that may trip you up along the way. Suffice to say, if you have Ruby installed, it should be a simple case of running:

gem install travis

You can then run:

travis encrypt --add deploy.password

It will prompt you to paste in your api key, hit enter then ctrl-d. If you look in your .travis.yml file you should see a password/secure section added under the deploy section.
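
For reference, the added section looks something like this – the encrypted value is just a placeholder for the long string the CLI generates:

deploy:
  ...
  password:
    secure: "<long-encrypted-string>"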

If you get errors such as travis: command not found then you may need to update your PATH to include wherever gem installed the package.

Commit and push changes

At this point, all of the necessary changes have been made to the project files. You can commit the changes and push them up to GitHub. You can do this either via the git command-line, or from within Node-RED.

Committing via the command-line

From within the project directory, ~/.node-red/projects/<name-of-project>, run the commands:

git add package.json .travis.yml manifest.yml settings.js
git commit -m "Update project files"
git push

Committing via Node-RED

Within Node-RED, open up the history sidebar tab. You should see the four changed files in the ‘Local files’ section. If you don’t, click the refresh button to update the view. When you hover over each file a + button will appear on the right – click that button to move the file down to the ‘Changes to commit’ section.

Once all four are staged, click the ‘commit’ button, enter a commit message and confirm.

Switch to the Commit History section and you should see two commits in the list – the initial Create project files commit and the commit you’ve just done.

Click the remote button – the one with up/down arrows in – and click push in the dialog. This will send the changes up to GitHub.

Watch your build

If you go back to Travis, you should see the commit trigger a new build against your repository. If all is well, two to three minutes later the build should pass and you should be able to open http://<name-of-app>.mybluemix.net and be welcomed by the Node-RED editor.

If it fails, check the build log to see what went wrong.

Tell your application your credential secret

Now that your application has been created on the IBM Cloud, one final step is to tell your application the key it should use to decrypt your credentials file.

Go to the IBM Cloud dashboard page for your newly deployed application. On the ‘runtime’ page, go to the ‘environment variable’ section and add a variable called NODE_RED_CREDENTIAL_SECRET set to whatever credential key you set when you created your Node-RED project right at the start of this whole exercise.

Click the ‘save’ button and your application will be restarted, now with this variable set.
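
If you have the Cloud Foundry CLI installed and are logged in to your account, the same thing can be done from the command line – a sketch, where the app name is whatever you used in manifest.yml:

cf set-env <name-of-app> NODE_RED_CREDENTIAL_SECRET <your-credential-key>
cf restage <name-of-app>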

Next steps

If you’ve got this far, well done. You now have a pipeline going from Node-RED on your local machine to Node-RED running in the IBM Cloud. Whenever you make changes locally, commit and push them to GitHub, your application on IBM Cloud will be restaged thanks to Travis.

In the next part of this series, we’ll look at how to start building a simple application using this setup.

16 Mar 2018

In our recent project update blog post, I marked the fact Node-RED has recently hit 1 million downloads. It’s a big milestone to reach for the project and a good opportunity to reflect back on how we’ve got here.

There are a couple of threads I want to explore.

  • Open by default
  • Low Code application development

Open by default

When we started Node-RED, it was a tool to help us do our day job, but our day job wasn’t to spend time writing development tools – it was to build real solutions for clients as part of our Emerging Technologies group. That had some consequences on how we approached developing Node-RED. It meant that we weren’t adding features speculatively – everything we did was in reaction to a real need to do something. We were also limited in how much time we could spend on it during the day – evenings and weekends were much more the norm.

As we discussed how to push it to a wider audience we knew that open sourcing it was the only route we wanted to take. The alternative was for it to remain a proprietary piece of code that would likely sit on a shelf and only get used by us – which wasn’t really an option.

I spoke at MonkiGras a couple of years ago about our experiences going through the open-sourcing process at IBM. It was a straightforward process and very much reflected a growing attitude of being open by default – something that has continued to flourish and become an important part of our culture.

Being an open source project has been absolutely instrumental to the success we’ve seen, but it was never a guarantee of success. A ‘build it and they will come’ approach may have worked in Field of Dreams, but with any brand new open source project, it takes hard work to spread the word and get people engaged with it.

It also takes deliberate attention to develop in the open. Discussions that would previously have happened over a coffee in front of a whiteboard need to happen in an open forum. The long list of ideas written on sticky notes need to be visible to the community. Communication is key and I think this is an area we can continue to improve on.

Low Code application development

Node-RED embodies a Low Code style of application development; where developers can quickly create meaningful applications without having to write reams of code. The term Low Code was coined by Forrester Research in a report published in 2014 – but it clearly embodies a style of development that goes back further than that.

There are a number of benefits to Low Code application development, all of which we’ve seen first hand with Node-RED.

It reduces the time taken to create a working application

This allows the real value to be realised much quicker than with traditional development models.

It is accessible to a wide range of developers and non-developers

Above all else, this is one of the most important benefits we’ve seen. Anyone who understands a domain-specific problem, such as a business analyst, a linguist or a building engineer, will know the discrete steps needed to solve it. Node-RED gives them the tools to express those steps within a flow and build the solution for themselves.

The visual nature helps users to see their application

“Show, don’t tell” is a powerful concept. We often see Node-RED get used to demo capabilities of APIs, such as the Watson cognitive services. It’s so effective because the visualisation of your application logic shows the art of the possible without having to explain every semi-colon, bracket and brace. Not everyone thinks in lines of code; the visual representation of application logic is much more relatable.

This is all evident in the way Node-RED is often used as part of the code patterns my colleagues produce on IBM Code.

Low Code? But I *want* to write code

Low Code platforms may open up application development to a wider audience of developers, but they still have their critics in those who prefer to be able to tinker with the underlying code.

This is where the open by default approach of Node-RED brings us an advantage. Node-RED isn’t a closed platform that acts entirely as a black-box. Anyone is able to look under the covers and see what’s going on, to provide feedback or to suggest changes.

If someone finds a node that doesn’t do quite what they need, they can easily work with the author to add the desired features, or choose to create their own node.

I think one of the most important things we did in the project at the very start was to make it possible for anyone to self-publish their own nodes. We chose not to become gate-keepers to what the community could add to Node-RED. The fact there are over 1300 3rd party nodes today, a number that climbs steadily, is testament to that decision.

Getting to the next million

As we look to the future of the project, there is a lot still to come. The roadmap to 1.0 opens up so much potential for new uses of Node-RED that we’re all eager to get there as soon as we can. We continue to see more companies adopt Node-RED as part of their own developer experience. As the user community continue to grow, our real challenge is to grow the contributor community – helping drive the project forward.

I’ve talked about Low Code in this post and not mentioned IoT once. We are still, and will remain, IoT at heart. But there’s a much broader framing of the project to consider beyond IoT. I have more personal flows doing simple web automation tasks than I do anything related to IoT. There’s a huge developer audience to tap into who may otherwise be put off by our focus on IoT. How that framing takes shape is something we need to think carefully about.

Ultimately, Node-RED is an open source, low-code, event-driven, flow-based programming environment. It has a great community behind it and new users coming to it every day. I’m sure we’ll get to the next million much quicker than the first.

5 Jan 2018

hopefully it won’t be another 6 weeks before I write again

Nope. That didn’t really work out, did it? Let’s not kid ourselves, I’m not about to recap the last 20 weeks. For all sorts of reasons, it’s taken all my energy to keep the various plates spinning, with little room for much else.

A lot of my time has been spent heads down trying to make progress on the Node-RED roadmap we published in the summer. The content of the roadmap is good, but the timescales I set out for it were not. I knew they were ambitious and set myself and the community a challenge – but ultimately the weight of delivering the roadmap has fallen on my shoulders. And I’ve been feeling that weight heavily for a while.

One of my goals for last year was to improve the sustainability of the project; to get to the point where I could go do something else and the project would continue to progress. The last few months have shown that, despite some positive signs, it’s not there yet. There are times when I feel frustrated at having to churn out code as it leaves very little time for doing all the other things I could/should be doing. But I also recognise that getting the roadmap done will unlock so much potential in the project.

And this is one of my problems; having invested so much time, energy and emotion in the project over the last five years, when I talk to others about my goals and aspirations, I usually end up talking about them in relation to the project. I think I’m okay with that in the short term, but I need to remind myself that I’m not the project and the project isn’t me. I need to remember it’s okay to be selfish and that I’m allowed to have goals that don’t relate to Node-RED.

So here we are with 2018.

With all that said, my focus remains getting Node-RED to be a sustainable project and getting the roadmap delivered. Beyond that… time will tell.