5 Jun 2018

In the first part of this series I showed how to create a deployment pipeline from Node-RED running locally to Node-RED running in IBM Cloud.

It got the basic pieces into place to let you deploy a version controlled Node-RED application to the cloud. The next task is to connect some other IBM Cloud services to the application.

IBM Cloud-aware nodes

The existing Node-RED boilerplate comes with some extra nodes that are IBM Cloud-aware. They are able to automatically detect available instances of their respective services using the VCAP_SERVICES environment variable that Cloud Foundry provides.
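
To give a rough feel for what that means, here is a minimal sketch of reading VCAP_SERVICES in plain Node.js – the cloudantNoSQLDB label and the shape of the credentials object are assumptions based on a typical Cloudant entry, so check your own VCAP_SERVICES for the exact keys:

// Illustrative only: roughly how a Cloud-aware node can discover service instances.
// Assumes the Cloudant service appears under the usual cloudantNoSQLDB label.
var services = JSON.parse(process.env.VCAP_SERVICES || "{}");
var cloudantInstances = services.cloudantNoSQLDB || [];
cloudantInstances.forEach(function(instance) {
    // instance.name is the instance name shown in the node's edit dialog;
    // instance.credentials holds the host, username, password and url
    console.log("Found Cloudant instance:", instance.name);
});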

One such collection of nodes is for the Cloudant database service, which we’re going to add to our Node-RED application.

The challenge is how to develop against those nodes when running locally – outside of the IBM Cloud environment.


Setting up Cloudant

Create a Cloudant service instance

Open up the IBM Cloud catalog and select the ‘Cloudant NoSQL DB’ service. Create a new instance, making sure you select the same region as your Node-RED application.

Bind Cloudant to your Node-RED application

Go to the dashboard page for your Node-RED application and select the ‘Connections’ tab. Find your newly created Cloudant service in the list and click ‘connect’.

It will prompt you to restage the application which will take a couple of minutes to complete.

Once that’s done, go back to the ‘Runtime’ tab on the IBM Cloud dashboard and find the environment variables section. You will see a section for VCAP_SERVICES – this is the environment variable the platform uses to pass the application all of the details it needs to access the connected services. You should see an entry for your newly created Cloudant instance – if you don’t, make sure the restage has completed and reload the page.

Beneath the credentials is an ‘export’ button – clicking that will download a copy to a file called <your-app-name>_vcap.json.

Copy that file into your Node-RED user directory, ~/.node-red – do not put it under version control.

Edit your local settings.js file – this is the one in ~/.node-red not the one in your project directory.

Add the following just above the module.exports line and then restart Node-RED. Make sure to replace <your-app-name>_vcap.json with the actual name of the file you downloaded.

var fs = require("fs");
var path = require("path");

// Load and export IBM Cloud service credentials
process.env.VCAP_SERVICES = fs.readFileSync(path.join(__dirname,"<your-app-name>_vcap.json"));
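
If you want to confirm the credentials have loaded, a quick, throwaway sanity check is to log the service labels found in the environment variable – purely illustrative, and safe to remove once you’ve seen it work:

// Optional sanity check: list the service labels just loaded into VCAP_SERVICES
console.log("VCAP_SERVICES labels:", Object.keys(JSON.parse(process.env.VCAP_SERVICES)));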

Your local Node-RED now has access to your service credentials in the same way as your Node-RED in IBM Cloud does.

Install the IBM Cloud-enabled Cloudant nodes

Open up the Palette Manager from the drop-down menu in Node-RED. Go to the ‘Install’ tab, search for node-red-node-cf-cloudant and click install.

Once installed, you’ll have a new pair of Cloudant nodes in the ‘storage’ section of the palette. Drag one into your workspace and double-click to edit it. The ‘Service’ property should have the name of your Cloudant service listed. If it doesn’t, check you’ve followed the steps to get your VCAP_SERVICES set up correctly.

Close the edit dialog but do not delete the node – we’ll come back to this a bit later.

Add the Cloudant nodes to the project

Having installed the nodes locally, we need to add them to our project’s package.json file so they also get installed when deploying to the cloud. We can do this within Node-RED by going to the ‘information’ sidebar tab and clicking the button next to the project name. This opens up the Project Settings dialog.

Go to the ‘Dependencies’ tab where you’ll see a list of the modules our project depends on. This is a combination of modules already listed in package.json and modules which provide nodes we have in our flow. At this point you should have two entries: node-red and node-red-node-cf-cloudant.

Ignore the offer to remove node-red from the project as we need that, but do click the ‘add to project’ button next to the Cloudant module.

Commit changes

If you switch back to the ‘history’ tab you should now have two entries in the ‘Local files’ section – manifest.yml and package.json. If you click on either filename it will show you a diff of what has changed in the file. Check the changes look correct then click the ‘+ all’ button to prepare both files for committing and then commit them. Switch to the ‘Commit History’ tab and push the changes up to GitHub.

Wait for the Travis build to redeploy your application and then reload it in your browser. You should now have the Cloudant nodes available in the palette and, as before, when you add one to your workspace and edit it, your Cloudant service will be selected.

Separating development and production

At this point, an application built locally will use the same Cloudant instance as the one running in IBM Cloud.

If we consider the local Node-RED as a development environment and the IBM Cloud instance as the production environment, then they really should use separate instances.

This can be achieved by creating a second Cloudant instance to treat as the development instance. Rather than connect it to your Node-RED application, you can generate a set of credentials from the instance dashboard page.

Update the <your-app-name>_vcap.json file with the new credentials and, after restarting Node-RED, your local instance will be accessing the separate development database.
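
If you find yourself switching between the development and production credentials regularly, one possible refinement – purely a sketch, and the NODE_RED_VCAP_FILE variable name is just an example – is to let an environment variable choose which credentials file the snippet in your local settings.js loads:

var fs = require("fs");
var path = require("path");

// Illustrative variation on the earlier snippet: pick the credentials file
// from an (example) NODE_RED_VCAP_FILE environment variable, falling back
// to the development copy by default.
var vcapFile = process.env.NODE_RED_VCAP_FILE || "<your-app-name>_vcap.json";
process.env.VCAP_SERVICES = fs.readFileSync(path.join(__dirname, vcapFile));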


Next Steps

This post has shown how to connect IBM Cloud services to your Node-RED application with separate development and production instances. It’s another important step towards creating production-ready applications with Node-RED in IBM Cloud.

In the next part of this series, we’ll look at how to start building a simple application using this setup. That’s what I said in the previous post, but I really mean it this time.

1 Jun 2018

Node-RED has been available in the IBM Cloud catalog since the early days of what was then known as IBM Bluemix. Provided as a boilerplate application, it gives a really quick and easy way to get started with both Node-RED and the wide range of services available in the catalog.

The boilerplate is optimised for ease of use and getting started. Applications running in the Cloud Foundry part of IBM Cloud cannot treat their local file system as persistent storage – any time the application is restarted the file system is wiped back to its original state. This is why the Node-RED boilerplate comes with an instance of the Cloudant database service; giving it somewhere to store its flows.

It also means that any nodes that are installed using the Palette Manager in the editor have to be dynamically reinstalled whenever the application restarts. This is not ideal as it takes longer to restart, exposes the application to random network/npm failures and also risks memory issues as Node-RED tries to reinstall multiple things at once.

The better solution is to enable the Continuous Deployment feature and edit the application’s package.json file to explicitly add any additional modules. That’s also good as it means your application is version controlled and can be easily restored.

Except that isn’t entirely true. The underlying Node-RED application might be version controlled, but the most valuable part, the flows, are still held in Cloudant.

In an ideal world, you’d have all of your application assets under a single source of version control. It should be possible to deploy that application to separate development, test and production environments. It should all fit in with more traditional developer workflows.

This is the first in a series of posts that will show how you can create just such a workflow.

You’ll be able to develop a Node-RED application on a local machine, pushing changes to a GitHub repository and have them deploy automatically to IBM Cloud using Travis CI.


Getting started

Before we begin, you’ll need:

  • A GitHub account – it’s free!
  • A Travis CI account – sign-up using your GitHub account – it’s free!
  • An IBM Cloud account – sign-up for a Lite Account; it’s free, doesn’t require a credit card, never expires and gives you enough resources to get started

You’ll also need Node-RED installed locally.


Create a new Node-RED project

Node-RED introduced the Projects feature in the 0.18 release. It allows you to manage your flows in a git repository along with all the other pieces you need to create a redistributable Node-RED application.

Enabling the Node-RED projects feature

In the 0.18 release, the Projects feature needs to be enabled. Edit your settings.js file and update the editorTheme setting to change the projects.enabled flag to true. If you don’t have an editorTheme setting, add one in:

    editorTheme: {
        projects: {
            enabled: true
        }
    }

You can see how it should look in our default settings file – although the default is set to disable the feature, so if you copy it, make sure you change it to true.

When you restart Node-RED, you’ll be shown a welcome screen that introduces the projects feature.

Create a new GitHub repository

To create our Node-RED project, we’re going to first create a new repository on GitHub for the project.

Login to GitHub, click the New repository option under the + menu in the header.

Give your repository a name and leave all of the options as they are – in particular, do not tick the Initialize this repository with a README option. Then click ‘Create repository’.

On the repository page, copy the git url to your clipboard as we’ll need it in the next step.

Clone the repository into a new project

Back in Node-RED, select the option to create a new project by cloning a repository. When prompted, paste in the git url from the previous step.

Once you create the project, you’ll get a message saying it’s empty and it will offer to create a default set of project files – an offer you should accept.

It will then prompt you for the name of the flow file to use – we’ll use flow.json. Next it will ask about encrypting your flow credentials – something you must enable as you will be publishing your flow to GitHub. Provide an encryption key and make a note of it for later on.

With that done, you’ll now have your project ready to start wiring up your flows.


Modify the project to run on IBM Cloud

In order to deploy your Node-RED project as a Cloud Foundry application on IBM Cloud we need to add some extra files and update an existing one. These changes need to be made outside of Node-RED in a text editor of your choice.

First we need to find the project files. Node-RED stores them in a directory under the runtime user directory. By default, that will be ~/.node-red/projects/<name-of-project>.

Update package.json

The project already has a default package.json file that needs some updates:

  • add node-red in the dependencies section – this will ensure Node-RED gets installed when the application is deployed.
  • add a scripts section to define a start command – this is how IBM Cloud will run the application. We’ll look at this in a bit more detail in a moment.
  • add an engines section to define what version of Node.js we want to run with. You could leave this out and just get whatever the current Node.js buildpack defaults to, but it is better to be explicit.

{
    "name": "node-red-demo-1",
    "description": "A Node-RED Project",
    "version": "0.0.1",
    "dependencies": {
        "node-red": "0.18.*"
    },
    "node-red": {
        "settings": {
            "flowFile": "flow.json",
            "credentialsFile": "flow_cred.json"
        }
    },
    "scripts": {
        "start": "node --max-old-space-size=160 ./node_modules/node-red/red.js --userDir . --settings ./settings.js flow.json"
    },
    "engines": {
        "node": "8.x"
    }
}

Let’s take a closer look at the start command:

node 
    --max-old-space-size=160         (1)
    ./node_modules/node-red/red.js   (2)
    --userDir .                      (3)
    --settings ./settings.js         (4)
    flow.json                        (5)
  1. As we’re running with a fixed memory limit, this argument is used to tell node when it should start garbage collecting.
  2. With node-red listed as an npm dependency of the project, we know exactly where it will get installed and where the red.js main entry point is.
  3. We want Node-RED to use the current directory as its user directory.
  4. Just to be sure, we point at the settings file it should use – something we’ll add in the next step.
  5. Finally we specify the flow file to use.

With the current version of Node-RED, 0.18, you should restart Node-RED after editing this file – it doesn’t know the file has changed and may overwrite any changes you’ve made if you modify the project within the editor later.

Add a settings file

We need a settings file to configure Node-RED for the IBM Cloud environment. Create a file called settings.js in the project directory and copy in the following:

module.exports = {
    uiPort: process.env.PORT,
    credentialSecret: process.env.NODE_RED_CREDENTIAL_SECRET,
    adminAuth: {
        type: "credentials",
        users: [],
        default: {
            permissions: "read"
        }
    }
}

This tells Node-RED to listen on the port IBM Cloud gives us from the PORT environment variable. It also sets the key used to decrypt the credentials – this time coming from the NODE_RED_CREDENTIAL_SECRET environment variable. That lets us provide the key to the application without having to hardcode it in the version controlled files. We’ll sort that out in a later step of this post.

Finally it configures the editor to be in read-only mode. In a future post we’ll turn off the editor entirely, but leaving it running is useful at this stage to help verify your application is running.
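
For reference, turning the editor off altogether only needs one extra change – the sketch below is illustrative and assumes you no longer want the editor or admin API served at all by the deployed instance:

// Illustrative only: a settings.js that serves no editor or admin API.
module.exports = {
    uiPort: process.env.PORT,
    credentialSecret: process.env.NODE_RED_CREDENTIAL_SECRET,
    httpAdminRoot: false // disables the editor and admin API entirely
}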

Add a manifest file

The next file we need is the manifest.yml file used to deploy the application. Here’s a minimal file to start with. Make sure you change the name field to something unique for your project – nr-demo is already used and will cause your deploy to fail if you don’t change it.

applications:
- name: nr-demo
  memory: 256MB
  instances: 1

Configure Travis

Next we’re going to get Travis to watch our GitHub repository and trigger a build whenever we push changes to it.

Enable Travis for your repository

Sign in to Travis and connect it to your GitHub account. Go to your profile page and enable Travis for your new repository. You may have to click the ‘Sync account’ button for it to show up.

Add a .travis.yml file

The project needs a file called .travis.yml to tell Travis what to do when it runs a build. A build consists of three phases: install, script and deploy. For the purposes of this exercise, we’re going to skip the install and script phases – they can be used in the future to run automated tests against the application.

For the deploy phase we can use an integration Travis already has with IBM Cloud – albeit under the old brand name: Bluemix CloudFoundry.

With all that in mind, copy the following into your .travis.yml file:

language: node_js
node_js:
    - "node"
install: true
script: echo "Skipping build"
deploy:
  edge: true
  provider: bluemixcloudfoundry
  username: apikey
  organization: <your-organization>
  space: dev
  manifest: manifest.yml

You’ll need to set the organization and space fields to match your own account details. The username must be set to apikey and the next step is to get a password we can use.

Generate an IBM Cloud API key

We need to generate an API key in our IBM Cloud account which we can use for the Travis deploy.

Log in to the IBM Cloud dashboard and select Manage -> Security -> Platform API Keys from the menu in the header.

Click the Create button, enter a sensible name for the key and click Create. The key will be generated and in the next dialog it will let you copy it to your clipboard. Make sure you copy it – once you close the dialog you will not be able to see it again and you’ll need to generate a new one.

Add the encrypted api key to your .travis.yml

Rather than paste this key into your .travis.yml file directly, Travis provides a way to encrypt the key first so it can be added safely.

To do this, you must first install the Travis CLI. How exactly you do that will depend on your OS, whether you have ruby installed, whether your PATH is set up correctly, and lots of other things that may trip you up along the way. Suffice to say, if you have ruby installed, it should be a simple case of running:

gem install travis

You can then run:

travis encrypt --add deploy.password

It will prompt you to paste in your api key, hit enter then ctrl-d. If you look in your .travis.yml file you should see a password/secure section added under the deploy section.

If you get errors such as travis: command not found then you may need to update your PATH to include wherever gem installed the package.


Commit and push changes

At this point, all of the necessary changes have been made to the project files. You can commit the changes and push them up to GitHub. You can do this either via the git command-line, or from within Node-RED.

Committing via the command-line

From within the project directory, ~/.node-red/projects/<name-of-project>, run the commands:

git add package.json .travis.yml manifest.yml settings.js
git commit -m "Update project files"
git push

Committing via Node-RED

Within Node-RED, open up the history sidebar tab. You should see the four changed files in the ‘Local files’ section. If you don’t, click the refresh button to update the view. When you hover over each file a + button will appear on the right – click that button to move the file down to the ‘Changes to commit’ section.

Once all four are staged, click the ‘commit’ button, enter a commit message and confirm.

Switch to the Commit History section and you should see two commits in the list – the initial Create project files commit and the commit you’ve just done.

Click the remote button – the one with up/down arrows in – and click push in the dialog. This will send the changes up to GitHub.


Watch your build

If you go back to Travis, you should see the commit trigger a new build against your repository. If all is well, two to three minutes later the build should pass and you should be able to open http://<name-of-app>.mybluemix.net and be welcomed by the Node-RED editor.

If it fails, check the build log to see what went wrong.


Tell your application your credential secret

Now that your application has been created on the IBM Cloud, one final step is to tell your application the key it should use to decrypt your credentials file.

Go to the IBM Cloud dashboard page for your newly deployed application. On the ‘runtime’ page, go to the ‘environment variable’ section and add a variable called NODE_RED_CREDENTIAL_SECRET set to whatever credential key you set when you created your Node-RED project right at the start of this whole exercise.

Click the ‘save’ button and your application will be restarted, now with this variable set.


Next steps

If you’ve got this far, well done. You now have a pipeline going from Node-RED on your local machine to Node-RED running in the IBM Cloud. Whenever you make changes locally, commit and push them to GitHub, your application on IBM Cloud will be restaged thanks to Travis.

In the next part of this series, we’ll look at how to start building a simple application using this setup.

16 Mar 2018

In our recent project update blog post, I marked the fact Node-RED has recently hit 1 million downloads. It’s a big milestone to reach for the project and a good opportunity to reflect back on how we’ve got here.

There are a couple of threads I want to explore.

  • Open by default
  • Low Code application development

Open by default

When we started Node-RED, it was a tool to help us do our day job, but our day job wasn’t to spend time writing development tools – it was to build real solutions for clients as part of our Emerging Technologies group. That had some consequences on how we approached developing Node-RED. It meant that we weren’t adding features speculatively – everything we did was in reaction to a real need to do something. We were also limited in how much time we could spend on it during the day – evenings and weekends were much more the norm.

As we discussed how to push it to a wider audience we knew that open sourcing it was the only route we wanted to take. The alternative was for it to remain a proprietary piece of code that would likely sit on a shelf and only get used by us – which wasn’t really an option.

I spoke at MonkiGras a couple of years ago about our experiences going through the open-sourcing process at IBM. It was a straightforward process and very much reflected a growing attitude of being open by default – something that has continued to flourish and become an important part of our culture.

Being an open source project has been absolutely instrumental to the success we’ve seen, but it was never a guarantee of success. A ‘build it and they will come’ approach may have worked in Field of Dreams, but with any brand new open source project, it takes hard work to spread the word and get people engaged with it.

It also takes deliberate attention to develop in the open. Discussions that would previously have happened over a coffee in front of a whiteboard need to happen in an open forum. The long list of ideas written on sticky notes needs to be visible to the community. Communication is key and I think this is an area we can continue to improve on.

Low Code application development

Node-RED embodies a Low Code style of application development, where developers can quickly create meaningful applications without having to write reams of code. The term Low Code was coined by Forrester Research in a report published in 2014 – but it clearly describes a style of development that goes back further than that.

There are a number of benefits to Low Code application development, all of which we’ve seen first hand with Node-RED.

It reduces the time taken to create a working application

This allows the real value to be realised much quicker than with traditional development models.

It is accessible to a wide range of developers and non-developers

Above all else, this is one of the most important benefits we’ve seen. Anyone who understands a domain-specific problem, such as a business analyst, a linguist or a building engineer, will know the discrete steps needed to solve it. Node-RED gives them the tools to express those steps within a flow and build the solution for themselves.

The visual nature helps users to see their application

“Show, don’t tell” is a powerful concept. We often see Node-RED get used to demo capabilities of APIs, such as the Watson cognitive services. It’s so effective because the visualisation of your application logic shows the art of the possible without having to explain every semi-colon, bracket and brace. Not everyone thinks in lines of code; the visual representation of application logic is much more relatable.

This is all evident in the way Node-RED is often used as part of the code patterns my colleagues produce on IBM Code.

Low Code? But I *want* to write code

Low Code platforms may open up application development to a wider audience of developers, but they still have their critics among those who prefer to be able to tinker with the underlying code.

This is where the open by default approach of Node-RED brings us an advantage. Node-RED isn’t a closed platform that acts entirely as a black-box. Anyone is able to look under the covers and see what’s going on, to provide feedback or to suggest changes.

If someone finds a node that doesn’t do quite what they need, they can easily work with the author to add the desired features, or choose to create their own node.

I think one of the most important things we did in the project at the very start was to make it possible for anyone to self-publish their own nodes. We chose not to become gate-keepers to what the community could add to Node-RED. The fact there are over 1300 3rd party nodes today, a number that climbs steadily, is testament to that decision.

Getting to the next million

As we look to the future of the project, there is a lot still to come. The roadmap to 1.0 opens up so much potential for new uses of Node-RED that we’re all eager to get there as soon as we can. We continue to see more companies adopt Node-RED as part of their own developer experience. As the user community continues to grow, our real challenge is to grow the contributor community – helping drive the project forward.

I’ve talked about Low Code in this post and not mentioned IoT once. We are still, and will remain, IoT at heart. But there’s a much broader framing of the project to consider beyond IoT. I have more personal flows doing simple web automation tasks than I do anything related to IoT. There’s a huge developer audience to tap into who may otherwise be put off by our focus on IoT. How that framing takes shape is something we need to think carefully about.

Ultimately, Node-RED is an open source, low-code, event-driven, flow-based programming environment. It has a great community behind it and new users coming to it every day. I’m sure we’ll get to the next million much quicker than the first.

5 Jan 2018

hopefully it won’t be another 6 weeks before I write again

Nope. That didn’t really work out, did it? Let’s not kid ourselves, I’m not about to recap the last 20 weeks. For all sorts of reasons, it’s taken all my energy to keep the various plates spinning with little room for much else.

A lot of my time has been spent heads down trying to make progress on the Node-RED roadmap we published in the summer. The content of the roadmap is good, but the timescales I set out for it were not. I knew they were ambitious and set myself and the community a challenge – but ultimately the weight of delivering the roadmap has fallen on my shoulders. And I’ve been feeling that weight heavily for a while.

One of my goals for last year was to improve the sustainability of the project; to get to the point where I could go do something else and the project would continue to progress. The last few months have shown that, despite some positive signs, it’s not there yet. There are times when I feel frustrated at having to churn out code as it leaves very little time for doing all the other things I could/should be doing. But I also recognise that getting the roadmap done will unlock so much potential in the project.

And this is one of my problems; having invested so much time, energy and emotion in the project over the last five years, when I talk to others about my goals and aspirations, I usually end up talking about them in relation to the project. I think I’m okay with that in the short term, but I need to remind myself that I’m not the project and the project isn’t me. I need to remember it’s okay to be selfish and that I’m allowed to have goals that don’t relate to Node-RED.

So here we are with 2018.

With all that said, my focus remains getting Node-RED to be a sustainable project and getting the roadmap delivered. Beyond that… time will tell.

11 Aug 2017

TJBot is an open source DIY kit from IBM Watson to create a Raspberry Pi powered robot backed by the IBM Watson cognitive services. First published late last year, the project provides all the design files needed to create the body of the robot as either laser-cut cardboard or 3D printed parts.

It includes space for an RGB LED on top of its head, an arm driven by a servo that can wave, a microphone for capturing voice commands, a speaker so it can talk back and a camera to capture images.

There is an ever-growing set of tutorials and how-tos for building the bot and getting it hooked up to the IBM Watson services. All in all, it’s a lot of fun to put together. Having played with one at Node Summit a couple of weeks ago, I finally decided to get one of my own. So this week I’ve mostly been monopolising the 3D printer in the Emerging Technologies lab.


After about 20 hours of printing, my TJBot was coming together nicely, but I knew I wanted to customise mine a bit. In part this was for me to figure out the whole process of designing something in Blender and getting it printed, but also because I wanted my TJBot to be a bit different.

First up was a second arm to give him a more balanced appearance. In order to keep it simple, I decided to make it a static arm rather than try to fit in a second servo. I took the design of the existing arm, mirrored it, removed the holes for the servo and extended it to clip onto the TJBot jaw at a nice angle.

The second customisation was a pair of glasses because, well, why not?

I designed them with a peg on their bridge which would push into a hole drilled into TJBot’s head. I also created a pair of ‘ears’ to push out from the inside of the head for the arms of the glasses to hook onto. I decided to do this rather than remodel the TJBot head piece because it was quicker to carefully drill three 6mm holes in the head than it was to wait another 12 hours for a new head to print.

In fact, as I write this, I’ve only drilled the hole between the eyes as there appears to be enough friction for the glasses to hold with just that one fixing point.

There are some other customisations I’d like to play with in the future; this was enough for me to remember how to drive Blender without crying too much.

Why call him ‘Bleep’? Because that’s what happens when you let your 3 and 7 year old kids come up with a name.

I’ve created a repository on GitHub with the designs of the custom parts and where I’ll share any useful code and how-tos I come up with.