11 Aug 2017

TJBot is an open source DIY kit from IBM Watson to create a Raspberry Pi powered robot backed by the IBM Watson cognitive services. First published late last year, the project provides all the design files needed to create the body of the robot as either laser-cut cardboard or 3D printed parts.

It includes space for an RGB LED on top of its head, an arm driven by a servo that can wave, a microphone for capturing voice commands, a speaker so it can talk back and a camera to capture images.

There is an ever-growing set of tutorials and how-tos for building the bot and getting it hooked up to the IBM Watson services. All in all, it's a lot of fun to put together. Having played with one at Node Summit a couple of weeks ago, I finally decided to get one of my own. So this week I've mostly been monopolising the 3D printer in the Emerging Technologies lab.


After about 20 hours of printing, my TJBot was coming together nicely, but I knew I wanted to customise mine a bit. In part this was for me to figure out the whole process of designing something in Blender and getting it printed, but also because I wanted my TJBot to be a bit different.

First up was a second arm to give him a more balanced appearance. To keep it simple, I decided to make it a static arm rather than try to fit in a second servo. I took the design of the existing arm, mirrored it, removed the holes for the servo and extended it to clip onto the TJBot jaw at a nice angle.

The second customisation was a pair of glasses because, well, why not?

I designed them with a peg on their bridge which would push into a hole drilled into TJBot’s head. I also created a pair of ‘ears’ to push out from the inside of the head for the arms of the glasses to hook onto. I decided to do this rather than remodel the TJBot head piece because it was quicker to carefully drill three 6mm holes in the head than it was to wait another 12 hours for a new head to print.

In fact, as I write this, I’ve only drilled the hole between the eyes as there appears to be enough friction for the glasses to hold with just that one fixing point.

There are some other customisations I’d like to play with in the future, but this was enough for me to remember how to drive Blender without crying too much.

Why call him ‘Bleep’? Because that’s what happens when you let your 3- and 7-year-old kids come up with a name.

I’ve created a repository on GitHub with the designs of the custom parts and where I’ll share any useful code and how-tos I come up with.

26 Sep 2013

Node-RED is the visual tool for wiring the Internet of Things that I have been working on through the year. What started as an educational exercise to get to grips with technologies such as Node.js and d3, soon turned into a tool that provided real value in client engagements.

One of our goals all along was to open-source the tool; to build a community of users and contributors. Following a whirlwind pass through the appropriate internal process, we got all the approvals we needed at the start of September and have released Node-RED as an Apache v2 licensed open source project on GitHub.

You can find us here: nodered.org.

Following on from that, I have had the opportunity to speak at a couple of external events.

First up was the Wuthering Bytes technology festival in sunny/rainy Hebden Bridge. I did a completely unprepared, but well received, 5-minute lightning talk at the end of the first day which generated a lot of interest. On the second day, there was a series of workshops, one of which was based around MQTT on Arduino – using my PubSubClient library. The workshop was organised by a contact of mine from the IoTLondon community, who was happy to see me there to help out. We ended up using Node-RED in the workshop as well, as it makes it easy to visualise topics and move messages around. It was a great feeling to glance over the room and see people I’d never met before using Node-RED.

The second event was last night, at the monthly London Node.js User Group (LNUG) Meetup. This time I had a 30-minute slot to fill, so went prepared with a presentation and demo. Again, it was well received and generated lots of good comments on Twitter and afterwards in person. The talk was recorded and should be embedded below – it starts at 29 minutes.

We’ve got a couple more talks lined up so far. Next up is the November meetup of IoTLondon, exact date tbc, and then ThingMonk in December.

22 Jan 2013

One of the parts of the Internet of Things that I feel often gets overlooked is the Things. There seems to be a new IoT gateway, hardware platform or consortium to “standardise” every week.

These are all honourable activities to move things either forward or, in the case of the standardisation consortia, in circles.

But what about the Things? This is the part of IoT that appeals to me the most. Crafting a Thing that will appeal to a wide audience takes so much more than a bit of hardware and software. It needs designing to be an object that people willingly accept into their homes. This means much more than just the physical nature of the Thing; it applies at every level at which it is interacted with.

This was one of the inspirations behind my Orb project. But that, in its current state, has a long way to go. It remains the preserve of those who are prepared to get a soldering iron out and write some code.

I’ve known Alex for a few years now. She gave me my Arduino Ethernet shield back in the early days of Tinker and has been a good friend since. As someone with a design background, working in that space, she gets the Things idea better than most.

Likewise, Adrian proved his IoT credentials long ago with Bubblino. Soon you’ll be able to say he (co-)wrote the book on the subject.

They are working to produce a real Thing for the world to buy. The Good Night Lamp is based on an idea Alex had a few years ago and is conceptually simple: a pair of internet-connected lamps, one big, one small. When the big one is turned on, on comes the small one – wherever it is in the world.

That’s it. Nothing more, nothing less.

Imagine giving the big lamp to a loved one abroad; a wife, a boyfriend or an elderly grandparent. As they come and go, turning the big lamp on and off, the small lamp on the shelf in your living-room follows suit. It gives you a level of ambient intimacy; knowing the loved one is there, without being there.

In a work setting, perhaps it signals you’re available for a Skype chat, without resorting to email/IM.

That’s it. Nothing more. Nothing less.

Good Night Lamp on Kickstarter from Good Night Lamp on Vimeo.

The geek in me screams “but that’s easy to do – just plug an Arduino into a lamp and you’re done”. But that misses the point. This isn’t simply a lamp with an Arduino in it; it’s a Thing that has been designed.

Making Things and shipping them is hard and expensive. To help make it a reality they are currently running a Kickstarter campaign. It’s nearing the end and, as it stands, they’ve got quite a way to go.

If you like the idea, go and back the campaign. Even if they don’t reach their funding target, an unsuccessful Kickstarter project isn’t going to stop Alex and co making this Thing a reality.

15 Jan 2013

I made a thing. I wanted to make lots of things last year, but didn’t. That made me sad. It being a new year and all, it felt like a good time to start making.

Last summer, I was fortunate enough to be invited to BERG’s Little Printer Hack Day. I went along with not only all sorts of ideas to play with, but also a migraine and ongoing waves of nausea throughout the day. Which wasn’t ideal. Despite that, it was a fun day and, having managed not to pass out, I worked with Kass to make the ASCII Meteogram – a weather report from the Norwegian Meteorological Institute and NRK.

A couple of weeks ago, whilst contemplating doing a write-up of the year, I stumbled across the code from the hack day and decided I ought to do something with it. Later the same day, for reasons I don’t recall, I found myself falling down the rabbit-hole of Maze generation algorithms on Wikipedia.


photo courtesy of Alice @ BERG

Not long after, I had a Little Printer publication that serves up a new random maze every day. There isn’t much to it really – just your standard recursive backtracker algorithm in a few lines of JavaScript. As you do.
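For the curious, the core of a recursive backtracker looks something like this. This is a minimal sketch from memory, not the publication’s actual code – the cell structure, helper names and grid representation here are all my own assumptions:

```javascript
// Recursive backtracker, done iteratively with an explicit stack:
// walk to a random unvisited neighbour, knocking down the wall
// between the two cells, and backtrack whenever you hit a dead end.
function generateMaze(width, height) {
  // Each cell tracks which of its four walls are still standing.
  const cells = [];
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      cells.push({ x, y, visited: false, walls: { N: true, S: true, E: true, W: true } });
    }
  }
  const at = (x, y) => cells[y * width + x];
  const opposite = { N: 'S', S: 'N', E: 'W', W: 'E' };
  const moves = { N: [0, -1], S: [0, 1], E: [1, 0], W: [-1, 0] };

  const stack = [at(0, 0)];
  at(0, 0).visited = true;
  while (stack.length > 0) {
    const current = stack[stack.length - 1];
    // Collect the unvisited neighbours of the current cell.
    const neighbours = [];
    for (const [dir, [dx, dy]] of Object.entries(moves)) {
      const nx = current.x + dx, ny = current.y + dy;
      if (nx >= 0 && nx < width && ny >= 0 && ny < height && !at(nx, ny).visited) {
        neighbours.push([dir, at(nx, ny)]);
      }
    }
    if (neighbours.length === 0) {
      stack.pop(); // dead end - backtrack
    } else {
      const [dir, next] = neighbours[Math.floor(Math.random() * neighbours.length)];
      current.walls[dir] = false;           // knock down the wall between
      next.walls[opposite[dir]] = false;    // the two cells, both sides
      next.visited = true;
      stack.push(next);
    }
  }
  return cells;
}
```

Because each wall is only removed when stepping into an unvisited cell, the result is a perfect maze: exactly one path between any two cells, with no loops.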

The code’s up on GitHub. It’s a Django app, so if that’s your thing, feel free to have a play. Otherwise, have a look at the sample, or subscribe your own Little Printer through the BERG Cloud remote.

It feels good to make a thing. To make a thing, ship it and see, as of the moment I’m writing this, 30 people subscribed to receive a daily piece of the thing.

More making this year. More bad puns in blog post titles.

11 Nov 2012

I’ve just tagged a new release of the Arduino Client for MQTT – v1.9.1. This release includes an API change that will break existing sketches, something I’m very conscious of doing – particularly as the last release had such changes as well. But ultimately I decided this one was needed.

Previously, an instance of the client would be created with a call such as:

PubSubClient client(server, 1883, callback);

The library would then create an instance of EthernetClient under the covers. This was fine for hardware that used that class, but there are an increasing number of shields that provide their own implementation of the common Client API.

So, now the constructors for the client require an instance of Client to be passed in. For the vast majority of existing sketches, this just means changing the above line to something like:

EthernetClient ethClient;
PubSubClient client(server, 1883, callback, ethClient);

Amongst the other functional changes are the ability to publish a payload stored in PROGMEM, connect with a username/password and also the changes described in the previous post to not fragment MQTT packets over multiple TCP packets.

There is also now a regression test suite for the library. Built using Python and Ino, the suite first verifies that each of the examples compiles cleanly and, if an Arduino is connected, it will upload each sketch in turn and run unit tests against it. So far the focus has been on the scaffolding of the test suite, with only a couple of actual tests in place for the example sketches. I’ve tried to write it in a way that is agnostic of the library it’s testing. There’s more work to do on that side of things, but ultimately it could well be reused by others. I imagine it’ll branch into its own repository when it gets to that point.

I’ve also given the project page a tidy and moved the API docs to their own page.

Finally, I’m considering closing the comments on the project page. With 75 comments already, it isn’t the best medium through which to provide support. I’m not sure pointing to the library’s issue tracker on GitHub is necessarily the right thing either – it’s an issue tracker, not a support forum. Perhaps I should set up a dedicated Google group for the library – or is that overkill?

Feedback, as ever, is welcome.