The first part I’ll cover is the scheduler – Node-RED. Many developers may not be familiar with it, but for more on why I think it’s a good fit, see the previous part of the series.

The best place to get started is the Getting Started part of the Node-RED site. Node-RED, as its name implies, is a NodeJS application. This does not mean you need in-depth knowledge of NodeJS or JavaScript development. In some circumstances you don’t even need NodeJS installed wherever you’re running Node-RED, though I do have it installed on my PC.

If you prefer IBM Cloud (formerly Bluemix) as a deployment location, instructions are provided for how to do that. Using this approach stores the flows in a Cloudant database on IBM Cloud. Of course, for an on-premises installation alongside Domino, as John Jardin mentioned to me, it would be feasible to store the flows in an NSF, though that’s beyond the scope of this series.

If you prefer on premises, there are generic installation instructions as well as instructions for Windows. My personal preference is Docker. Docker is easy to install, easy to deploy applications to, and is fast becoming a standard approach for DevOps – so much so that support for Domino on Docker was moved up the priority list. If you haven’t used Docker, now is the time: it’s supported on Windows 10 Professional, Mac and Linux, and Docker Community Edition is free.

Once Docker is installed and running (the “whale with containers” icon will be in the task bar) you are ready to get started. There is no GUI, so interaction is via the command line. My preference (thanks Oliver Busse) is Cmder, a nice console emulator for Windows. The installation instructions for Node-RED on Docker pretty much cover everything. There are two points to bear in mind:

  1. Docker apps run self-contained. If you’ve used VM software like Oracle VirtualBox, it’s the same kind of thing, but at application level rather than OS level. “localhost” inside your application (e.g. Node-RED on Docker) means the container itself, not the PC or server running Docker. I’ll show a tool to manage that shortly.
  2. When you start up an app installed on your host PC or server, you’re starting up that installation. Each time you run a new container from a Docker image, you’re starting a clean, fresh installation of that image. So more likely than not you’re going to want to persist some data, to ensure your flows and additional nodes survive beyond the container. That’s not difficult, but worth remembering.

So the command to start up Node-RED in Docker with persisted data is:

docker run -it -p 1880:1880 -v C:/MyFolder/node-red-data:/data --name domino-node-red nodered/node-red-docker

Running this will deploy and run a Node-RED container on Docker. The “-p” option maps port 1880 on the host to port 1880 inside the container. The “-v” option mounts the host folder defined up to the final colon; “:/data” means that folder is used as Node-RED’s data directory. The “--name” option assigns the container a name. You can have multiple instances of Node-RED (containers) running under various names, but each container name must be unique within the current instance of Docker. So I could use the same container name on multiple PCs / servers, but if I try to use this name again on the same Docker host, it’s going to fail. The final part tells Docker which image to run – the name of the standard Node-RED image is “nodered/node-red-docker”. If you create a custom Docker image for a modified Node-RED installation, you would give it a different name and use that name here.
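Because the container is named and its data directory is mounted from the host, you can stop and restart it without losing your flows. A quick sketch of the lifecycle commands, assuming the container name domino-node-red from the command above:

```shell
# Stop the running container; the image and the mounted data folder are untouched
docker stop domino-node-red

# Start the same container again; flows saved under C:/MyFolder/node-red-data survive
docker start domino-node-red

# Follow the Node-RED log output to confirm it has started
docker logs -f domino-node-red
```

Note the distinction: `docker run` creates a new container from an image, while `docker start` restarts an existing one.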

I mentioned that the name needs to be unique. docker ps -a will list all Docker containers deployed.
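If the name is already taken by an old, stopped container, you can check for it and free it up – a sketch, again assuming the domino-node-red name used earlier:

```shell
# List all containers, running or stopped, with their names and status
docker ps -a

# Remove a stopped container so its name can be reused
# (the data folder mounted from the host is NOT deleted)
docker rm domino-node-red
```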

So once the container is running – here mapped to port 1880 – you can access Node-RED at http://localhost:1880. In the next part we’ll cover setting up a flow to trigger an XAgent.
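If you want to confirm Node-RED is reachable before opening a browser, a quick command-line check (assuming the port mapping above):

```shell
# Request just the HTTP response headers from the Node-RED editor
curl -I http://localhost:1880
```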
