When I was working with John Jardin on our session at IBM Think, one of the key aspects was testing the REST services I was writing. For that testing I used Postman, a standard REST service client. But the final test I needed to do was a call to a bulk process that marked ToDos as overdue and posted the results back to a REST service endpoint. Postman is just a REST client, not a REST server. But I knew from the video I did on Node-RED for scheduled processes that it was easy to set up a REST endpoint on Node-RED that could be called. As I did that, I had a light-bulb moment and realised that I could create an entire end-to-end test suite in Node-RED.

So when I recently had to build a REST service, this became my preferred approach. It took some time to set up, with some learning along the way about the best approaches to make it reusable. And it’s still not fully refined. But with the increased focus on REST service access to Domino, now seems a good time to write it up.

The Role of The API Gateway

Firstly, it’s worth reinforcing that an API is not just a set of basic CRUD REST services. It should be a gateway: validating access to endpoints, validating content and potentially manipulating outputs. The last of these is why GraphQL is such a powerful API gateway approach. But it’s worth expanding on the first two.

When developing a Notes Client or XPages UI, it’s standard to restrict access to navigation options, views, forms and especially fields based on Reader access, Author access and roles. Your REST API gateway should do the same, and this can be done easily in an XAgent. My REST services in OSGi plugins use ODA, so “authentication” in the traditional sense doesn’t always apply; instead, I tend to apply a variety of much more flexible, application-specific authentication models, depending on my needs. But even these can be expanded. Code can check for an expected header parameter, force authenticated access, check the current user’s role-level access to a database, or verify against content in a Notes document.
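As a minimal sketch of that kind of header check – the header name and expected key here are illustrative assumptions, not from any particular library – the gateway could run something like this before touching any Domino data:

```javascript
// Hypothetical gateway check: reject the request before it reaches
// the data layer if the expected header is missing or wrong.
const EXPECTED_API_KEY = "my-api-key"; // assumption: key held in server config

function authorise(headers) {
    // Node.js lower-cases incoming header names
    const key = headers["x-api-key"];
    if (!key) {
        // no credentials at all → 401 Unauthorized
        return { status: 401, body: { error: "Missing X-Api-Key header" } };
    }
    if (key !== EXPECTED_API_KEY) {
        // credentials supplied but wrong → 403 Forbidden
        return { status: 403, body: { error: "Invalid API key" } };
    }
    return null; // null means the request may proceed
}
```

The same shape extends naturally to the other checks mentioned above: the function can look up the user’s role-level database access or a Notes document instead of a constant.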

Similarly, validation of content is typically tied to UI design, because that’s the interface most often developed for. But any developer who’s worked with Domino for some time has coded LotusScript import agents pulling data from external systems. We’ve all learned from experience not to trust external data, so our import routines do some checking: maybe verifying the number of elements, trying to convert a code to a value, or mapping to keyword documents or Person documents for access. If the validations fail, we process the data partially or not at all, depending on the criticality of the “bad data”. Similarly, a REST API gateway should validate passed data and throw a 400 error if it’s incomplete or bad. We don’t want to accept wrong dates that cause a document to fail to load or break a subsequent GET request for the data. Similarly, we’ll need to manipulate data for output to requests, e.g. converting dates into a valid RFC date format or passing codes instead of values.
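A minimal sketch of that kind of payload validation – the field names and payload shape are assumptions for illustration – might collect errors and let the caller return a 400 if any are found:

```javascript
// Validate an incoming ToDo payload before it gets anywhere near
// the database; return a list of errors (empty list = valid).
function validateToDo(payload) {
    const errors = [];
    if (!payload || typeof payload.title !== "string" || payload.title === "") {
        errors.push("title is required and must be a non-empty string");
    }
    // Date.parse returns NaN for unparseable date strings,
    // so bad dates are rejected up front rather than stored
    if (payload && payload.dueDate !== undefined && isNaN(Date.parse(payload.dueDate))) {
        errors.push("dueDate must be a parseable date");
    }
    return errors; // caller responds 400 with this list if non-empty
}
```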

All of this should live in the API gateway. That could be the Java code behind your XAgent – JSON handling in REST services is a compelling reason for coding these in Java rather than SSJS. It could be in an OSGi plugin. Or, with the rise of Node.js development on Domino, it could be in a Node.js Express app or a set of Node-RED endpoints. If Java is your preference, it could be a Spring Boot or Vert.x application, again separate from the Domino server. But it should be somewhere, and it should not be possible to bypass the gateway by accessing any lower level directly.

Obviously this in itself raises additional challenges. If your API gateway and the underlying database are separate, good DevOps means managing the availability and scalability of both.

Back to Testing

Node-RED can be installed:

  • As a standalone application dependent on having Node.js already installed on your PC.
  • As a Docker container, for which you may want your data to be persisted in a separate volume.
  • As a cloud service on IBM Cloud (and presumably others). For IBM Cloud, for single-user or small development team use, the free option would be fine. With this, the flows are persisted to a Cloudant database.

I’ll be covering more on setup, configuration and security in my session at IBM Think. After that, it’s just a case of setting up the flows. I used separate flow tabs for different test cases, e.g. checking validation and administrative endpoints, particular endpoints under a certain category etc.

Flows were kicked off with an inject node, so I could trigger them manually. I set msg.step to create a cyclical flow process, incrementing msg.step after each test and using a switch node to route to the next test.

I used a subflow to set a base URL (e.g. msg.baseUrl = “https://www.myserver.com/myRestService/v1”) and the headers for the REST service calls (msg.headers), holding the API key and secret. That subflow had parallel function nodes for my different environments – dev, test and production. I connected to the relevant environment and deployed the flows as and when required. This meant I could run identical tests on all environments and quickly verify everything was working; the only things that changed were the base URL and the key/secret. The subflow also had a switch based on a variable (msg.authType), to know whether to pass the correct key and secret or to omit them completely.

Function nodes passed the JSON response into a msg.output object and, before moving on to the next test, pushed it to a debug node. This was just a quick and dirty way of checking each node was working correctly, and it required me to know what to look for – I’m not as comfortable with JavaScript as some. Ideally, a more automated test would parse the response, maybe write it to a flow-level JSON object that got written out to a text file at the end of the run, and use it to decide whether to abort, or to output something easier to skim-read like “test passed” or “***test failed***”.

Sometimes I also required information from previous tests. For example, I had a flow covering all parts of CRUD, and I obviously wanted to be able to check the underlying database; sometimes those tests were on different flow tabs. So I set global-level variables (using global.set() and global.get()).
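To give a flavour of what those function nodes contained, here is a plain-JavaScript sketch of the environment subflow’s dev variant and the step increment. The URL, header names and authType values are illustrative assumptions; in Node-RED each function would simply end with `return msg;`:

```javascript
// Sketch of the environment subflow's function node (dev variant).
// A parallel node exists per environment with its own URL and credentials.
function setDevEnvironment(msg) {
    msg.baseUrl = "https://dev.myserver.com/myRestService/v1"; // assumed URL
    msg.headers = {};
    // msg.authType drives which credentials (if any) are attached,
    // so the same flow can also exercise the unauthenticated error paths
    switch (msg.authType) {
        case "full":
            msg.headers["X-Api-Key"] = "dev-key";       // assumed header names
            msg.headers["X-Api-Secret"] = "dev-secret";
            break;
        case "keyOnly":
            msg.headers["X-Api-Key"] = "dev-key";
            break;
        default:
            // deliberately send no credentials to test the 401 path
            break;
    }
    return msg;
}

// After each test: bump msg.step so a switch node can route the
// message to the next test in the cycle.
function nextStep(msg) {
    msg.step = (msg.step || 0) + 1;
    return msg;
}
```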

This was very much my first attempt and there are obvious improvements, but it allowed me to quickly run end-to-end test suites across environments – including tests for invalid contents being passed. It also allowed me to quickly re-run tests to make sure that after each change to the code I hadn’t broken anything. And it’s a technique I certainly plan to use again in the future.
