This article was prompted by my previous one on Domino application development. One of the major differences between Domino application development in the past and Node.js-related development in the future is architecture. But that difference could also be an inhibitor for existing customers, because of the insulated world of Domino.
I’m a firm believer that understanding history is a key to understanding the future. One of the drivers for basing the Notes Client on Eclipse, and for the use of OSGi in Domino, was “easier” extensibility. With the Eclipse-based Notes Client we got plugins such as Rene Winkelmeyer’s File Explorer, plus plugins for Connections, Tungle and Gist (back before RIM bought them and cancelled the open source versions in favour of proprietary contacts and calendaring solutions for Blackberry devices). Business partners could also develop extensions for customers; I was involved in developing one that added conferencing information to mail, calendaring and applications. I’m not sure whether that was possible in the old C++ client, but it would certainly have been more challenging. On the Domino server, OSGi plugins could extend the XPages, HTTP and DOTS environments to apply fixes, add components and renderers (for Bootstrap, for example), extend the core Java and SSJS APIs, and add frameworks. Again, this would have been considerably harder – and even more specialised – before OSGi.
But all of these bundled functionality onto the existing Notes Client and Domino server, allowing them to remain a monolithic “one-stop-shop”.
There were attempts to break out of that model, with blog posts from Jesse Gallagher about using Nginx and recommendations from IBM to use IBM HTTP Server as a web server in front of Domino. I’m not sure how many companies have deployed Nginx in front of Domino – it’s in use for OpenNTF. But the recommendation to use IBM HTTP Server was doomed once POODLE hit. The initial advice to deploy IBM HTTP Server was rejected by customers, who demanded that Domino HTTP itself be upgraded to TLS 1.2 rather than deploying a separate, better HTTP server as a proxy. So successful was that demand that the entitlement Domino customers had to deploy IBM HTTP Server with Domino has since been removed.
The World Outside Domino
Meanwhile, the world outside Domino moved on. Cloud computing – and specifically cloud microservices – proliferated. In the past an application might have integrated with a single external service, for example Google’s translation service, PayPal or some other custom REST service, but these were often restricted to large or specialist offerings. Now there are a variety of Watson services, APIs for integration with large cloud services like Office 365 or Connections, and a huge rise in IoT. Cloud is no longer just about hosting monolithic applications: it’s now more about microservices.
At the same time, developers were building smaller and smaller applications. Node.js and Vert.x allowed the web server to be bundled into the application itself, as did Spring Boot for Java applications. Because applications integrated with various external services – including databases and authentication services – it made less sense for the developed application to be a monolith. This brought added challenges, such as making sure all parts of the “application” were up and running, which drove the rise of tools like Kubernetes and Istio. It also raised the problem of seeming to need an operating system for each compartmentalised microservice, which drove the rise of Docker.
The challenges were offset by the benefit of being able to scale up and down as required, but also to switch in and out different “parts” of the application. Need a new front-end? That’s all you need to change and deploy. Changing the database layer is easier, because the architecture has encouraged a more segmented code structure. Want to get feedback on a new UI? Redirect those users to a different front-end server but leave everyone else where they are.
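One way to realise the “redirect some users to a different front-end” idea is deterministic canary routing: hash a stable user identifier into a bucket and send a fixed percentage of users to the new UI. This is a generic sketch, not anything Domino-specific; the host names and hash function are hypothetical.

```javascript
// Deterministic canary routing: a stable user id always lands in the
// same bucket, so each user sees a consistent front-end across visits.
// Host names below are hypothetical placeholders.
function bucketFor(userId) {
  // Simple 31-based string hash folded into a 0-99 bucket.
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h % 100;
}

function chooseFrontend(userId, canaryPercent) {
  return bucketFor(userId) < canaryPercent
    ? 'https://new-ui.example.com'  // users in the canary slice
    : 'https://ui.example.com';     // everyone else stays where they are
}
```

Because the bucket is derived from the user id rather than chosen at random per request, raising or lowering the canary percentage only moves the users at the boundary – everyone else keeps the front-end they already had.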
Packaging everything into a WAR file and deploying it to a large application server like Tomcat or WebSphere Liberty sat awkwardly with this new architecture.
When Node.js support for Domino was announced earlier this year, the initial plan was for Node.js to run within the Domino HTTP task. That changed, for very good reason. The gRPC layer that Node.js talks to – the Proton task – is very efficient. So efficient that Jason Roy Gary ran up a huge bill with a cloud provider by maxing out a load of Node.js applications… which the gRPC layer of Domino handled with ease!
But this probably means a learning curve for managers, administrators and developers. A monolith is binary: either everything is working or nothing is. A microservices architecture is less clear cut, and it also requires deployment to a different kind of infrastructure. Those who are already using cloud services may be ready to hit the ground running; those who have stuck to on-premises and/or virtualised environments, each with their own OS, will likely need to change.