Over the last few months I’ve been involved with an application that has needed quite considerable optimisation and architectural modification. A few weeks ago I published a blog post about aspects of developing for performance and then followed up with a discussion about performance and Domino. In this blog I’m going to go into more depth about some of the lessons learned and techniques applied to address performance as required, highlight how performance requirements can change as user requirements change, and relate back to the other blog posts as appropriate.
API Performance
Firstly, different APIs take a different amount of time. Some are quite slow. One that I blogged about back in November 2011 was the slow performance of .getCount() on collections. Previously that was the standard way I checked whether a collection was worth iterating or not. Subsequently I always got the first entry and checked whether it was null (or Nothing, in LotusScript). Of course since working with ODA I typically just use a for loop like for (Document doc : dc) {....}, which avoids the problem completely.
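To illustrate the difference, here is a minimal Java sketch of the three approaches, assuming an ODA Database and a placeholder selection formula – the old getCount() check, the getFirstDocument() null check, and the ODA for loop that sidesteps the check altogether:

import org.openntf.domino.Database;
import org.openntf.domino.Document;
import org.openntf.domino.DocumentCollection;

public class CollectionCheckExample {

    // Slow: getCount() forces the whole collection to be counted
    public boolean hasEntriesSlow(DocumentCollection dc) {
        return dc.getCount() > 0;
    }

    // Faster: just try to fetch the first document
    public boolean hasEntriesFast(DocumentCollection dc) {
        return dc.getFirstDocument() != null;
    }

    // With ODA the check disappears entirely: an empty collection
    // simply produces zero iterations of the loop
    public void process(Database db) {
        DocumentCollection dc = db.search("Form = \"Invoice\"");
        for (Document doc : dc) {
            // ... process each document; nothing runs if the collection is empty
        }
    }
}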
So it’s important to have an awareness of which API calls are slower and code accordingly.
But some are inconsistent. database.search() is one such API call. In benchmarking I’ve done, it is occasionally quicker than NoteCollection.build() for the same query, but typically it is slower. Using an API call that has inconsistent performance can result in intermittent issues that could be tricky to pin down. So using an API call that’s more consistent avoids headaches.
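If you want to see the relative costs for yourself, a rough timing harness along these lines will do. This is only a sketch against the standard Database and NoteCollection APIs (buildCollection() being the call that does the work), and the selection formula is a placeholder:

import org.openntf.domino.Database;
import org.openntf.domino.DocumentCollection;
import org.openntf.domino.NoteCollection;

public class SearchBenchmark {

    // Rough comparison of database.search() against a built NoteCollection
    // for the same selection formula
    public void compare(Database db) {
        String formula = "Form = \"Invoice\"";

        long start = System.currentTimeMillis();
        DocumentCollection dc = db.search(formula);
        long searchMs = System.currentTimeMillis() - start;
        System.out.println("database.search(): " + searchMs + "ms for " + dc.getCount() + " docs");

        start = System.currentTimeMillis();
        NoteCollection nc = db.createNoteCollection(false);
        nc.setSelectDocuments(true);
        nc.setSelectionFormula(formula);
        nc.buildCollection();
        long ncMs = System.currentTimeMillis() - start;
        System.out.println("NoteCollection: " + ncMs + "ms for " + nc.getCount() + " notes");
    }
}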
Depending on the size of the database and the query being run, using a view may have significantly better performance, providing a view can be added. But knowing whether or not it’s required needs an appropriate amount of data in a development or test system. The performance may not be an issue in the early days, but may degrade and may be a significant issue if archiving was anticipated as a future phase but never occurred. This requires realistic expectations that can only be provided by a business owner, but it also requires relevant data in a development or test system. Having code to pre-load a database with a suitable amount of dummy data can be useful and can highlight problems at development time rather than when the system is live. However, obviously it requires additional development time, which may impact delivery timescales and budgets. Regardless of the decision made, expectations need to be managed accordingly.
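As for pre-loading dummy data, the loader doesn’t need to be sophisticated. Something like the following sketch – the Form name and item values are purely illustrative – run once against a development copy is usually enough to expose view and search bottlenecks early:

import org.openntf.domino.Database;
import org.openntf.domino.Document;

public class DummyDataLoader {

    // Seed a development database with enough documents to make
    // performance problems visible before go-live
    public void load(Database db, int howMany) {
        for (int i = 0; i < howMany; i++) {
            Document doc = db.createDocument();
            doc.replaceItemValue("Form", "Invoice");
            doc.replaceItemValue("InvoiceNumber", "INV-" + i);
            doc.replaceItemValue("Amount", Math.random() * 1000);
            doc.save();
        }
    }
}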
This also highlights why I resist a programmatic analysis of an application’s code to identify performance bottlenecks. Even with APIs like database.search() or .getCount(), there may be times when those kinds of API calls are required. If you need the count of entries or documents in a view, there’s no getting away from using those APIs and it’s not bad practice to use them. They’re the right way.
Background Tasks
In some scenarios, performance may be quick enough. At other times, it’s obvious from the start that it won’t be. When it’s not, it’s worthwhile being aware of the options. One is XOTS, kicking off a background task – or more than one. It’s also worth considering how tasks can be chunked to manage the performance impact. In my scenario I wanted to gather certain Form properties, then analyse all documents in the database for those Forms. Getting the properties for the Forms was relatively straightforward and performant, so getting those sequentially for each Form in a single background runnable task gave acceptable performance and minimised the number of threads being used. But iterating the documents for each Form was never going to be quick for any reasonably-sized database – and a document can only be stored against a single Form name. So it made sense to kick off, from that one background AbstractXotsXspRunnable, multiple background AbstractXotsCallables – one for each Form – and aggregate the results. By doing so, I was able to reduce the running time for my initial target database from 35 minutes down to about 10 minutes. Moreover, it required fewer than 50 lines of changed code and less than 90 minutes of total development time.
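The fan-out-and-aggregate shape is easier to see in code than in prose. The sketch below uses plain java.util.concurrent rather than the Xots classes (which additionally take care of Domino session set-up per thread), and the per-Form work is a placeholder, but the pattern – one Callable per Form, results gathered by the parent task – is the same:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FormAnalysisFanOut {

    // One Callable per Form name; each scans the documents for that Form
    // and returns a per-Form result, which the parent task aggregates
    public Map<String, Integer> analyse(List<String> formNames) throws Exception {
        int threads = Math.max(1, Math.min(formNames.size(), 4));
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Map<String, Future<Integer>> futures = new HashMap<String, Future<Integer>>();

        for (final String formName : formNames) {
            futures.put(formName, pool.submit(new Callable<Integer>() {
                public Integer call() throws Exception {
                    // ... open the database, iterate the documents for formName,
                    // and return whatever metric the report needs
                    return countDocumentsForForm(formName);
                }
            }));
        }

        Map<String, Integer> results = new HashMap<String, Integer>();
        for (Map.Entry<String, Future<Integer>> entry : futures.entrySet()) {
            results.put(entry.getKey(), entry.getValue().get()); // blocks until each task finishes
        }
        pool.shutdown();
        return results;
    }

    private Integer countDocumentsForForm(String formName) {
        // placeholder for the real per-Form document scan
        return 0;
    }
}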
Knowing what’s available, having a reasonable data-set and planning accordingly can ensure good performance from the start. But this also requires good experience on the platform. If your customers and management expect you to change and develop on a platform or with a framework you’re not experienced in, they should equally expect that you will not be able to eke out the same degree of performance benefit.
But, as I say, you need to know what’s available. I’ve had positive feedback on the blog posts I’ve done on Xots over the years, but I’m aware there have been a number of changes, particularly in error handling and being able to access scoped variables. I’m also aware there has never been a single, cohesive and up-to-date set of documentation. It’s been on my radar for a while, but I’ve finally got round to adding one to the ODA part of OpenNTF’s wiki.
Notifying Progress on Background Tasks
The challenge is notifying the users that a background task is running, updating them on progress and notifying them when it’s complete. I know some XPages developers have blogged about web sockets and there are definite experts in the community on the topic. But it’s not an area I’ve dug into.
Scoped variables are one option: store a value in sessionScope and pick it up with a message when a new page opens. This can also be useful for giving a way of retrieving a report when it’s complete, providing the session is still available. For this application, I was kicking off a long-running process and expected the user to remain on the same page until it completed, but I wanted to notify them of progress. So I kicked off the task and, in the onComplete, triggered a function to perform a partialRefreshGet on a specific area of the page until the content of a sessionScope variable, displayed in a specific Computed Field component, was “COMPLETE” or “ERROR”. It’s a pretty simple JavaScript approach, but one that demonstrates the progress being made and prevents someone thinking the process has hung or didn’t work, and so clicking the button again.
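The server side of that pattern is just a status flag the background task keeps updated. A minimal sketch, assuming the task has been handed a reference to the sessionScope Map and that the status key and messages are purely illustrative:

import java.util.Map;

public class LongRunningTask implements Runnable {

    // the sessionScope map (or any shared map) and the key the page's
    // Computed Field is bound to
    private final Map<String, Object> statusScope;
    private static final String STATUS_KEY = "analysisStatus";

    public LongRunningTask(Map<String, Object> statusScope) {
        this.statusScope = statusScope;
    }

    public void run() {
        try {
            statusScope.put(STATUS_KEY, "Gathering Form properties...");
            // ... first phase of the work ...
            statusScope.put(STATUS_KEY, "Analysing documents...");
            // ... second phase of the work ...
            statusScope.put(STATUS_KEY, "COMPLETE");
        } catch (Exception e) {
            statusScope.put(STATUS_KEY, "ERROR");
        }
    }
}

The client-side polling then only has to keep refreshing the area containing the Computed Field bound to that key until it reads “COMPLETE” or “ERROR”.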
Data Storage
How data is stored and retrieved can have a big impact on performance. Historically, in the Notes Client world, any data that needed to be displayed in a view was usually stored on the document that was to be shown in the view, even if it was actually set elsewhere. This has then caused challenges in keeping data up-to-date, because saving on one document required saving others as well. On a side note, this is also a problem with basic single-document CRUD APIs on Domino, because you’re relying on the success of two separate REST requests from the browser. There is no simple in-built mechanism to troubleshoot when either or both has not been successful. And there is a reliance on whoever is coding the application – and on how the REST service is invoked – to make both calls with the relevant code. SmartNSF looks like it will handle this, if coded correctly, but normal DAS wouldn’t.
For standard architected applications that are XPages only, there is more flexibility in pulling data from different documents. A standard Data View will only retrieve 30 or maybe 50 entries. So retrieving data from an additional document for each does not have a massive impact on performance. But typically there’s still a reliance on a view lookup to retrieve the additional document.
This is useful, but for many years now I’ve used an alternative method to get a specific document – the UNID. It’s been well documented that the UNID is read-write and also that getDocumentByUNID() is the quickest way to access a document. That is what I used in XPages Help Application for a user’s profile, to get a specific document for a specific user. (For those who are not aware, profile documents get cached, so getting the latest version of a profile document in the web is a big pain.) It’s what Nathan used as the basis of GraphNSF in ODA. It’s also what we’re recommending for serializing document references, in what has been termed a metaversalID – the database path or replica ID + UNID. It’s also what I’ve used for documents that need to hold reporting data in the past, so I can just convert a standard string format to a UNID using session.evaluate("@Password(|myString|)"). ODA also has DominoUtils.toUNID(), although that calculates a different UNID result, so you need to choose one or the other. There is a (very minute, virtually negligible) risk of duplicating a UNID, but one way to avoid that is to store data with a standard UNID in one database and data with a generated UNID in another. But to be honest, I’ve never encountered a conflict.
But speed of getting the document is just one element. You still need to retrieve the data your application needs. Typically this has been through getting various field values. If you’re using SSJS, this may be the easiest option. But if you’ve moved into Java, typically your reporting will use a Java class. And Domino can store Java objects in a field. Not only that, but ODA can autobox it – just call replaceItemValue() with your Java object and you’re done – and it gzips the resulting Java object, making it even smaller. The only caveat is that the Java object needs to be serializable, and if you change the signature of the Java class (e.g. adding a property) it can result in a ClassCastException when extracting the object into the new version of the class. But that’s easy to resolve. My Java classes, in this particular application as in previous ones, have a serializeToMap() method that creates a HashMap with all properties converted to primitives (Strings, Doubles, Booleans, etc.) and a deserializeFromMap() method that converts them back. The deserialization method can also handle setting defaults if the class changes. This makes it a lot easier to store and retrieve from a Notes Item, plus it automatically handles objects that contain other objects, because the serializeToMap() method on one object calls the serializeToMap() method in the child object and the same for deserialization. It makes reporting much easier, both on-the-fly and with stored reporting data. Plus it improves performance of reporting with cached data without affecting memory. Yes, you can’t review the content using Document Properties. But there’s nothing stopping you doing something to display the Map for support purposes.
Until now I’ve only used this mechanism for reporting data, but it also makes sense to do it for person profile information. The only caveat is that if a username changes and you’re storing in a document keyed on the username, it’s not going to update. But there are ways round that, if it’s critical.
This really comes down to planning the architecture up front and knowing what can be stored in one place and what needs to be retrieved from a view on-the-fly. In some cases it may be possible to store some historical data as a HashMap in a field and just top it up. In some cases it may not benefit performance. But this knowledge and ability gives a lot more flexibility on caching to disk of reporting or profile information.
Location of Code vs Location of Data
This is something I identified in my recent blog post on performance and Domino. When I started off building my recent application, the application was going to be deployed on a server. But the version and plugin dependencies meant it could give challenges for some customers. So the decision was made to make it possible to run from a PC using Domino Designer local preview. (In a future phase there are plans for a more innovative architecture.) Until then, all development and testing had been done retrieving data from the current server. However, when retrieving data from a remote server, a performance issue was identified. This shows how changing requirements can necessitate changes in architecture. In my scenario, retrieving database properties had previously been done on-the-fly. But when running from a remote location, that needed to change. So the architecture of the application was amended to serialize that information to the data NSF as well and retrieve it from there, with the ability to refresh it as required.
Caching
Here we’re talking about caching information on disk, but in some cases it may be easier or more relevant to cache information to memory. XPages developers are very familiar with that, even if they’re not explicitly aware that they are familiar with it. viewScope is cached for the current page, sessionScope is cached in memory for the current browser session, applicationScope is cached in memory for the current application. ODA adds a number of other caches – serverScope cached in memory to the server, userScope cached in memory for the current application and the current user, and identityScope cached for the current user server-wide.
What’s worth bearing in mind is the lifetime of the various caches. serverScope, identityScope and userScope are cached in ODA’s plugin itself, so kept alive for the duration of the server session. viewScope is kept alive for the duration of the browser page. sessionScope is kept alive for the duration of the current browser session plus the time it takes for the HTTP timeout, unless cleared programmatically. That’s because the server can’t know that the session has ended and that the sessionScope is therefore no longer required, unless it is explicitly told. So it will stay for the duration of the HTTP timeout after the last connection from that browser session. Similarly, applicationScope will persist for the application timeout after the last session.
It’s worth bearing in mind that different parts of the XPages runtime use different ClassLoaders. This is a bit of a complex area for XPages developers, but it means an applicationScope for one ClassLoader is different to the applicationScope for a different ClassLoader. This is something I know John Dalsgaard struggled against when trying to preload applicationScope with certain settings via a plugin.
Profiling
In many cases, the amount of data being held in these caches may not be an issue. But it’s worth bearing in mind the size. For that, a profiler like XPages Toolbox or a more heavy-duty profiler like YourKit can be useful. (Whenever I think of YourKit, I think of Nathan T Freeman’s comment that it’s like taking a bazooka to a knife-fight!) This can help you see the current memory allocated to those Java objects, making it easier to identify how the application will scale with the appropriate numbers of users. But of course it’s important, if not critical, to have a realistic understanding of the number of users expected. This can be difficult and may fluctuate over time. The ability to test and monitor that may be important. It may need planning in with the business.
Another area of profiling is profiling of code blocks. XPages Toolbox provides the facility to do that by adding specific method calls around the relevant code blocks. Some ODA tests have also added standard Java time-checks. And that was something I added into my recent application. Methods have an additional boolean parameter for whether profiling should run. If so, the current time in milliseconds is stored in a variable at the start of a code block. At the end of the code block a method call is triggered that compares the current time with that start time and logs out to OpenLog with the time the block has taken to run and a message that identifies which code block it’s for. This helps identify how long certain loops take and where performance needs to be optimised or expectations managed accordingly.
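In Java terms it amounts to little more than this sketch – the method and block names are illustrative, and in the real application the output goes to OpenLog rather than stdout:

public class BlockTimer {

    // Wrap a code block with a start timestamp and a log call so the elapsed
    // time can be traced when profiling is enabled
    public void analyseDocuments(boolean profile) {
        long start = profile ? System.currentTimeMillis() : 0L;

        // ... the code block being measured ...

        logDuration(profile, start, "analyseDocuments: document iteration");
    }

    private void logDuration(boolean profile, long start, String blockName) {
        if (!profile) {
            return;
        }
        long elapsed = System.currentTimeMillis() - start;
        // in the real application this line would log to OpenLog instead
        System.out.println(blockName + " took " + elapsed + "ms");
    }
}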
Summary
Performance is a complex area. There are some decisions that can be made up front to optimise performance, and an awareness of the performance of specific APIs can inform those decisions. There are techniques that can be used to run tasks in the background and manage expectations accordingly, and techniques to spread parts of a process across parallel background threads to cut down the time further. But it’s important to bear in mind other impacts on the server and ensure there are available background threads. How data is cached – in memory and in an NSF, and how within a specific Notes Document – can also benefit performance.
But key to all of this is regular monitoring and profiling. Remember agents can be profiled just by clicking a checkbox in Domino Designer’s agent properties. As it did for me six years ago, it may give some useful enlightenment. Java performance and LotusScript performance may differ, but the performance of specific API calls relative to others, and the performance slowdown for larger amounts of data, will remain the same regardless of the language.
But one comment holds true regardless of the technology, whether it’s Domino, SQL, MongoDb or something else: knowledge, understanding and the desire to learn is key to optimising performance.