Earlier today I posted this tweet. It followed another update from John Curtis teasing more enhancements coming to DQL. If you’ve been following the presentations and announcements about DQL closely, a few things have been said that seem clear to me, things that give interesting insight into the approach with DQL and demonstrate that the team is thinking beyond history, beyond Domino, and building for the future. Let me elaborate.

Full Text Search

Full text searching is powerful within Domino. But it also has limitations and complications, as has been raised in sessions about DQL. Notes end users are not comfortable building full text queries, which is why in the past I’ve deployed saved searches for users. Developers aren’t fully au fait with full text search syntax either. I have the full text search syntax help page bookmarked, and on more than one occasion I’ve had to fix a search in LotusScript because it was not coded correctly. So as a query language, full text search was never going to be the right approach for Node.js developers. Plus there are limitations around the number of entries returned, and performance enhancements were almost certainly required for heavy-duty interaction. A quick approach would have been a wrapper on top; a good approach was to build from the ground up.

Some work was done around that in Domino V10 – full text searches now update the index for up to 20 documents, if some are unindexed when the search is performed. And although many applications will use full text search queries, and that may be a reasonable option for many years, the close relationship between DQL and Node.js means that, once mature, DQL will likely be the preferable approach.


DQL was coded at the lowest language level, which means APIs can be added not only for Node.js but also LotusScript and Java (and SSJS?). The other languages will come in 10.0.1. This key factor, along with the integration with Proton, is why it should become the preferable approach. And involving all those languages means better and more feedback, greater use and more reason to enhance.
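To make the Proton route concrete, here is a minimal sketch of running a DQL query from Node.js. The package name, functions and option shapes (`@domino/domino-db`, `useServer`, `useDatabase`, `bulkReadDocuments`, port 3002) follow the V10 beta materials as I recall them, and the server, database, view and field names are invented – treat all of it as an assumption rather than a definitive API reference.

```javascript
// The DQL itself is just a string; a view-backed term like
// 'OrdersByCity'.City lets the engine use that view's index.
const query = "'OrdersByCity'.City = 'Portsmouth' and Total > 100";

// Sketch of running the query through the domino-db module over Proton.
// All names below are assumptions based on the V10 beta documentation.
async function findOrders() {
  const { useServer } = require('@domino/domino-db'); // assumed package name
  const server = await useServer({
    hostName: 'domino.example.com', // hypothetical server
    connection: { port: '3002' }    // Proton's default port in the beta
  });
  const database = await server.useDatabase({ filePath: 'orders.nsf' });
  // bulkReadDocuments runs the DQL and returns the named items per document
  return database.bulkReadDocuments({ query, itemNames: ['City', 'Total'] });
}
```

The point is that the query string is identical whatever language calls it – the same DQL should work from LotusScript and Java once those APIs arrive in 10.0.1.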

The Design Catalog and DomQuery Tool

Building from the ground up allows for some improvements in fundamentals. The full text index is stored in unreadable files and relies on the UNK table for datatypes, which can vary between environments and even replicas. I’m not sure whether DQL also relies on the UNK table; it’s possible that it does. But that reliance can make it hard to work out why search syntax is not working as designed when called programmatically. I’ve answered a few questions on XPages full text searching by recommending that the developer open the Notes Client and test the search there. I’ve literally just found a technote with a notes.ini variable DEBUG_FTV_SEARCH=1, as well as others, to give some debugging information about full text searches. But I’ve never seen that used or suggested in a forum answer, and I suspect I’m not the only one who had never come across it.

With DQL we’ve got the DomQuery tool, which gives a lot of information about how a search is built, where it’s looking, and where the optimisations are. And the Design Catalog aggregates information about which views are and aren’t used. The logic behind all of this is documented, but I envisage a lot of databases not being optimised for DQL. Exposing the information, though, allows anyone who wishes to develop some forensics to help developers better prepare a database for DQL queries. It will be interesting to see if something is built into Domino itself to warn of sub-optimal query criteria that might be better optimised by creating a view, something along the lines of “no view available to optimise query criteria ‘XXXXXXX’, consider creating a view including ‘YYYYY’”.
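As a small illustration of what that explain output is telling you, compare these two query strings. The view and field names are invented, and the comments paraphrase the behaviour described in the DQL sessions rather than quoting actual DomQuery output.

```javascript
// Term 1 names a view column ('OrdersByCity'.City), so the query engine can
// walk that view's index to find candidates quickly; term 2 (Total > 100)
// has no backing view, so it falls back to scanning the candidate documents
// in the NSF -- the slower path the explain output flags.
const optimised = "'OrdersByCity'.City = 'Portsmouth' and Total > 100";

// Here neither term maps to a view, so the whole query is evaluated by
// scanning documents -- the case where "consider creating a view" applies.
const unoptimised = "City = 'Portsmouth' and Total > 100";
```

The forensics I have in mind would spot the second shape in an application’s queries and suggest the view that would turn it into the first.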

Not Just “New FT Searches”

But the most significant aspect is that this is not just about making Domino easier to search for those who haven’t got a PhD from the “School of Full Text Dark Arts”.

There has already been focus in conference sessions on how it improves on MongoDB’s manual boolean tree construction. I can’t claim to be fully conversant with quite what that means. But it means I don’t have to find out and code in a specific way. It’s improving the searching experience over other NoSQL search engines.

And the recent tweet talks about named and positional substitution variables, to avoid even needing to understand SQL injection and code around it. Again, improvements over known pain points from another platform.
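To show why that matters, here is a sketch of the problem substitution variables solve. The `?city` syntax is my guess at the shape of the teased feature, not shipped DQL, and the quote-doubling helper is a stand-in for whatever escaping the engine would do server-side.

```javascript
// Naive concatenation: a value like "x' or Total > '0" rewrites the query,
// the classic injection pain point from the SQL world.
function naiveQuery(city) {
  return "City = '" + city + "'";
}

// Parameterised sketch: values are escaped once, centrally, so user input
// can never terminate the quoted literal. The ?name syntax and the
// quote-doubling rule are assumptions for illustration only.
function bindNamed(template, params) {
  return template.replace(/\?(\w+)/g, (_, name) => {
    const value = String(params[name]).replace(/'/g, "''"); // double any quote
    return "'" + value + "'";
  });
}

const safe = bindNamed("City = ?city", { city: "Ports'mouth" });
// safe === "City = 'Ports''mouth'"
```

With substitution variables built in, none of that helper code needs to exist in the application at all, which is exactly the point.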

It’s this approach which means DQL is being built fundamentally for the future, and not just a future for existing Domino developers or Node.js developers, but a bigger future.

Yes, there is other key functionality DQL will need – rich text and attachment handling, ORDER_BY equivalents, multi-database searching etc. That’s going to take some time, and I would urge developers to understand that. And there are other areas of Domino that need work before it’s worthy of competing with other databases – flexibility in Docker deployments, external Java access, admin management and so on. But to build a mansion you need to get the foundations right, or the pretty house comes tumbling down with the first heavy rains or earthquake. And it needs to appeal to a wide market, or you won’t get the investment needed to do it right. What I’m seeing with DQL shows some good foundational approaches and long-term planning.

7 thoughts on “The DQL Approach”

  1. From what I’ve read it looks like DQL won’t work in a local replica.

    Can you confirm this?

    Are you aware of any plans to make it work in local replicas?

    1. Hi Glen, DQL does work with local replicas. You must run updall -e which is not something usually done client side, but people have done it and proven it to work. Hope this helps, John

  2. I have a hard time understanding what DQL actually is.
    At first glance it looked like some sort of DSL built around existing Domino data structures (views and sorted columns). But you seem to think that this could be something bigger that will eventually replace full text search and give us an Elasticsearch-like search engine without actually integrating Elasticsearch with Domino. Can you show me where you actually got this idea from? Because I don’t see how HCL could possibly do this with Domino data structures – they are nowhere near ready to be the basis for a serious search engine. And I doubt that HCL can build their own search engine from the ground up in under a year.

    1. I think you’re seeing full text as a single concept. Try separating it as the full text search syntax and the full text search index. DQL is an alternative to the full text search syntax. What that actually searches is separate and can be switched in and out as required. DGQF is the indexing layer, which pulls out the relevant search criteria and parses them, identifying if there’s a view that can be used, which parts will be quicker than others etc. This is what’s different – a full text search is run on a design element (view or database) whereas DGQF can mix and match on the fly. Jesse’s blog post covers that https://frostillic.us/blog/posts/7CA8A2A0950517FC852582D600550766.

      It’s interesting you mention Elasticsearch and a replacement search engine. Apache Tika has already been implemented in V10 as an alternative indexer for attachments, though I don’t understand that completely. The HCL Places POC shown at ICON UK used Elasticsearch for indexing the contents of the Domino server. Deployment and scalability of Elasticsearch alongside Domino could be interesting; I’ve heard it mentioned in the context of IBM Connections Pink that it ideally needs three servers, but I don’t know Elasticsearch well enough to offer an opinion. Judging from the approaches on Node.js and gRPC, re-inventing the wheel and building something from the ground up is unlikely to be the current approach. The approach seems to be integration of standards outside Domino.

  3. Millroy Fernandes

    Does DQL have indexing issues such as those in FT Search? That is to say, what happens if a query is run whilst a record is being indexed?
