A number of years ago we decided to host the database for RPM Remote Print Manager® (RPM) in an external process. If for whatever reason we had a database problem, we didn't want to affect print job processing.
Naturally, we wanted a method of communication that was lightweight and reliable. We considered using the network, as we do with the user interface (UI), but decided against it because the UI conversation can get very busy when many jobs are arriving and processing. It seemed like a good plan to keep those two channels separate.
Having been a Unix network programmer in my younger and more glamorous days, I decided to give message queues a try. I had read that message queues have been scaled to service client/server networks with many nodes in commercial settings. So, with great anticipation, I tracked down a library for "shared memory message queues" and implemented my solution.
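To make the architecture concrete, here is a rough sketch of the pattern described above: one process (the print engine) sends database commands to a second process (the database host) over a queue and waits for replies. The real RPM code used a third-party C++ "shared memory message queue" library, not Python, and names like db_worker are invented for this illustration.

```python
# Sketch only: the message-queue IPC pattern, using Python's
# multiprocessing.Queue in place of the article's C++ library.
from multiprocessing import Process, Queue

def db_worker(commands: Queue, replies: Queue) -> None:
    """Pretend database process: read commands, send back results."""
    while True:
        cmd = commands.get()
        if cmd is None:          # sentinel: shut down cleanly
            break
        replies.put(f"OK: {cmd}")

if __name__ == "__main__":
    commands, replies = Queue(), Queue()
    worker = Process(target=db_worker, args=(commands, replies))
    worker.start()

    commands.put("INSERT job 42")    # a stand-in for a real DB command
    print(replies.get())             # the engine blocks until the DB answers

    commands.put(None)               # tell the worker to exit
    worker.join()
```

The appeal of this design is exactly what the article describes: the database lives in its own process, so a database failure cannot take job processing down with it.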
The only problem
Unfortunately, our message queue library didn't always work. Occasionally we would get a report like this while starting RPM:
Error starting - 2016-11-28 10:06:30.444 *** RPM Service failed to start ***
Module: MsgQMgr
Message: Overlapped I/O operation is in progress.
The message might be this one or a similar one, but regardless, RPM was not going to start. Unfortunately, the only thing we found that would "clear" this condition was to restart the entire machine, and that is not a good solution.
Back to the drawing board
We reviewed virtually every method developers use for Windows processes to communicate with one another. We tried a promising new approach using shared memory segments but could not overcome some hurdles relevant to our customers' needs.
Ultimately we tried Windows named pipes. Pipes are very familiar in the Unix world, but "named pipes" (called "fifos" on Unix) were new to me. However, it seems that Windows uses them internally, so we hoped for the best.
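For readers who, like the author at the time, have not met fifos before, here is a minimal Unix-side sketch: create a named pipe in the filesystem, write a message from one process, and read it in another. This is the Unix counterpart only; on Windows the equivalent calls are CreateNamedPipe and ConnectNamedPipe, and the path and message below are invented for the example.

```python
# Unix-only sketch of a named pipe (fifo); not the RPM implementation.
import os
import tempfile
from multiprocessing import Process

def writer(path: str) -> None:
    # Opening a fifo for writing blocks until a reader opens it too.
    with open(path, "w") as fifo:
        fifo.write("job status: printed\n")

def demo() -> str:
    path = os.path.join(tempfile.mkdtemp(), "rpm_fifo")
    os.mkfifo(path)                      # create the named pipe node
    child = Process(target=writer, args=(path,))
    child.start()
    with open(path) as fifo:             # rendezvous with the writer
        message = fifo.read()
    child.join()
    os.remove(path)
    return message

if __name__ == "__main__":
    print(demo(), end="")
```

The key property both platforms share is that the pipe has a name two unrelated processes can agree on, which is what makes it usable between a service and its database host.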
I don't know if our results were the best but they were very good!
Good news #1: Stability
When I got the named pipes working, I sent a couple thousand jobs through the system. Not a hiccup. I sent twenty thousand jobs. No problems. I sent a quarter of a million jobs. Still no problems.
Good news #2: Performance
The first thing I noticed was that jobs were being processed four to six times as fast as before. I wasn't looking for a speed increase, but it was a nice bonus.
Ultimately, with very large job batches, the speed improvement levels out to at least double. Of course, that will depend heavily on your job processing requirements. At this point, we can say that performance should not be worse.
Good news #3: Throughput
I deliberately queued up over four thousand jobs in RPM, in a suspended queue, then restarted the process. One of the early queries in RPM scans all the job files. The query result coming back into RPM totaled more than 700,000 bytes, and the entire block was transmitted in less than one millisecond on my system.
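As a back-of-the-envelope check, the figures above work out to quite a high transfer rate across the pipe; the timing is from the article's test machine, and the one-millisecond figure is an upper bound.

```python
# Rough throughput implied by "700,000 bytes in under one millisecond".
payload_bytes = 700_000
elapsed_seconds = 0.001          # "less than one millisecond", so a floor
throughput = payload_bytes / elapsed_seconds
print(f"{throughput / 1_000_000:.0f} MB/s")  # prints 700 MB/s
```

In other words, the pipe sustained at least 700 MB/s on that query, which is why the restart scan felt instantaneous.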
Good news #4: Error reporting
While we were working on the database code, we discovered some ways to improve error reporting. Now if there is an unexpected error in a database command, we're much more likely to get a report in the RPM event log than we were before.
It may be hard to call that an "improvement," but if there is a problem, we can get to the bottom of it faster.
This version should be excellent news for you if you have ever experienced a situation like the one described above. It's also worth serious consideration if you simply want better performance. In the UI, go to "Help / Check for Updates" and look for the latest version for your platform. The database changes are in 188.8.131.521, dated June 22, 2017.
As always, contact your salesperson or technical support if you have any questions. You will need to have current support to upgrade.