
Tips: Don’t Depend on That Sequential Object ID


I recently ran into a situation that challenged one of my basic beliefs about the setup of a Documentum repository: object IDs may not be sequential!

What, you say? Impossible, you say? Yet it happened.

What I encountered isn’t a widespread phenomenon, but it could happen to you.

What Happened

I designed a reporting system, our little Data Shack, that culled the audit trail, along with some relevant object information, into a second database on a nightly basis. There are a few hundred million audit trail records, so querying them live was a challenge. If a user clicked on “History” in DA or Webtop, you could go get lunch, in another state. The goal was to move the data into its own database designed for efficient reporting, where a bad query hurt nobody.

To streamline the transfer, the system used the r_object_id field as the key for selecting records to retrieve. It is the primary key on every table, and no other approach would be as fast. I checked. The concept was simple: grab the maximum r_object_id already archived, then grab every row with a bigger r_object_id and move it over.
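
To make that concrete, here is a minimal sketch, in Python, of what the nightly pull did; it is not the actual Data Shack code. The run_dql, datashack_max_id, and copy_rows helpers are hypothetical stand-ins for whatever runs the query against the repository and writes to the Data Shack, and the exact query text (DQL here, though it could just as easily be SQL against the underlying audit table) is an assumption.

    # Hypothetical sketch of the original high-water-mark pull.
    # run_dql, datashack_max_id, and copy_rows are made-up helpers,
    # not real Documentum or Data Shack APIs.
    def nightly_pull(run_dql, datashack_max_id, copy_rows):
        # High-water mark: the largest r_object_id already archived.
        max_id = datashack_max_id()

        # The flawed assumption: every new audit row has a bigger
        # r_object_id than anything already copied over.
        rows = run_dql(
            "SELECT r_object_id, event_name, time_stamp, audited_obj_id "
            "FROM dm_audittrail "
            f"WHERE r_object_id > '{max_id}' "
            "ORDER BY r_object_id"
        )
        copy_rows(rows)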

What could be simpler?  Nothing.

Well, after a while, the users were regularly asking for one particular query, so we decided to craft a Data Shack version of it and add it to the standard reporting there. The initial results were different from the live data results. After some digging, it turned out that a very small percentage of audit trail records weren’t making it over each night. After some more research, we found that sorting by the time_stamp field gave a different order of objects than sorting by r_object_id.

This was not good.

Three Times the Fun

What appeared to be happening is quite simple. We have three Content Servers supporting the same repository in Production. Apparently, each server grabs a range of potential IDs and dishes them out as needed. This is an accepted design pattern, one I can’t find a reference for right now, that prevents duplicate IDs from being created and keeps different processes from blocking each other. Grabbing blocks of IDs is important when a lot of them are expected to be needed. It is a solid approach.
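
A toy simulation, nothing Documentum-specific and purely my own illustration, shows why block allocation breaks the sequential assumption: three pretend servers each reserve a contiguous block of IDs up front and hand them out as events arrive, so ID order and time order disagree.

    # Toy simulation of block-allocated IDs across three servers.
    import random

    BLOCK_SIZE = 10
    blocks = {                       # each "server" pre-reserves a contiguous range
        "server_a": iter(range(0, BLOCK_SIZE)),
        "server_b": iter(range(BLOCK_SIZE, 2 * BLOCK_SIZE)),
        "server_c": iter(range(2 * BLOCK_SIZE, 3 * BLOCK_SIZE)),
    }

    events = []
    for t in range(10):              # t stands in for time_stamp
        server = random.choice(sorted(blocks))
        events.append({"id": next(blocks[server]), "time": t})

    print("ids in time order: ", [e["id"] for e in sorted(events, key=lambda e: e["time"])])
    print("times in id order:", [e["time"] for e in sorted(events, key=lambda e: e["id"])])
    # On almost every run, neither list comes out in increasing order, which
    # is exactly the gap a "max r_object_id so far" pull falls into.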

The only problem is that I wasn’t aware of this little feature when the Data Shack was designed. So every night, a few random rows were missed.

Now it is possible that different applications, like the Indexer and our Web Interface, which happen to reside on different machines but use the same Content Server, grabbed the ID blocks rather than the Content Servers themselves. I suspect that is not the case, but I can’t rule it out without some mind-numbing digging (or some luck).

Regardless of the culprit, the fix was the same. Everything now works off the time_stamp field in the audit trail instead of r_object_id. The Data Shack is up and humming again, and all the numbers match.
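
For completeness, here is a sketch of the corrected pull keyed on time_stamp. The helpers are the same hypothetical stand-ins as before; the overlap window and the de-duplication on r_object_id are guards I am adding for illustration against boundary ties and late commits, not necessarily how the real job handles them, and the exact DQL DATE() literal format is glossed over.

    # Hypothetical sketch of the corrected pull, keyed on time_stamp.
    from datetime import timedelta

    def nightly_pull(run_dql, datashack_max_time, copy_rows):
        # Re-pull a small overlap window so ties and late commits at the
        # boundary are not lost; duplicates get filtered on the Data Shack side.
        since = datashack_max_time() - timedelta(minutes=5)

        rows = run_dql(
            "SELECT r_object_id, event_name, time_stamp, audited_obj_id "
            "FROM dm_audittrail "
            f"WHERE time_stamp > DATE('{since:%m/%d/%Y %H:%M:%S}') "
            "ORDER BY time_stamp"
        )
        copy_rows(rows)  # copy_rows skips r_object_ids already in the Data Shack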

So the tip is to depend on the dates to determine the order of events, not the object IDs. The IDs may be tempting to use since they are the primary keys on every table, but they are not the arbiter of succession.


