I've
written before about the perennial build or buy dilemma, and here it is again.
I was desperate for a solution to the following kinds of problems:
- I have a bunch of heavyweight tests running in parallel, but using the same database. These tests unfortunately tend to create a mess, and every once in a while the database needs to be reset. During the reset, no access by automated tests should be permitted.
- Integration and UI tests run against a deployed version of the software. Most of these tests can be run concurrently, but not all. More importantly, none of them will deliver a reliable result if a new deploy yanks away the services while a test against them is running (yes, I know, they should be "highly available", but they aren't, at least not on those puny test environments).
This cries out for some sort of locking mechanism.
Ideally, it should be a hierarchical locking mechanism to support controlled sharing for concurrent tasks, while still being able to lock the root resource.
If you use naive methods, you are likely to get hurt in several ways:
- Builds are fickle. They fail, crash or get killed by impatient developers. Any mechanism that requires some sort of cleanup action as part of the build itself will not make you very happy.
- Developers have a strong sense of fairness and will get impatient if some builds snag a lock right out from under their nose. So we need a first-come, first-served queuing mechanism.
- If you use touch files with timestamps or some similar device, you run up against the lack of an atomic "test and set" operation, and also run the risk of creating a deadlock.
I've googled and searched, and even though I found lots of interesting papers, I haven't found anything remotely close to what I wanted. Most systems I found solve problems way more complicated than what I want here.
I'm sure that the moment I hit publish, I'll locate the original wheel hiding in plain sight someplace. Maybe some kind reader will reveal it for me...
Meanwhile, I have something that works.
The basic idea is simple:
- My service keeps a hash (or dict) of queues.
- Every time someone requests a resource, the queue for that resource is either created from scratch, or, if it already exists, searched for an entry by the same requester.
- If the request is already in the queue, a timestamp in the request is updated; otherwise the request is added at the end of the queue.
- If the request is first in line, we return "ok", otherwise we return "wait".
- At regular intervals, we go through all queues and check if the head item has timed out. If yes, we remove it and check the next item, removing all the dead items until we find an unexpired one, or until the queue is empty.
- If a request to release a resource comes in, we check the queue associated with the resource and set the timestamp to zero. This will cause the request to be automatically purged when it comes up.
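To make that concrete, here is a rough sketch of the queue logic - not the actual code; the names, the TTL, and the sweep interval below are just placeholders:

```javascript
// Sketch of the queue logic - names and timeouts are placeholders, not the real code.
const queues = {};              // resource name -> array of pending requests
const TTL_MS = 60 * 1000;       // a request expires if not refreshed within this window

function grab(resource, requester) {
  const queue = queues[resource] || (queues[resource] = []);
  let entry = queue.find(e => e.requester === requester);
  if (entry) {
    entry.timestamp = Date.now();                    // already queued: refresh the timestamp
  } else {
    entry = { requester, timestamp: Date.now() };
    queue.push(entry);                               // new request: first come, first served
  }
  return queue[0] === entry ? 'ok' : 'wait';
}

function release(resource, requester) {
  const entry = (queues[resource] || []).find(e => e.requester === requester);
  if (entry) entry.timestamp = 0;                    // will be purged when it reaches the head
}

// At regular intervals, drop expired entries from the head of each queue.
function sweep() {
  const now = Date.now();
  for (const resource of Object.keys(queues)) {
    const queue = queues[resource];
    while (queue.length > 0 && now - queue[0].timestamp > TTL_MS) {
      queue.shift();
    }
    if (queue.length === 0) delete queues[resource];
  }
}
setInterval(sweep, 5000);
```

Because release only zeroes the timestamp, a crashed build and a clean release end up on the same code path: the entry simply expires.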
This is about 100 lines of code in node. Even though node is quite
controversial, it is ideally suited for the task at hand:
- The event loop model ensures my code "owns" the data while processing a request. No shared memory or threading issues.
- The requests and responses are very short, so there are no hidden traps triggered by folks submitting megabyte-sized PUT requests.
- The queues tend to be short, unless you have a big bottleneck, which really means you have other problems. This means that processing the requests is essentially constant time. A large number of resources is not really a problem, since it is easy to shard your resource set.
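For illustration, the HTTP wiring around the grab and release functions could look something like the sketch below; the endpoint paths and parameters here are invented for the sketch, not the service's real API:

```javascript
// Hypothetical HTTP front end for the grab/release sketch above.
// The /grab and /release endpoints and their parameters are invented here.
const http = require('http');
const { URL } = require('url');

http.createServer((req, res) => {
  // Node runs one handler at a time on its event loop, so the queues
  // object can be read and mutated here without any locking of its own.
  const url = new URL(req.url, 'http://localhost');
  const resource = url.searchParams.get('resource');
  const requester = url.searchParams.get('requester');
  if (!resource || !requester) {
    res.statusCode = 400;
    return res.end('missing parameters');
  }
  if (url.pathname === '/grab') {
    res.end(grab(resource, requester));
  } else if (url.pathname === '/release') {
    release(resource, requester);
    res.end('ok');
  } else {
    res.statusCode = 404;
    res.end('unknown');
  }
}).listen(8080);
```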
Adding hierarchical locks to this service is relatively simple. We only need to change two things:
- We introduce shared vs. exclusive locks. A shared lock can have many owners, each with their own timestamp. When enqueuing a shared lock request, we first check if the last element in the queue is also a shared lock, and if so, we merge the request into it instead of adding a new lock to the queue.
- We introduce resource paths, and request an exclusive lock for the last element in the path, but shared locks for all the parent elements.
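Building on the earlier sketch, the two changes could look roughly like this (again placeholders, not the real code; release and sweep would need the matching per-owner timestamp handling, which I'm omitting):

```javascript
// Sketch of shared vs. exclusive locks and resource paths - placeholders, not the real code.
function enqueue(resource, requester, kind) {        // kind: 'shared' or 'exclusive'
  const queue = queues[resource] || (queues[resource] = []);
  // If this requester is already queued here with the same kind, just refresh its timestamp.
  let entry = queue.find(e => e.kind === kind && e.owners[requester] !== undefined);
  if (!entry) {
    const last = queue[queue.length - 1];
    if (kind === 'shared' && last && last.kind === 'shared') {
      entry = last;                                  // merge into the trailing shared lock
    } else {
      entry = { kind, owners: {} };
      queue.push(entry);                             // otherwise queue a new lock entry
    }
  }
  entry.owners[requester] = Date.now();              // each owner keeps its own timestamp
  return queue[0] === entry;                         // true only if this lock is at the head
}

// "grab a/b/c": shared locks on a and a/b, an exclusive lock on a/b/c.
function grabPath(path, requester) {
  const parts = path.split('/');
  let atHeadEverywhere = true;
  for (let i = 1; i <= parts.length; i++) {
    const resource = parts.slice(0, i).join('/');
    const kind = i === parts.length ? 'exclusive' : 'shared';
    atHeadEverywhere = enqueue(resource, requester, kind) && atHeadEverywhere;
  }
  return atHeadEverywhere ? 'ok' : 'wait';
}
```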
This appears to give us the right behavior. So, in our database example at the beginning, the tests would each say:
grab mydatabase/mytest
This will create a shared lock on mydatabase, and an exclusive lock on mydatabase/mytest.
If the requester reaches the front of both queues, the grab request will succeed, and the task can proceed.
If another task comes along and requests:
grab mydatabase/yourtest
then the shared lock request for mydatabase is merged, and that task will also be at the head of the queues for all of its resources, and can proceed.
Now, if the big database cleanup task comes along, it will just say:
grab mydatabase
Since mydatabase here is at the end of the resource path, it will request an exclusive lock, which will not be merged with the previous ones but queued after them. The cleanup task will have to wait until all of the owners of the shared lock release their part, and only then can it proceed.
Should one of the test tasks decide to give it a try, it will again end up requesting a shared lock on mydatabase. Since a shared lock cannot be merged with an exclusive lock, the new shared lock will be queued after the exclusive lock, and the test task will have to wait until the cleanup task is done. Fairness is preserved, and it appears that we have all the right properties...
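Replaying that walkthrough against the sketch functions above (the requester names and the third test path are made up):

```javascript
// Replaying the example scenario with the grabPath sketch.
console.log(grabPath('mydatabase/mytest', 'test-A'));      // 'ok'   - shared on mydatabase, exclusive on mydatabase/mytest
console.log(grabPath('mydatabase/yourtest', 'test-B'));    // 'ok'   - merged into the shared lock on mydatabase
console.log(grabPath('mydatabase', 'cleanup'));            // 'wait' - exclusive lock queued behind the shared one
console.log(grabPath('mydatabase/anothertest', 'test-C')); // 'wait' - new shared lock queued behind the exclusive one
```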
Code is here - please be gentle :)