As previously mentioned, the most recent Unix/Linux adapters improve overall reconciliation performance. In the lab we've been playing with other ways to improve their performance further. One approach we've identified is an endpoint-side cache of the reconciliation data that is refreshed asynchronously from the reconciliation process. The downside of this approach is that the cache needs to be refreshed by code deployed on the endpoint, either via remote execution of the script or via a crontab entry on the endpoint itself. Because of this limitation it is unlikely to be rolled into the GA adapters, but it might be useful for specific endpoints (slow or large ones, for instance) or for customers who have tight control over their endpoints and want the reconciliation performance boost.
The caching change is made in the Unix/Linux reconciliation scripts included with the adapter. The change detects whether a cache file exists and is "fresh enough". If it is, the script returns the cached content, appropriately filtered, and exits. If it isn't, the cache file is deleted, and the reconciliation data is then calculated, cached, and returned. "Fresh enough" can be whatever you want it to mean -- in the example below it's defined as one day old. This allows up to daily reconciliations while ensuring that the data returned is, at most, 24 hours old.
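The freshness test is a standard shell idiom and can be tried on its own before touching the adapter scripts. The sketch below uses made-up file names; note that `date -d'-1 day'` is GNU date syntax, so this works on Linux but not on every Unix:

```shell
#!/bin/sh
# standalone sketch of the freshness test (file names here are examples)
cache_file=/tmp/freshness-demo.cache
cache_file_timetest=/tmp/freshness-demo.timetest

# pretend we have a freshly written cache
echo "cached data" > $cache_file

# stamp an empty marker file 1 day in the past, then use -nt:
# the cache is "fresh enough" if it is newer than the marker
touch -t `date -d'-1 day' +"%Y%m%d%H%M.%S"` $cache_file_timetest
if [ $cache_file -nt $cache_file_timetest ]; then
    echo fresh
else
    echo stale
fi
rm -f $cache_file $cache_file_timetest
```

Since the cache file was just written, it is newer than the day-old marker and the test reports it as fresh; change the `date -d` argument to tune the window.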
Here's a unified diff of the LinuxShadowPConnRes.sh script with the changes. Similar changes could easily be made for the other scripts.
# diff -u 5.1.5_adapter/LinuxShadowPConnRes.sh LinuxShadowPConnRes.sh-cache
--- 5.1.5_adapter/LinuxShadowPConnRes.sh 2010-04-19 09:23:40.000000000 -0600
+++ LinuxShadowPConnRes.sh-cache 2010-05-11 15:27:22.000000000 -0600
@@ -40,6 +40,39 @@
+# use the cache if possible
+# cache file location is an example; adjust as needed
+cache_file=/tmp/.LinuxShadowPConnRes.cache
+cache_file_timetest=${cache_file}.timetest
+filter="$1"
+generate_cache_file="false"
+
+# check to see if the file exists and is bigger than 0 bytes
+if [ -s $cache_file ]; then
+    # check to see if the cache is fresh enough. to do this we create an
+    # empty file with a specific timestamp, and use the -nt comparison operator
+    # to test for freshness.
+    # fresh enough, in this case, is 1 day old (date -d'-1 day')
+    # this can be changed to any date supported by the date -d command
+    touch -t `date -d'-1 day' +"%Y%m%d%H%M.%S"` $cache_file_timetest
+    if [ $cache_file -nt $cache_file_timetest ]; then
+        rm $cache_file_timetest
+        # we're fresh enough, return contents appropriately filtered
+        cat $cache_file | $filter
+        exit 0
+    else
+        rm $cache_file_timetest
+        # cache isn't fresh enough, remove it
+        rm $cache_file
+    fi
+fi
+
+# only generate the cache if our filter is "everything"
+if [ "$1" = "grep -e :" ]; then
+    generate_cache_file="true"
+    touch $cache_file
+    chmod 600 $cache_file
+fi
 #confirm that faillog is installed/configured
 $faillogcmd -u root > /dev/null 2>&1
@@ -161,6 +194,9 @@
+    if [ "$generate_cache_file" = "true" ]; then
+        echo $oneline >> $cache_file
+    fi
 #reset the Internal Field Separator
The modified script would replace the copy shipped in the RMI Dispatcher as well as being the one run on the endpoint (hint: for all scripts the second argument is 'grep -e :' to return all users (i.e., unfiltered) and the third is 'true' if using sudo or 'false' otherwise). The asynchronous run of the script would need to be done as the user doing the recon (i.e., root, or a non-root user via sudo), as the cache file is owned by that user and is only accessible to the owning user (chmod 600).
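For the crontab route, a nightly refresh in root's crontab on the endpoint might look like the sketch below. The install path is made up, and the exact argument list depends on your adapter version; here the unfiltered 'grep -e :' filter is passed where the `$1` check in the diff expects it, with 'false' indicating no sudo:

```
# root's crontab on the endpoint: rebuild the cache every night at 02:00
# /opt/IBM/adapter/LinuxShadowPConnRes.sh is an example path -- use the
# location where you deployed the modified script
0 2 * * * /opt/IBM/adapter/LinuxShadowPConnRes.sh 'grep -e :' 'false' > /dev/null 2>&1
```

Scheduling the refresh shortly before the nightly reconciliation window keeps the cached data as close to current as the freshness window allows.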
The nice thing about the caching change as currently coded is that it can be deployed to all endpoints without negatively impacting performance, while the endpoints that update the cache asynchronously see a significant speed improvement.