The Razorback team has released version 0.4.1 (yeah, we would have released 0.4, but we found some critical bugs that we really needed to fix before general release).  You can find the new version of Razorback here:  http://sfi.re/zQQOQ4.  We've done a lot of work on the internals of Razorback, and we've also added a couple of new nuggets.

First and foremost, you need to know that we made changes to both the API and the database schema.  We're getting close to the point where we will lock the API, but we've had to make some changes to support our goals, so you'll need to rework any custom nuggets you are working on and retool your database.  You'll also see some changes in the queueing setup, but those happen automatically.  I can tell you there will be additional changes in the 0.5 release, and then, hopefully, we're done.  Hopefully :)

The first change I'd like to discuss is the concept of locality.  Because of the large amount of network traffic the system can generate, we wanted to be able to take advantage of a shared file system to increase throughput and, depending on what kind of shared system you are using, reduce the amount of network traffic.  To do this we've developed the concept of locality.  If you share a locality with the dispatcher, you will access the data blocks directly from the shared disk system.  By default, the dispatcher locality is 1.  Any other locality will result in data blocks being transferred over the network in the standard way.  So if you have a fast shared disk system, be it NFS or a SAN setup, set the locality in api.conf and this should increase your throughput.
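For example, a nugget sitting on the same NFS export or SAN volume as the dispatcher could be configured along these lines.  The option name below is illustrative rather than the exact key, so check the api.conf that ships with 0.4.1 for the real syntax:

    # Illustrative api.conf excerpt -- the exact option name may differ.
    # Matching the dispatcher's locality (1 by default) tells the API to
    # read data blocks straight off the shared disk instead of pulling
    # them over the network.
    Locality = 1

Any nugget configured with a different locality simply falls back to the normal network transfer.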

Speaking of data transfer over the wire, we've added encrypted data transfer.  Not surprisingly, this is fairly CPU intensive in certain cases, but depending on your needs this may be a required piece of functionality.  The transfer goes over SSH/SFTP and uses the standard libssh library.  We've removed the old method of transferring data blocks by putting them in the queue (it resulted in too much data in the queue), so this is currently the supported method of network data transfer.  The username is automatically the nugget id, which is checked against the database, and the password is configured in api.conf.
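To give a feel for what that transfer looks like, here is a minimal, self-contained sketch of pulling a data block over SFTP with libssh.  This is not the actual Razorback transfer code: the function name, host, and paths are stand-ins and error handling is kept to the bare minimum.  It just mirrors the scheme described above, with the nugget id as the SSH username and the password taken from api.conf.

    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <libssh/libssh.h>
    #include <libssh/sftp.h>

    /* Illustrative sketch only: fetch a remote data block over SFTP.
     * In Razorback the username is the nugget id and the password comes
     * from api.conf; the parameter names here are stand-ins. */
    int fetch_datablock(const char *host, const char *nugget_id,
                        const char *password, const char *remote_path,
                        const char *local_path)
    {
        ssh_session ssh = ssh_new();
        if (ssh == NULL)
            return -1;

        ssh_options_set(ssh, SSH_OPTIONS_HOST, host);
        ssh_options_set(ssh, SSH_OPTIONS_USER, nugget_id);

        if (ssh_connect(ssh) != SSH_OK) {
            ssh_free(ssh);
            return -1;
        }

        /* Password authentication, checked on the dispatcher side */
        if (ssh_userauth_password(ssh, NULL, password) != SSH_AUTH_SUCCESS) {
            ssh_disconnect(ssh);
            ssh_free(ssh);
            return -1;
        }

        sftp_session sftp = sftp_new(ssh);
        if (sftp == NULL || sftp_init(sftp) != SSH_OK) {
            if (sftp) sftp_free(sftp);
            ssh_disconnect(ssh);
            ssh_free(ssh);
            return -1;
        }

        sftp_file src = sftp_open(sftp, remote_path, O_RDONLY, 0);
        FILE *dst = fopen(local_path, "wb");
        int rc = (src && dst) ? 0 : -1;

        if (rc == 0) {
            char buf[16384];
            ssize_t n;
            /* Stream the data block down in chunks */
            while ((n = sftp_read(src, buf, sizeof(buf))) > 0)
                fwrite(buf, 1, (size_t)n, dst);
            if (n < 0)
                rc = -1;
        }

        if (dst) fclose(dst);
        if (src) sftp_close(src);
        sftp_free(sftp);
        ssh_disconnect(ssh);
        ssh_free(ssh);
        return rc;
    }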

Another place we use the locality concept is in our new master/slave high-availability setup for the dispatcher.  Dispatchers in a master/slave setup must share a locality so that when they fail over, the new master has access to all of the data blocks that have been transferred to the dispatcher.  In the master/slave setup the routing table is shared between the servers, so in the case of failover none of the nuggets will need to be re-registered.  Because of the queue server setup, all of the pending requests will then be handled by the new master on failover.  CLI commands for viewing the status of the high-availability setup and for force-failing servers have also been added.

We've added three new nuggets.  Two of them, syslogNugget and razorTwit, demonstrate the new output nugget capability.  As alerts and status changes come into the dispatcher, details are placed into a queue that output nuggets can subscribe to.  This capability is intended to reduce the impact of database queries and to provide an alternate, installation-specific capability for logging.  See the nugget samples for more information; they are fairly straightforward.
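To show the shape of the idea, here is a toy sketch of what an output nugget along the lines of syslogNugget might do with an alert once it arrives.  The callback signature and the commented-out subscription call are hypothetical stand-ins for the real output nugget API (see the syslogNugget sample for the actual interface); the only real API used here is the standard syslog(3) interface.

    #include <syslog.h>

    /* Hypothetical callback shape: assume the output queue hands the
     * nugget one alert at a time as a formatted string.  The real
     * callback signature lives in the Razorback API headers. */
    static void alert_handler(const char *alert_text)
    {
        /* Forward the alert to the local syslog daemon instead of
         * writing it to the database. */
        syslog(LOG_ALERT, "razorback alert: %s", alert_text);
    }

    int main(void)
    {
        openlog("razorback-output", LOG_PID | LOG_NDELAY, LOG_DAEMON);

        /* Hypothetical subscription call -- stands in for however the
         * real output nugget API attaches alert_handler to the queue. */
        /* subscribe_to_output_queue(alert_handler); */
        alert_handler("example alert from the dispatcher");

        closelog();
        return 0;
    }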

Finally, we are introducing a beta build of the new PDFFox nugget built by Ryan Pentney.  The idea behind this nugget is to provide an open-source PDF analyzer that doesn't rely on a costly commercial application to work.  The project is still coming together, and you should probably run the latest from trunk, but it's coming along nicely and provides solid detection.

So that's it, besides a bunch of bug fixes from 0.3.  We're now working on the Q1 2012 development cycle, and it entirely revolves around the Windows API and several nuggets to plug into anti-virus systems.  We're also deploying our first major in-house production build of Razorback, so numerous performance tweaks are in the works.  Expect information about how to tweak each of the components for maximum performance as we complete the Q1 run.

For those who use our VM, the updated 0.4.1 VM should be up shortly; Tom Judge is making a few tweaks on that side before we release it.

We want to say thanks for giving Razorback a try.  Let us know if you have any issues via the mailing list or #razorback on freenode.