Tuesday, January 26, 2010

Rule release for today - January 26th 2010

A few additions, some modifications. Mostly a maintenance release. Check it out:


Monday, January 25, 2010

Using byte_jump as a Detection Mechanism

This is just a quick tidbit about writing effective snort rules that I thought I would share. I was writing a Snort shared object (SO) rule for demonstration purposes. I was going to use a "vulnerability" where the DATA section, which is the last part of the packet, specifies a size that is smaller than the actual amount of data left in the payload.

The idea is based on a fairly standard vulnerability we see often, i.e. the size specified in the packet is used by the server to allocate memory and then the code simply copies all of the remaining bytes from the payload, causing an overflow condition if there are more bytes remaining than the size reported in the size field. Initially, I did the obvious, which was to write an SO rule that reads the size value from the payload then calculates the number of bytes remaining and alerts if there are more than the specified number of bytes left.

Here's the thing: It's actually better done from a very simple text rule using byte_jump as a detection mechanism, for example:
alert tcp $EXTERNAL_NET any -> $HOME_NET 4444 (msg:"MISC byte_jump invalid
size test"; flow:to_server,established; content:"MESG"; content:"NAME";
content:"DATA"; byte_jump:4,0,relative; classtype:misc-activity; sid:64999;)
This rule will alert if the byte_jump succeeds, meaning there is extra data after the specified size in the DATA section. As you can see, byte_jump - typically used to move the detection cursor to another location in the payload for further content checks - can also be used effectively as a detection mechanism itself.

Now on to rework my example to something that actually requires C code to solve!

The following payload was used with the above example rule.

Protocol structure:
MESG[Total size of all remaining data]
NAME[Size of Record Name][Record Name]
DATA[Size of Data][Data]
Size fields are four byte integers, big endian.

Packet payload:
00000000  4d 45 53 47 00 00 00 64  4e 41 4d 45 00 00 00 10  |MESG...dNAME....|
00000010  42 6c 6f 67 20 49 6e 66  6f 72 6d 61 74 69 6f 6e  |Blog Information|
00000020  44 41 54 41 00 00 00 1f  42 65 20 73 75 72 65 20  |DATA....Be sure |
00000030  74 6f 20 63 68 65 63 6b  20 6f 75 74 20 6f 75 72  |to check out our|
00000040  20 62 6c 6f 67 20 61 74  20 68 74 74 70 3a 2f 2f  | blog at http://|
00000050  76 72 74 2d 73 6f 75 72  63 65 66 69 72 65 2e 62  |vrt-sourcefire.b|
00000060  6c 6f 67 73 70 6f 74 2e  63 6f 6d 2f              |logspot.com/|
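As a sanity check, the size mismatch in the payload above can be reproduced in a few lines of Python. This is only an illustration of the check the rule performs implicitly, not Snort's actual byte_jump implementation:

```python
import struct

# The packet payload from the hex dump above.
payload = (
    b"MESG\x00\x00\x00\x64"
    b"NAME\x00\x00\x00\x10" + b"Blog Information" +
    b"DATA\x00\x00\x00\x1f" +
    b"Be sure to check out our blog at http://vrt-sourcefire.blogspot.com/"
)

def data_section_overflows(buf):
    """Return True if the DATA record claims fewer bytes than actually remain.

    This roughly mirrors what byte_jump:4,0,relative tests: after matching
    "DATA", read the big-endian 4-byte size and see whether the payload still
    has data left past the declared length.
    """
    idx = buf.index(b"DATA")
    (declared,) = struct.unpack_from(">I", buf, idx + 4)
    remaining = len(buf) - (idx + 8)  # bytes left after the size field
    return remaining > declared

# Here the DATA record declares 0x1f (31) bytes but 68 bytes remain,
# so the check fires.
```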

Wednesday, January 20, 2010

The Acrobat JavaScript Blocklist Framework

Adobe recently announced and released the Adobe Reader and Acrobat JavaScript Blocklist Framework. I've had a little bit of time to play with it and would just like to share my thoughts. First of all, I am very pleased with this new blocklisting feature. Until now, when we knew about a 0-day being actively exploited in the wild using JavaScript in some manner, we would just turn off JavaScript in Adobe products (Reader, Acrobat, etc.) altogether. Personally, I could live without having JavaScript in my documents, but that's a totally different discussion. I understand why some people might want that feature for their PDF documents and why, for them at least, turning JavaScript completely off would not be an option.

So let's say, for example, that you are running Adobe Reader 9.2.0, which is vulnerable to the DocMedia.newPlayer JavaScript API bug. You decide that it is in your best interest to never allow that method to execute. How would you go about blocking that? The official document put out by Adobe says the "JavaScript Blocklist can be in two locations" on a 32-bit Windows system:
- HKLM\SOFTWARE\Adobe\<product>\<version>\JavaScriptPerms\tBlackList
- HKLM\SOFTWARE\Policies\Adobe\<product>\<version>\FeatureLockDown\cJavaScriptPerms\tBlackList
The first key is "modified by Acrobat and Adobe Reader patches whenever an API is deemed vulnerable" (this feature is currently in testing with a select group of beta testers). I decided to modify the second registry key. The manual configuration of this registry key was tricky since... it did not exist on my system with Adobe Reader 9.2.0 installed. Thankfully it's not hard to create a registry key, and I did just that. Note that everything you type is case-sensitive when it comes to the registry keys related to the blocklist, from the value name to the values themselves. I spent a ridiculous amount of time trying to figure out why the blocklist wasn't working; it was because I had manually created a key called tBlacklist instead of tBlackList. Then came the time to test the effectiveness of the blocklist: I entered the JavaScript API function docmedia.newplayer in the registry as indicated in Adobe's document.
How could I verify that I had typed it in correctly? There was obviously no confirmation that I had blocklisted docmedia.newplayer. I went through the options of Adobe Reader and nowhere was there a mention that docmedia.newplayer was blocklisted. What was I going to do next? Wait until I received a PDF that had code to exploit the vulnerability to see if the blocklist worked as it was supposed to? Instead, I decided to create a simple, harmless PDF that invoked that function to see if the API call would get blocked. I could successfully open the file without the function being blocked. This time, I quickly pinpointed the reason: API function names are case-sensitive, and entering docmedia.newplayer is not the same as entering DocMedia.newPlayer. My concern was then that obfuscation techniques in JavaScript could circumvent the blocklist. I tried basic evasion techniques:
  • obfuscation of function names and function contents
  • lexical transformation
  • control transformation
  • data transformation (data structure)
There was no fooling Adobe Reader into executing the blocked function. It seems that Adobe Reader hooks the function calls rather than going through the code trying to perform a string match. As of today, there isn't an official list of Adobe JavaScript API functions to block, but I'd suggest adding the following to your blocklist, simply because these functions have been heavily exploited in the past several months:
  • Util.printf (CVE-2008-2992)
  • Collab.getIcon (CVE-2009-0927)
  • Spell.customDictionaryOpen (CVE-2009-1493)
  • Doc.syncAnnotScan (CVE-2009-2990)
  • Doc.getAnnots (CVE-2009-1492)
  • DocMedia.newPlayer (CVE-2009-4324)
Very often, malware will escape JavaScript code in order to avoid detection; the code is unescaped and evaluated at runtime. Therefore, these two functions, unescape() and eval(), are commonly seen in malware, usually used one right after the other.
Unfortunately, these functions cannot be blocklisted through the Acrobat JavaScript Blocklist Framework. Maybe it's just because they aren't, per se, Adobe JavaScript API functions? We would love to be able to do that in the future, though. I also wanted to blocklist these two functions because they have been exploited in the past:
- app.CheckForUpdate (CVE-2008-2042)
- Collab.collectEmailInfo (CVE-2007-5659)
Turns out these are unpublicized Adobe JavaScript functions and, perhaps because of their nature, cannot be blocklisted. Finally, here's the blocklist that I propose, should you want to use it:
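One way to lay that blocklist out is as a single registry value. The following is a hypothetical .reg sketch for Adobe Reader 9.x on 32-bit Windows, not Adobe's official guidance - verify the exact key path and the pipe-character separator against Adobe's framework document before using it:

```
Windows Registry Editor Version 5.00

; Hypothetical example; remember that both the value name (tBlackList)
; and the function names are case-sensitive.
[HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Adobe\Acrobat Reader\9.0\FeatureLockDown\cJavaScriptPerms]
"tBlackList"="Util.printf|Collab.getIcon|Spell.customDictionaryOpen|Doc.syncAnnotScan|Doc.getAnnots|DocMedia.newPlayer"
```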
And here are simple, harmless PDF files to test the implementation of your blocklist. When a function is successfully blocked, you will see a yellow bar displaying "A JavaScript that this document uses is disabled for security reasons".

Friday, January 15, 2010

Rule release for today - January 15th 2010

It seems that a couple of large companies were targeted with a vulnerability in Internet Explorer. Today's release contains a rule to detect attacks targeting this vulnerability.

Check out the details at http://www.snort.org/vrt/advisories/2010/01/15/vrt-rules-2010-01-15.html

Thursday, January 14, 2010

January 2010 Vulnerability Report

Sourcefire VRT Vulnerability Report January 2010 from Sourcefire VRT on Vimeo.

January 2010 Vulnerability Report

This month Alain Zidouemba talks about Microsoft Tuesday, Adobe patches, Snort and ClamAV releases. From the beach. Where it's warm. While the rest of us freeze. Just saying. Putting it out there.

Tuesday, January 12, 2010

Microsoft Tuesday Coverage for January 2010

One advisory from Microsoft to start the year, one rule from us to cover it. Check it out here: http://www.snort.org/vrt/advisories/2010/01/12/vrt-rules-2010-01-12.html

Friday, January 8, 2010

VRT Guide To IDS Ruleset Tuning

Everyone who's ever used Snort, or any other IDS for that matter, for any length of time knows that in order to get the most out of their system, they need to tune it. Most people have at least a basic idea of what that means - choosing the right rules to run, placing the system at the right spot in the network, etc. - but judging from some of the questions that routinely come in to the VRT, apparently there are a lot of people out there who lack a full understanding of how to pick the right rules for their environment. I'm hoping this guide will help those people, and serve as a reminder to those who already know what they're doing.

Let me start off by saying that simply turning on all of the VRT Certified Rules - or all the rules from any published ruleset - is a Bad Idea™, especially if you're running in IPS mode and dropping packets. A number of rules are meant to be advisory in nature; for example, SID 11968 ("VOIP-SIP inbound INVITE message"), if configured with IP lists appropriate to your voice network, will tell you if you've got SIP traffic on segments of your network where it's not supposed to be. If you blindly enable that rule on a production network, you could instantly take down your phone system. Other rules can be performance-intensive, and should only be run if you really need the coverage. Thinking that the more rules you have, the better your protection will be (I'm looking at you, MSSPs) can cause you a world of hurt if you're not careful.

Your life as an IDS analyst will be much easier if you start by eliminating large chunks of the ruleset from your policy, so that you've got a manageable number of rules to individually look through. To that end, the VRT has done a pair of things to help you out. Historically, our rules have been broken down into large categories - Attack-Responses, FTP, Oracle, Web-IIS, etc. It's trivial to look at the 53 different categories we provide and turn entire groups of rules off at a time depending on their relevance to your situation; after all, if you don't run any Oracle servers, you can turn off the entire Oracle category without even worrying about it. For open-source users, this is as simple as commenting out, say, the "include $RULE_PATH/oracle.rules" line in your snort.conf; Sourcefire customers can take a similar step through the administration interface.
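For open-source users, the change is literally one comment character per category. For instance (the paths assume the stock snort.conf layout):

```
# snort.conf -- enable only the categories relevant to your network
include $RULE_PATH/web-iis.rules
include $RULE_PATH/ftp.rules
# include $RULE_PATH/oracle.rules    # no Oracle servers here, so skip it
```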

Some of those categories, however, are relatively broad, and can't be turned off in one fell swoop - for example, Web-Client encompasses attacks against everything from Adobe Reader to Internet Explorer, from Mozilla Firefox to RealPlayer. Recognizing this shortcoming, Sourcefire added the "metadata" keyword to the ruleset in January of 2006. That keyword's primary purpose is to help collect rules into default policies, maintained by the VRT, so that users can assess the level of security they think is relevant to their network, and then have a recommended collection of rules to fit their security stance. One of the three default policies we maintain - Connectivity Over Security, Balanced, and Security Over Connectivity - should be a reasonable starting point for most real-world Snort administrators. People with Sourcefire appliances can already choose one of these policies as a starting point for their own custom IDS policy; open source users are now able to use a recently released feature from JJ Cummings' Pulled Pork tool to create policies based on metadata as well.

We weigh a number of factors when determining which policies, if any, to include a rule in; since these factors will also be relevant to anyone reviewing an individual rule on their network, they're worth listing here:

  • Impact of the vulnerability: Essentially the same process that is used to determine a CVSS score. Can the vulnerability be exploited remotely? Are authentication credentials required? How much user interaction is required? How reliably can the bug be exploited, and what type of compromise results from successful exploitation? Are there public exploits or proofs-of-concept? How widely adopted is the software in question? Obviously, a simple exploit in, say, a core Windows component that gives administrative privileges and has a virus in the wild is going to be included in all policies, whereas an unproven bug in Jim Bo Bob's PHP Bulletin Board that results in cross-site scripting will not. For the end user attempting to figure this out on their own, a close reading of the relevant CVE entry, Bugtraq listing, and vendor response (if available) should provide most, if not all, of this information; additionally, CVSS scores are publicly available, and can often serve as a simple shortcut for determining vulnerability impact.

  • Reliability of the rule: Simply put, some rules do detection better than others. When SID 14896 - which catches the exploit used by Conficker - fires, you're virtually guaranteed that malicious activity is in progress. DCE/RPC is a well-defined protocol that the VRT understands very well, and a very specific set of conditions must be present in order to trigger the vulnerability; thus, false positives will be minimal to nonexistent. On the other hand, when SID 7070 - which is designed as a generic rule to catch cross-site scripting attacks - fires, the likelihood of a false positive is relatively high, since the rule was intentionally written very broadly. While it may be difficult for an end-user to gauge a rule's reliability accurately, a good rule of thumb to use if you're trying to figure this out yourself is to look at the number of rule options and the size of the content matches - in both cases, the more, the merrier.

  • Performance impact of the rule: While a given rule's performance will necessarily vary based upon the environment it's operating in, there are several things that we know will almost always result in either a fast rule or a slow rule. For example, if your rule consists solely of a pair of long, unique content matches, it should be blazingly fast; in fact, the fast pattern matcher becomes exponentially faster as you feed it longer and longer content matches. Options like byte_test and byte_jump are also particularly quick. Complex PCREs - especially those that contain constructs like ".*" - will be slow, as will two- or three-byte content matches - especially common ones such as |BM| for bitmap headers - particularly if they're not bounded by clauses such as depth, offset, etc. Again, this can be tough for an end-user to fully understand on their own; as a general rule, though, the longer and more unique the content match you start with, the faster the rule will be.
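As a purely hypothetical illustration (these SIDs and content strings are invented, not from the real ruleset), the difference often comes down to how much work you hand the fast pattern matcher up front:

```
# Likely fast: a long, unique content match anchors the fast pattern matcher,
# so the rest of the rule body is rarely evaluated at all.
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"EXAMPLE long unique match"; flow:to_server,established; content:"/scripts/some_vulnerable_handler.dll?cmd="; nocase; classtype:web-application-attack; sid:1000001;)

# Likely slow: a two-byte, common content match ("BM") plus an unbounded ".*"
# PCRE forces expensive evaluation on a large fraction of traffic.
alert tcp $EXTERNAL_NET any -> $HOME_NET 80 (msg:"EXAMPLE short match with greedy pcre"; flow:to_server,established; content:"BM"; pcre:"/BM.*\x90{4}/"; classtype:web-application-attack; sid:1000002;)
```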

  • Applicability of the rule: This factor is, of course, the most variable of all, depending on the environment a rule is being run in. However, some rules are clearly more applicable to a broad base of users than others: for example, a rule that catches an in-the-wild exploit will appeal to many more people than a rule designed to block Yahoo Messenger conversations. The good news for those of you playing at home is that this is the easiest metric to assess on-site; after all, you know your company's IT policy, you know what software you run, and as a result, you know whether a given rule will apply in your environment.

  • Age of the vulnerability: The longer it's been since a patch for a vulnerable piece of software was released, the higher the likelihood that any given system running that software has been patched, and the fewer vulnerable hosts remain. That's why, for example, SID 2318 - which covers a vulnerability in the open-source CVS content tracking system from 2003 - is not included in any policy, despite the fact that the exploit allowed attackers to write arbitrary files to vulnerable servers. If you've patched all of your machines against a given exploit, there's no reason to have your IDS look for that exploit (with the one important exception that if a rule is looking for a vulnerability that has occurred across a whole class of software - i.e. a buffer overflow in an FTP command - it may be a good idea to keep it enabled to protect against future vulnerabilities of that type).

So what do you do once you've narrowed your ruleset down to something more manageable? If you're in a hurry to get things deployed, it's probably OK to start running Snort at this point; you can tune as you go. From here on out, it's a matter of reviewing individual rules, which can take a considerable amount of time to do for the thousands of potential rules you may wish to be running. I'll go through some examples here, to give people a feel for what all can be involved in the process.

  • SID 818 ("WEB-CGI dcforum.cgi access"): Access to a known vulnerable script from 2001. This one's obvious - turn it off, no one runs this any more - but I'm including this one for a reason: to make the point that with any rule more than 5 years old (i.e. those with a SID under 3000 or so), the default assumption is that it should be turned off, and that a good reason should be found before you decide to enable it.

  • SID 2412 ("ATTACK-RESPONSES Microsoft cmd.exe banner"): The banner that is displayed when a Windows shell opens has left your network on a port other than FTP or Telnet (where you might expect to see such a banner normally). I'm including this as an example of an older rule where it's met the burden of proof to stay on: any time you see a shell opening up across the network, you'll want to know about it, because chances are high it means you've been compromised.

  • SID 13415 ("EXPLOIT CA BrightStor cheyenneds mailslot overflow"): This one requires a bit more thought. Obviously, you can disable it if you're not running CA BrightStor; however, if you are, you need to assess your patching process, and determine how confident you are that a 4-year-old server-side bug has been patched throughout your organization (which, unfortunately, is not always the case). Given that it has a 19-byte content clause, relatively simple detection otherwise, and is running on a specific port (138), if you're at all unsure as to your patch status, it wouldn't hurt to run it - particularly since this is a server-side exploit that could result in administrative privileges for an attacker, and an exploit was known to be running around in the wild.

  • SID 1842 ("IMAP login buffer overflow attempt"): Sure, the oldest reference in here is from 1999; however, you can see that there are lots of references across the years, up through 2007. Combine that with the fact that no specific product is mentioned in the name, and it's obvious that this rule catches a type of vulnerability commonly found among many different IMAP servers. If you're running one at all, it's probably best to leave this rule enabled.

  • SID 13517 ("EXPLOIT Apple QTIF malformed idsc atom"): Based solely on the name, you might be tempted to discard this. Looking up the CVE entry, however, shows that it's a buffer overflow in a widely deployed program, QuickTime, and that it's only two years old (which is still fairly current in the world of client-side exploits, where patch management is a much bigger issue than server-side). Checking the Bugtraq entry's exploit section, we see that there are no known exploits; combine this with the fact that it's got a four-byte content match and a single byte_test, which means it may be prone to false positives, and it makes the decision of whether to enable the rule almost a matter of personal preference and/or paranoia level.

  • SID 6510 ("WEB-CLIENT Internet Explorer mhtml uri shortcut buffer overflow attempt"): Yes, this is an older rule - the vulnerability is three and a half years old - but you can be virtually guaranteed that someone in your organization is running Internet Explorer, and it would be no surprise if you had, say, a sales guy whose laptop hadn't been updated in that time frame. The SecurityFocus exploits page only has proof-of-concept exploits, but buffer overflows are notoriously easy to turn into attacks that result in code execution. At first glance the rule may look slow - one of the two content clauses is "URL", and the PCRE is 64 characters long - but in reality performance should be solid because "mhtml|3A|//" is rare in web traffic, and the PCRE is looking for a relatively well-defined string. If I were looking at this, I'd leave the rule on for another year or so, just to be sure.

  • SID 15727 ("POLICY Attempted download of a PDF with embedded Flash"): This rule doesn't detect any specific exploits, just a type of document that has been known to have security problems that are difficult to detect accurately. Given the recent rash of 0-day attacks against Adobe products, you need to consider whether your organization actually has legitimate reason to be working with PDF files that have Flash videos embedded within them (a practice that's not particularly widespread). Unless you can identify specific reasons that you need such files, it's probably a good idea to enable this rule, to help protect against vulnerabilities you may not even know exist yet.

Does this sound like a lot of work? Definitely - but at the end of the day, doing this work will save you time later, as tuning like this will help to alleviate potential false positives, and will allow you to focus on actual attacks against your systems.

Wednesday, January 6, 2010

Adobe Responds to Vendor Response Blog Post

Hey folks, Brad Arkin, Director, Product Security & Privacy for Adobe Systems, left a note in the comments section of my blog entry on vendor response (http://vrt-sourcefire.blogspot.com/2009/12/matts-guide-to-vendor-response.html). In that post, I expressed my concern on a number of issues related to Adobe Systems' response capability. Since most people who read that entry would not see the comment, I thought it fair to post Brad's response here. If you forwarded the link of the original post, please do so again so Adobe's side of the story is heard.

Brad Arkin's Comment

Hi Matt, A couple of corrections regarding your JBIG2 timeline. Adobe learned of the bug on January 16, 2009 and issued the first patch for version 9 on Mac/Win platforms on March 10, 2009. Although this is a tighter range than you mention in your post, we certainly weren't satisfied with this response time. Our investments since then in improving patch turnaround time allowed us to get three zero-day patches out to users in around two weeks in April, July, and October. The ship schedule for the patch to the December bug was complicated by a variety of factors, not all of which were covered in the ComputerWorld article. My ASSET blog post provides some details here: http://blogs.adobe.com/asset/2009/12/background_on_reader_update_sh.html and a podcast I did with the Threatpost guys provides further insight into what went into the schedule decision: http://threatpost.com/en_us/blogs/brad-arkin-adobe-reader-zero-day-flaws-and-security-response-121709 Our goal in this incident and every incident is to help protect as many users as possible as quickly as possible against the threats that we are aware of. Happy to talk more if you are interested. I'm @bradarkin on twitter or you can get my mobile number from Matt W. if you'd like to talk on the phone.
Thanks, Brad Arkin, Director, Product Security & Privacy, Adobe Systems

Matt's Reply

First: Brad, thanks for providing the corrections to the dates I used in my post. I'll update my material and, in the future, I will provide more accurate data. I have reviewed both your blog post and the podcast you referenced in your response, and I strongly recommend that anyone reading this do the same. Brad fully lays out Adobe's reasoning for delaying the patch until the quarterly update there. The podcast is particularly informative, as Ryan Naraine did an excellent job of challenging Brad on several different fronts. That being said, I do want to say a couple of things after reading Brad's blog. Let's start on a positive note: I was very pleased to see the JavaScript Blocklisting functionality when you delivered it. We actually brought it up in the October 2009 Vulnerability Report (http://vrt-sourcefire.blogspot.com/2009/10/october-2009-vulnerability-report.html). I thought it was an excellent mitigation possibility for those organizations who simply couldn't do without JavaScript. We've tested it here and it seems that, while it is unusually difficult to configure, it does an excellent job of blocking those functions that it can (I am disappointed that we can't target unescape(), though). Just remember that it is a mitigation only for those who have the infrastructure and expertise to use it. My real concern is that Adobe continues to make decisions and statements that some (me, for example) might read as indicating that either Adobe does not understand the impact that actively exploited vulnerabilities have on their customers, or that Adobe simply does not place a great deal of value on that impact.
Take, for example, this statement from Brad's blog: "Customer schedules - The next quarterly security update for Adobe Reader and Acrobat, scheduled for release on January 12, 2010, will address a number of security vulnerabilities that were responsibly disclosed to Adobe. We are eager to get fixes for these issues out to our users on schedule. Many organizations are in the process of preparing for the January 12, 2010 update. The delay an out-of-cycle security update would force on the regularly scheduled quarterly release represents a significant negative. Additionally, an informal poll we conducted indicated that most of the organizations we talked with were in favor of the second option to better align with their schedules." In reading that, all I can think is this: what "significant negative" could possibly justify delaying the roll-out of a patch that addresses an actively exploited vulnerability? With the exception of the Illustrator 0-day, which I also feel should be patched immediately, what is in Adobe's January 12th patch that approaches the severity of what is facing their customers right now? Clearly there is a benefit to reaching out to your larger customers - those who have the most momentum and infrastructure to manage - and working to understand their needs. But there are many, many people, companies and organizations who don't even have the ability to use Adobe's blocklist, don't have the expertise to understand the threat and don't have the budget to affect the decision-making process of large software vendors. That set of people is utterly ignored in the decision-making process that Adobe has laid out. I do want to end by saying I've seen significant improvement in Adobe's response to these types of issues. Don't let the fact that I have complaints about where Adobe is now lead you to believe I don't appreciate the improvements they've made.
The difference between Adobe's response to the December 0-day and their ability to respond to the JBIG2 exploit clearly indicates that Adobe has put time, effort and money into improving their response capability. I anticipate continuing to see improvements in their ability to respond to rapidly developing threats. I also look forward to seeing improvements in the Adobe software set that will make it easier to update and manage. Brad, you can find me at kpyke on twitter (I'm now following you and I now see that you are following me (hi!)) and you (or anyone else) can hit me with my first initial and last name @sourcefire.com. Also, I'll probably drop you a call sometime later this week.

Rule release for today - January 6th 2010

First rule release of the year: a few updates, a few modifications. Check it out here: http://www.snort.org/vrt/advisories/2010/01/06/vrt-rules-2010-01-06.html

Friday, January 1, 2010

New Year, New Snort

(I'm doing this now mainly to bump the boss's post down a slot... :))

Hey folks, we have some updated Snort information for you. Here is some information on the latest production build of Snort, and on our first beta build of Snort 2.8.6.

Snort Update:

A quick note about the latest Snort release (find it here: http://dl.snort.org/snort-current/): this release resolves an issue discovered during NSS testing that allowed for an evasion method in RPC rules. Note these RPC rules are not Windows RPC/SMB/NETBIOS etc., but the Sun RPC flavor. This is the only evasion case that occurred during NSS testing, so we're excited to have it out. Make sure you keep your Snort up to date!

Snort 2.8.6 BETA!

OK, so the Snort 2.8.6 beta is up on snort.org. (Don't tell marketing, but you can just wget http://dl.snort.org/snort-beta/snort-2.8.6-beta.tar.gz) While there are still some features that the dev team is planning on adding, this beta build already serves up some features that folks have been clamoring for. Steve Sturges, supreme high commander of the Snort development team, has put together some information for me to pass on.

First up, is the ability to handle gzip compressed server response data over http connections. To activate this feature, make sure you add the --enable-zlib argument to your configure script, and modify your http_inspect_server configuration field. There are two new depth fields associated with this feature. The first is the compress_depth field, which specifies the maximum amount of packet payload to decompress. If you don't modify the configuration at all, Snort will use the default value of 1460 bytes. Also, there is a decompress_depth configuration that specifies the maximum amount of decompressed data to unpack. The range is quite large, from 0 to 20480 bytes, with the default setting being 2920. As part of your testing for your environment, play with different values in these depth fields, so you get an understanding of any performance impact this feature will have.
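For reference, a configuration enabling this might look roughly like the following. This is a sketch only - option placement changed during the 2.8.x series, so check the README.http_inspect that ships with your build:

```
# Built with: ./configure --enable-zlib ...  Then in snort.conf:
preprocessor http_inspect: global iis_unicode_map unicode.map 1252 \
    compress_depth 1460 decompress_depth 2920
preprocessor http_inspect_server: server default \
    profile all ports { 80 8080 } \
    extended_response_inspection \
    inspect_gzip
```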

Another frequently requested feature is the ability to detect the transfer of Personally Identifiable Information (PII) passing across the wire. Snort 2.8.6 will have a sensitive_data preprocessor that will sort of combine pattern recognition and thresholding. There are some preset patterns that you can call: credit_card, us_social, us_social_nodashes and email. But you can also perform a limited regex-style match to define your own critical patterns. The rule format then allows you to specify how many times you need to see the patterns before you alert. So, for example:

alert tcp $HOME_NET any -> $EXTERNAL_NET $SMTP_PORTS (msg:"Credit Card numbers sent over email"; gid:138; sid:1000; rev:1; sd_pattern:4,credit_card; metadata:service smtp;)

This would alert if four or more credit card numbers were seen going over the defined SMTP_PORTS. As an aside, the nice thing about the credit card pattern match keyword is that it matches 15- or 16-digit numbers, separated by spaces, dashes or nothing, and covers valid numbers for Visa, Mastercard, Discover and American Express. Note that the GID for this must be 138, and you may not combine the sd_pattern keyword with any other rule options.
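To get a feel for what such a preset has to handle, here is a rough Python approximation of a 15/16-digit card matcher with those issuer prefixes. This is purely illustrative and not Snort's actual sensitive_data implementation:

```python
import re

# Digits optionally separated by single spaces or dashes;
# 15 digits (Amex) or 16 digits (Visa, MasterCard, Discover) total.
CARD_RE = re.compile(r"\b(?:\d[ -]?){14,15}\d\b")

def find_card_numbers(text):
    """Return candidate card numbers (digits only) found in text."""
    hits = []
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if len(digits) == 15 and digits[:2] in ("34", "37"):
            hits.append(digits)                     # American Express
        elif len(digits) == 16 and (
            digits[0] == "4"                        # Visa
            or "51" <= digits[:2] <= "55"           # MasterCard
            or digits[:4] == "6011"                 # Discover
        ):
            hits.append(digits)
    return hits
```

An sd_pattern:4,credit_card rule is then roughly analogous to alerting once such a matcher has fired four times over a session.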

We'll put together some more sample rules and get them out to you here. We might also throw together some PCAPs and toss them up on labs so you can test off the wire to make sure your configuration is good. The VRT and the Snort dev team are also very interested in what sort of patterns you decide to use on your end, so if you come up with something you're willing to share, let us know.

Now, deep in the recesses of Snort, and after a number of those engineering meetings that leave ancient mythical symbols on the white boards, a new pattern matcher has been born (hatched?). The new ac-split pattern matcher is much more efficient than the current default matcher (ac-bnfa) but is closer to ac-full in its memory consumption.

To give it a shot, modify your snort config to include:

config detection: search-method ac-split

Now, I'm going to be honest, I don't completely understand what this means (yet... it's been a busy December...) but the High Commander tells me that this is an alias for search-method ac, split-any-any. The High Commander also says that, in order to keep memory usage in check, try the following modifications to the config detection line:

max-pattern-len 20, search-optimize.

Which would result in the following full config line:

config detection: search-method ac-split, max-pattern-len 20, search-optimize

For my money, I'd try the ac-split method with and without the memory optimizations and see how your throughput is. Make sure you are looking at a typical amount of traffic and keep some stats on memory usage, CPU usage and throughput.

More info on 2.8.6 as things develop, get to testing and remember that you can provide feedback to snort-beta@sourcefire.com. We'll also get you some more in-depth technical information on the various features as the final build firms up.

Happy New Year, folks. Be safe and give your loved ones a hug.