Thursday, December 31, 2009

The Last List of 2009 - Predicting Security in 2010

As the guy in charge, I've been too busy with the day-to-day operations of the Sourcefire VRT to create the clichéd annual "Top 10 List" of things that have come and gone, or things that will happen in the future. However, I've procrastinated long enough on this topic, so without further ado, here are my predictions for 2010. I only managed to come up with five, but I hope you will enjoy them. If it turns out that I'm wrong, I expect you to hold them against me in 2011.

1. The Cloud Bubble Will Burst - We've seen the initial technical problems in this space: Twitter going offline, the loss of all the T-Mobile Sidekick data, EC2 having terrible uptime and Gmail outages. These failures reflect the standard trend of all new or emerging services. Next comes the predictable trend of using these services for nefarious things, which we are already starting to see with EC2 and Twitter being used as C&C. Soon, I predict, we'll see a compromise of a prominent Cloud provider that spews forth data at a volume never before seen. Finally, watch out for snake oil in the Cloud security market; in 2010 everything is going to have the word Cloud in front of it. The great thing about the Cloud is you don't have to deal with how it works; the worst thing about the Cloud is you don't get to know how it works.

2. The Apple Honeymoon is Over - I love Apple. I have my Mac, my iPhone, my iPod, my AirPorts, and all manner of other Apple devices either on my person or in my house. These devices do what I tell them, don't break very often, and have the features I use all the time. The only real problem I have with them is that every single application has 1,000 vulnerabilities waiting to be discovered. For years Apple has pounded us with the message that they are more secure than Microsoft, and that if Mom and Dad buy one of these shiny devices they won't have to worry about malware and viruses any more. Well, Mom and Dad listened, and now Apple owns a large segment of the high-end laptop market and just about the entire smart phone market. This means that Apple now has market share, and with market share they become an attractive target. As Windows 7 makes exploitation more difficult, and bad guys increasingly rely on social engineering in their mass attacks, this segment of the market won't be forgotten. Expect more vulnerabilities and malware for Apple in 2010.

3. Mobile Device Targets - In a similar light to Apple, watch out for the emergence of mobile phone vulnerabilities and active threats. We've started to see people dip their toes into this segment (remember the hacked iPhone worm?) with some targeted pieces of malware for various platforms. Also, Charlie Miller probably has all our iPhone data at this point. But that's no matter, as at least you know where your data is. I'm willing to bet we'll see a number of other vulnerabilities and more sophisticated targeted attacks against your favorite mobile phone in 2010.

4. Prolific Desktop Software Takes a Beating - Adobe represents the first major crack in the dam of vendors who are going to take a security beating in 2010. If you make software and lots of people use it, you are going to be a target for vulnerability hunters. There is just too much money in it to pass up, either through programs like VCP (iDefense) or through some grayhat/blackhat vulnerability purchasing program. Once the first vulnerability shows up in any prolific software package, expect a hundred more to come shortly thereafter. If you are a vendor and you are not prepared for this onslaught, be prepared to lose market share and take a PR beating. Make sure you have your bug triage process in place, and have a plan for communicating with your customers about problems and getting them timely updates.

5. Critical Infrastructure Goes Sideways - The debate over critical infrastructure security, controls, and hype spins out of control in the political sector. If most of Congress believes the Internet is a bunch of tubes, it'll be beyond funny how this plays out in the media and in the compliance space. If you're classified as a critical infrastructure provider, I suggest you start getting your ducks in a row when it comes to security. If you don't, Washington is going to have O-Scope IDSes sitting on your analog controls before they're done making the world safe. Also, expect at least one garage door opener to turn off some neighborhood, now that everyone is rolling out Smart Meters.

While these are just predictions, I can give you some things that will happen in 2010. It's a complete guarantee that the VRT will blow something up, film it, and post it on the Internet for your enjoyment. I can also guarantee that in 2010 one of us will do something completely genius (stupid may be a better word) that will once again get a substance or kids' toy banned from the office. Until then, I'm off; it's late, and it is time for a drink.

Wednesday, December 30, 2009

Matt's Guide to Vendor Response

Well...it's that weird period between Christmas and New Year's, and I've realized that I haven't gotten anything for those wonderful people that keep the VRT employed. So as a gift to you, software vendor, I present Matt's Guide to Vendor Response. Now...this is a complicated subject. It has to be, because every company I've dealt with has had to work through it painfully. Hopefully, this guide will help whoever follows in the footsteps of vendors past to more easily move from "security through no one is looking at me" to a company that operates in balance with business drivers and obligations to customers.

First, let’s cover some concepts:

VALUE OF VULNERABILITY -- The most important thing to understand about vulnerabilities is that they have value. Depending on several factors (the same ones your end users use to gauge risk: ease of exploit, how widely deployed the software is, remote vs. local, etc.), a vulnerability can be worth a great deal. When someone comes to you with vulnerability data, you should be grateful. There are a number of commercial entities that will purchase vulnerability data, and most of them will then pass it on to you. But there are a number of other avenues that a security researcher could take. Various agencies in the U.S. government will purchase vulnerability data, especially if it's been researched fully to the point of exploit. Other organizations around the world, both governmental and non-governmental, will also pay money for vulnerabilities.

Know that when I say pay money, I mean good money. (In a presentation to DojoSec about the Adobe JBIG2 issue, Matt Watchinski estimated the value of that bug at between 75 and 100 thousand dollars.) Here's a pro-tip: Head to Vegas and hit some of the late night DEFCON parties. Search through the crowd and find the group wearing the custom suits, nice watches and Italian shoes. Congrats, you've found some of the real deal. (By the way, don't discount the dude in the green shirt with green spiky hair...he's just keeping it on the down-low.) Here is my point: when you get vulnerability data, someone has given up what could be a substantial sum of money to provide you with that data. Treat them well. Be respectful, and when you announce the patch, put their name in it. They've done you and your customers a great service, and it costs you nothing more than having someone on your staff act like a half-way decent human being.

IMPACT OF 0-DAY -- I really believe that a lot of the software development community simply hasn't internalized what an 0-day means in terms of impact to customers. When there is an in-the-wild exploit of your software, you should turn all of your resources to alleviating that problem. Most operations folks flail when an 0-day comes out: no one will give them enough information to mitigate the issue, they are at the mercy of the vendor's patch cycle, and at the core of it all is a gaping hole into their data. Depending on your application, this could be state secrets, medical records, credit card numbers, etc. So when you sit in your meeting to figure out how to handle the mess that the failure in your secure software development lifecycle (you do have one of those, oh...please don't make Michael Howard cry...) has caused, keep in mind the real world impact (big or small...) that the problem is causing.

Now, I'm going to keep this simple. There are a ton of policies and spreadsheets and threat matrices and God knows what else that some CISSP somewhere will help you build. But here is how, at the absolute root of the issue, to handle security issues in your products:

CASE 1: You've been confidentially notified by a security researcher that there is a vulnerability in your product.

RESPONSE:

First, thank the researcher and give him a point of contact on your security team to discuss the issue. As soon as your triage of the bug is done, notify the researcher of the timetable to patch. Feel free to put in whatever caveats you need to, but keep that communications channel open. You never know when it will be useful to you.

Second. Don't tell anyone who doesn't need to know anything about the vulnerability. Configure your bug handling software to have a security flag that allows viewing only by a dedicated security response development team. If you have an external bug tracking system, immediately remove any information about the bug. Many of you might wonder why this is...especially after I've repeatedly demanded that as much information as possible be given to operations folks. But here is the deal...as soon as any information about an unpatched vulnerability is out, attackers will start the process of converting that information into a working exploit.

For example, when the Adobe JBIG2 vulnerability came out, Sourcefire was only aware that there was an issue in PDF files that had JBIG2 tags. So Lurene and Matt set the fuzzer on it, starting at the JBIG2 tag. It took less than 10 minutes to find the vulnerability, and a short time later we understood the execution path. We're good...but so are the attackers. So until such time as there is information in public, lock down any info you have.

Third. Triage appropriately and patch as soon as is reasonable. I'm not suggesting that you bend over backwards to get a patch out tomorrow for undisclosed bugs. Keep an eye on your security feeds, notify your security vendor partners to be on the lookout for particular strings or conditions and get to work on patching within your normal development framework. Just ensure that you have a process in place to expedite the patch process if the vulnerability becomes actively exploited or the vulnerability details are released.

A quick note on "security vendor partners". If you, as a company, are truly concerned about the operational impact of vulnerabilities in your software on your customers, you should create relationships with security vendors NOW so that they can help protect your customers when vulnerabilities arise. Contact IPS/HIDS/SIM/AV vendors so that when you are under the gun, you have both resources you can reach out to and the ability to tell your customers that while you are working on the patch, their security vendors can provide coverage to protect them. This goes a long way towards making security operations people feel better about your product. Microsoft's MAPP program (http://www.microsoft.com/security/msrc/collaboration/mapp.aspx), in which Sourcefire participates, is an excellent example of this kind of program.

Finally, deliver the patch. When delivering the patch, remember to credit the researcher and, if appropriate, his or her company. Also, understand that once you deliver the patch, sufficient data is now in the public arena to create an exploit. So provide as much data as you can so operations folks can make an intelligent decision on how quickly to roll out the patch, and what mitigations are available prior to that patch roll out. Remember that, for better or worse, if there is no active exploit, many organizations require extensive testing prior to a patch roll-out, so give the ops team options to handle things prior to patching.

CASE 2: Information has been publicly disclosed regarding a vulnerability, but you have seen no active attacks.

RESPONSE:

This is the tricky one, and is one of the reasons you need to have a dedicated team with the appropriate skill set to handle security issues. First, you need to interpret the level of detail that has been provided to the public. If there is a Metasploit module, just skip straight down to active attacks, because it's on like Donkey Kong (TM, or something). If it is an announcement to bugtraq or full-disclosure, have someone who understands exploitation look at the data and help you determine the risk. If you are completely clueless on exploit development (and really, you have to be willing to admit this) reach out to your security vendor partners for assistance.

Consider reaching out to the original poster of the data. Yes, this can hurt your pride, but remember that your company has caused (minor|moderate|severe) issues and you should feel some obligation to suck it up and fix it. Remember, at this point few people are really going to beat you up over the fact that your code has bugs (whose doesn't?); it's how you respond that will define how you are perceived.

Now you've got enough data to make some decisions. Based on that data, and your policies, pick from the following:

1) If you feel that the level of disclosure is such that it creates a near-immediate threat to your customers, you need to skip immediately to the active exploit case.

2) Announce the fact that you are aware of the vulnerability and that a patch is in the works. Give a set of mitigation options for people to use until a patch comes out. Provide no unnecessary data. Engage your security vendor partners.

3) Announce the fact that you are aware of the vulnerability and that a patch is in the works. Provide all the technical details you can including locations of exploit code, how error logs look when the exploit is attempted, mitigation strategies and suggested detection techniques. Engage your security vendor partners.

Number three will probably look pretty extreme to some folks, particularly of the severe white-hat persuasion. But let me tell you a secret: There are no secrets. If the vulnerability is out there, you and every ops group on the planet are in a race against the hackers. In all likelihood, in some areas, you've lost the race already. But either way, the only fair way to treat customers when the threat is imminent is to give them every tool they can put in place to weather the storm while you get your patching in order.

I'm going to bang on Adobe pretty hard here in a minute, so let me point out something they did that was actually close to right. There is exploit code available for Adobe Illustrator CS3 and CS4 for an .eps file parsing overflow. It has not, to the best of my knowledge, been seen in the wild, but it would be an excellent targeted attack vector. Adobe announced that it was aware of the issue, gave a date for the patch and advised that customers not open .eps files from untrusted or unknown providers.

Now, while I would love a better set of mitigations, I'm not sure what you could do in Illustrator. But I have enough information at this point to write mail filters, IDS rules and log parsers looking for .eps transfers. (You can find the notification here: http://www.adobe.com/support/security/advisories/apsa09-06.html)

All that being said, here is why I said "close to right". There is publicly available exploit code that includes the Metasploit windows/adduser and windows/shell_bind_tcp shellcodes. You can find that code here: http://www.exploit-db.com/exploits/10281. This is important because it allows you to write much more specific detection, it allows you to test your detection with actual exploits, and the mere fact that it exists tells you that there is a true threat to you if you use Illustrator and .eps files. Adobe did not provide a link to the exploit code, which would be very useful to organizations with advanced security practices. Adobe did not even disclose the existence of exploit code (which changes the response priority in most organizations).

CASE 3: You discover that a previously unknown vulnerability is being exploited in the wild.

RESPONSE:

First, have a meeting and explain to everyone that your company has screwed up. Say it in just about those terms, and get people in the mindset that you have an obligation to your customers. Make them understand that there are companies that will be adversely affected, perhaps dramatically, by the vulnerability in the software that your company makes.

Second, notify your dedicated security development team that they will now be earning their money. Overtime as necessary until it is resolved. Notify your QA team that they will have to put on hold whatever they are doing and prep for testing of a patch. Notify your web and infrastructure team that you will be having an out-of-band patch and they need to be ready whenever the patch is ready to publish.

Third, immediately notify your customers that there is an active exploit. Give every possible piece of data you have. Exploit code, sample files, log snippets, IP addresses of known attackers...everything. You owe them every piece of information you can possibly give them while your team slaves away on a patch. Now is not the time to be sly or hide behind your website. Get out there and earn the trust of your customers.

OK...now is Adobe smashing time. Let me tell you the absolute WRONG way to go about it. I'll use Adobe twice, because they've screwed up at least twice this year, and they are dead-center of the client-side bullseye right now.

First, with the JBIG2 problem, it was abundantly clear that they simply didn't have a plan in place for handling active exploitation. The vulnerability, which was very, very easy to take to exploitation, was first announced publicly sometime around the 19th of February by ShadowServer (http://www.shadowserver.org/wiki/pmwiki.php/Calendar/20090219). Also, it was noted that this vulnerability was actively exploited in the wild. It became apparent that this had been exploited since at least January 1st (we're thinking earlier...which is a knock against anti-virus vendors (us included) for not finding it earlier). We also found out that Adobe was notified by McAfee around January 11th of the problem. Long, sad story short, no patch was issued until March 23rd, and then only for one version of Reader and no versions on the Unix platform.

A second, actively exploited 0-day was found in December. This time Adobe released a security advisory with well documented mitigation techniques (find the advisory at http://www.adobe.com/support/security/advisories/apsa09-07.html) including some accurately worded (reduced risk...) information about the role of DEP in the exploit.

Now, I missed this on the first pass through the bug, as I assumed the JavaScript Blacklist Framework technote was their standard blurb about the Blacklist. But the technote is actually where most of the data I would want to have from an ops side is located, including the name of the function with the problem (DocMedia.newPlayer). With the information in the technote, more accurate mitigation is available. I would have liked to have had that on the front page, but a diligent ops guy would find it. (Find the technote at http://kb2.adobe.com/cps/532/cpsid_53237.html)

The real issue, though, is that the patch would not be available until January 12th. For organizations where the JavaScript Blacklist Framework wasn't an option (groups running non-Windows systems, Windows users who don't have the ability to fix up all of their end-user registry settings, or home users who most likely have no idea there is a threat at all), this is a period with very little in the way of mitigation. Microsoft patches every month and still issues out-of-band patches when there is a significant 0-day threat. Why can't Adobe break into its quarterly patch cycle for the same reason?

Computerworld seemed to think this was a problem also, and had an article entitled "Adobe explains PDF patch delay" (http://www.computerworld.com/s/article/9142479/Adobe_explains_PDF_patch_delay). The first paragraph pretty much sums up Brad Arkin's (Adobe's director for product security and privacy) explanation of the delay:

"Adobe chose to wait until mid-January to patch a critical PDF bug because issuing an emergency update would have disrupted its quarterly security update schedule, the company said today." -- Computerworld's Gregg Keizer, December 18, 2009

Unacceptable. Seriously, there is no other way to describe this situation. You cannot justify delaying the patch for an actively exploited 0-day vulnerability because "it would disrupt the quarterly update schedule". I invite you to read the article to get Brad Arkin's exact wording. Part of his justification was that they had deployed the JavaScript Blacklist Framework, which, if you were at the correct version, would allow you to lock down JavaScript enterprise-wide.

What is so important in the quarterly patch that it can't be put off? The obvious answer is the Adobe Illustrator bug with public exploit code...maybe that should be rolled into the out-of-band patch, don't you think? If this weren't a blog that represents a group of people that are part of a company, I'd say this more plainly, but here is my final thought on Adobe over the past year:

WHAT THE HELL?

Look, Adobe has been doing better lately, but I think they still have a ways to go.

CONCLUSION

If you skipped to the conclusion (and who could blame you, it was almost too long, did not write...), let me lay it out for you:

1) Have a plan

2) If an exploit is underway, give customers every piece of information you have and patch as quickly as you possibly can.

3) If no one knows but you, keep your mouth shut while you patch.

4) Treat your security researchers well, they are doing you a huge service.

5) Know that you have an obligation to your customers to protect them from vulnerabilities in your software; give them every tool and piece of information possible to protect themselves while you code the patch.

Not surprisingly, the VRT is part of the Sourcefire vulnerability response process. We really do try to follow the ideas I've expressed here. Hell, we even credited Neel Mehta (an ISS employee, you know...one of our competitors) for the work he did on the Back Orifice buffer overflow, which we patched, tested and released a fix for within 80 hours of learning of the problem.

We need to demand more from our software vendors (including Sourcefire, keep us honest). Use whatever leverage you have (and it is pitifully small in many cases) to get a better response from your software providers.

As always, I welcome your comments in the comments field below.

Thursday, December 17, 2009

DEP and Heap Sprays

Usually when you need to use a heap spray, you're SOL when it comes to DEP. The reason for this has to do with why you used the heap spray in the first place. In the case of a vtable overwrite you need a chain of pointers to get the job done. A neat way to deal with this is to find an address (like 0x0c0c0c0c) that has a few special properties: it's the same byte over and over, it generally executes as a NOP, and with a spray you can fill memory up to and including that address. If you can't execute on the heap, however, due to hardware DEP, the fact that it's a NOP doesn't really help you, does it?

Normal vtable Overwrite Crash

mov edx, [eax]     <- Crash occurs here
mov ecx, eax
call dword ptr [edx+4]

In the above example, we control EAX. This means that in order to get execution, we need EAX to be a pointer that points to another pointer (to be placed in EDX), which in turn points to code we wish to execute. None of these pointers can be volatile for this to work. In this case we can use a heap spray of 0x0c0c0c0c to satisfy our requirements. If a heap spray is not possible, we will need to find a real pointer chain. In order to find such pointers you can make use of the byakugan functionality !jutsu searchvtptr.

!jutsu searchvtptr 4 call [eax+4]

If, however, the vtable control isn't control of the pointer to the table, but of the table itself (as may be the case in, say, a use-after-free vuln where you have strong heap control and grooming ability), then you only need to find a ptr->ptr and the heap spray isn't needed as a NOP sled. You'd only use the heap spray to ensure that despite movement of data, you still hit executable code. The sled will contain a list of return addresses only now. This will require the pointer you choose to be a multiple of 4, so that the return addresses are aligned properly. 0x0c0c0c0c is still a good bet for this, but we simply won't fill the heap spray with 0c; we'll instead use a pointer that we'll find in just a moment.
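
To make the two sled layouts concrete, here's a minimal Python sketch of how the sprayed blocks might be built. The block size and the pointer value are made-up placeholders; this just illustrates the byte layout and the alignment requirement, not a working exploit.

import struct

SPRAY_BLOCK = 0x10000  # hypothetical size of each sprayed allocation

# Classic vtable-overwrite spray: fill with 0x0c0c0c0c so that reading a
# pointer from address 0x0c0c0c0c yields 0x0c0c0c0c again (the chain is
# self-satisfying), and the same bytes act as a sled if you can execute them.
classic_sled = struct.pack("<I", 0x0c0c0c0c) * (SPRAY_BLOCK // 4)

# DEP-friendly variant for the "you control the table itself" case: fill the
# block with copies of a non-moving ptr->code value (placeholder below, this
# is the sort of thing you'd dig up with byakugan), so wherever 0x0c0c0c0c
# lands inside the block it falls on an aligned copy of that pointer.
PTR_TO_CODE = 0x7c80abcd  # placeholder address, not a real gadget
ptr_sled = struct.pack("<I", PTR_TO_CODE) * (SPRAY_BLOCK // 4)

# The target address has to be a multiple of 4 for the second variant, or
# the call lands in the middle of a packed pointer.
assert 0x0c0c0c0c % 4 == 0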

Full vtable Pointer Control Crash

mov edx, [eax]     
mov ecx, eax
call dword ptr [edx+4]    <- Crash occurs here

In this example, we have control over the actual function pointers, rather than the pointer to the table of function pointers. We no longer need a ptr -> ptr -> ptr, we only need a pointer to a pointer to executable code that doesn't move. In this case we can use a heap spray of library addresses and eschew execution on the protected heap.

So, assuming we have control of data on the heap that a register points to, and also perfect control of EIP, we can use a technique which "flips" the heap to the stack to get execution on DEP systems. What we'd do is find an instruction in memory that won't move around and that moves that register into ESP, then returns. On non-ASLR systems such as XP, this is a simple matter. By placing this address in EIP, we will effectively make the heap (which we fully control in this case) become the stack. With stack control, we can return to already mapped library code which also will not move, allowing us to make arbitrary system calls, with arbitrary arguments.

Flipping the Heap to the Stack

mov esp, eax

If EAX points to a structure on the heap which you control, then the above code will make that controlled memory become the effective contents of the stack. From here you will be able to begin a chained ret2lib style payload with arbitrary arguments. You can easily find non-moving return addresses like this code with byakugan's searchOpcode functionality.

!jutsu searchOpcode mov esp, eax

From here, you can go the hard route: mapping pages, re-protecting them, copying your shellcode to them, then returning to that newly mapped space, which is of course mapped executable. The alternative is to just jump to the "turn DEP off" function (NtSetInformationProcess) and return to your new stack. (See Uninformed v2 article 4, "Bypassing Windows Hardware-enforced DEP", for details: http://www.uninformed.org/?v=2&a=4&t=txt)
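
As a rough illustration of what ends up in the controlled heap buffer once you pivot, here's a Python sketch that packs a minimal fake stack. Every address below is a placeholder; on a real target you would pull them out of the non-relocated libraries, and the argument layout depends entirely on which function you return into.

import struct

def p32(value):
    # Pack a 32-bit little-endian value, the way it sits in memory on x86.
    return struct.pack("<I", value)

# Placeholder addresses only; none of these are real gadgets.
PIVOT_GADGET  = 0x41414141  # address of a "mov esp, eax ; ret" sequence
FIRST_RETURN  = 0x42424242  # e.g. a routine that flips DEP off for the process
SECOND_RETURN = 0x43434343  # where execution continues afterwards
SPRAYED_CODE  = 0x0c0c0c0c  # sprayed shellcode / pointer sled location

# The buffer EAX points to becomes the stack after the pivot, so it is laid
# out like a stack frame: the first dword is popped as the return address,
# and the following dwords are its return address and arguments.
fake_stack  = p32(FIRST_RETURN)
fake_stack += p32(SECOND_RETURN)
fake_stack += p32(SPRAYED_CODE)   # argument / eventual jump target
fake_stack += p32(0x00000000)     # filler argument, target dependent

# EIP gets pointed at PIVOT_GADGET; after "mov esp, eax; ret" executes,
# fake_stack is what the CPU pops its next return address from.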

If a vendor tells you that DEP will protect you from a vulnerability, do not assume that your hardware will protect you. Perform other mitigations as well.

Sourcefire VRT Labs

We are opening the Sourcefire VRT Labs for business. We've had a few useful things floating around in the jungle for a while now and we decided to make everything available, in one place, for everyone to use. Right now, Labs has a few resources on it we thought folks might find useful, such as Brian's Shared Object Rule Generator, Lurene's AWBO exercises and PE-Sig (also written by Brian).

We have a lot more material and tools to add over the coming months, and shortly we will start a series of blog posts on the SO rule generation tool, which will walk you through using it and what to do with the source code that gets generated. We hope you will find it useful, and that you will more easily be able to write your own custom SO detection rules.

We would also welcome your input on the site and what content you would like to see on there.

You can find it here: http://labs.snort.org/ and when the SO rule blog posts start in January we will be announcing the official availability of the site via the Snort mailing lists. Stay tuned, there is a lot more to come.

Tuesday, December 15, 2009

Adobe Reader media.newPlayer() Analysis (CVE-2009-4324)

First off, it's not Friday, and hopefully you'll have a better weekend. The reason for that is you are set with rules and clam sigs.

Now what the heck am I talking about….

Last night Adobe released an advisory detailing an in the wild exploit for Adobe Acrobat that is currently circulating in a number of places. Due to all the confusion and hype last time around with the famous JBIG2 vulnerability, we figured we'd take a deep dive into the specifics surrounding this vulnerability so everyone can better understand what it is doing and how to protect yourself against it.

First off, the executive summary for those who don't have a lot of time:
  1. The in the wild exploit is detected by both our Snort signatures and the ClamAV signatures (SIDs 16333 and 16334, ClamAV Exploit.PDF-4619 and Exploit.PDF-4620)
  2. Disabling javascript actually neuters the exploit this time, as this bug exists in the javascript module
  3. Analysis of the in the wild exploits and malware seems to indicate that multiple people have this information and are using it to attack organizations.
  4. The following sites have additional information on the attack: http://www.shadowserver.org/wiki/pmwiki.php/Calendar/20091214
    http://extraexploit.blogspot.com/search/label/CVE-2009-4324
  5. Enabling DEP will stop the in the wild samples we've seen, but is not foolproof


If you're interested in the details of the bug to better defend yourself and recognize samples, here are the basics. The bug is contained in the doc.media object and is triggered when sending a null argument to the newPlayer() method like so.
try {this.media.newPlayer(null);} catch(e) {}
If I had to guess, I'd say this makes use of a vtable pointer that hasn't been initialized, due to a use after free issue. The sample in the wild makes use of util.printd to create an object of the same size as the media object, which is allocated to the same spot. Then, when the vtable is loaded, the data from the printd is used instead. The printd is what is used in the wild, but probably isn't the only way to get here.

Basic detection can be developed from the following:
try {this.media.newPlayer(null);} catch(e) {}
util.printd("12345678901234567890123456 : 1231234", new Date());
Luckily for defenders, the attacker assumes that this is enough to get reliable execution; however, if there are chunks of that size in the lookaside list, they might be used for the printd allocation rather than the addresses the attacker requires to be here. We have not seen any in the wild samples that compensate for this problem.

Once the vtable is controlled, it is possible to make use of a standard JavaScript heap spray to ensure that the pointer you control goes somewhere useful. Currently, this approach does not work on machines with DEP on; however, there are techniques to circumvent it.

From the number of blogs and other messages we've seen surrounding this issue, we know a lot of people have the samples and are actively working on both protections and exploits for this vulnerability. This is something to tend to immediately.
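
As a toy illustration of how the two strings shown above can be turned into a check, the sketch below just scans files for them. This is deliberately naive and entirely my own example, not the logic of the released signatures; real samples usually bury the JavaScript inside encoded PDF streams, which is exactly why the Snort and ClamAV signatures listed earlier do considerably more work.

import sys

# Markers taken from the trigger and grooming code shown above. A real
# detector has to decompress/decode PDF streams before looking for these;
# this toy only catches completely unobfuscated samples.
MARKERS = [b"media.newPlayer(null)", b"util.printd"]

def looks_suspicious(path):
    with open(path, "rb") as f:
        data = f.read()
    return all(marker in data for marker in MARKERS)

if __name__ == "__main__":
    for pdf in sys.argv[1:]:
        print(pdf, "SUSPICIOUS" if looks_suspicious(pdf) else "ok")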

Rule release for today - December 15th, 2009

More problems with Adobe Acrobat and Acrobat Reader via the media.newPlayer function. Couple of rules to cover it, check here: http://www.snort.org/vrt/advisories/2009/12/15/vrt-rules-2009-12-15.html for more details and changelog.

Monday, December 14, 2009

Operation: Don't Tell Lurene We're Working On This

If you've been following this blog for a while, you might have noticed that Lurene only shows up when there is evil to be done. This is why she is here; she's really, really good at it. She is also the analyst team lead and makes sure we are all keeping the fuzzers running, studying emerging exploit techniques and generally getting up to no good.

But recently, in talking to some folks, we've become aware that there are some edge-case detection things that folks are looking for solutions for. So we're (trying) to break off some time to do some research on weird detection needs some of you have. Most of these seem to be CPU intensive efforts where it would be difficult to keep the Snort engine running at line speed. Some involve processing traffic in a different way or generating a batch of rules up front after some specific calculations.

These requests come mostly from organizations with an established security team with excellent reverse engineering and forensics skills, looking for help with problems specific to their environment and the threat it faces. But I would think there are a lot of you out there that might want to leverage Snort and its optimized search engine, rules language and SO capability to do very specific detection, where you aren't as concerned about overall performance.

My recommendation, if you wanted to work on edge case detection, would be to use the SO rules language (blog posts coming soon, I swear) and place them on a dedicated IDS sensor. Use the optimizations in snort: fast-pattern matching, normalized, decoded buffers and the snort rules language as a start, and then add to that using C. Prototype your detection and don't worry about performance impact.

With the caveat that you may never see one line of code from us on these items, here are some of the things we're kicking around:

  • File parsing in Snort
  • Writing rules directly into the detection structures based on specific detection needs
  • Looking at detection under various assumptions (no frags, no reassembly needed, etc.)
  • Working on the ability to shunt traffic offline for "near real-time" detection (for heavy duty detection that would never be able to happen at line speed)

So here's what I want to know: If you didn't have to worry about line speed, false positives, false negatives, reassembly, throughput, buffering or any of the other factors that impact the balance between detection and performance, what would you do? What would you want to see?

Drop us a note at [email protected], or leave a comment below. If necessary, our pgp key can be found here: http://www.snort.org/vrt/vrt-pgp-public-key/

Friday, December 11, 2009

I hope you're happy Bejtlich...you cost me a ton of sleep

So after two days of getting up at the crack of dawn, having to deal with other VRT folks before they've had their coffee, and then driving through commuter traffic and getting on the Metro, I came home from the SANS Incident Detection Summit completely exhausted. But as my head hit the pillow my brain was working overtime and at full capacity, trying to process all of the ideas, opinions and tools that came up at the conference. This led to a night of restless sleep as my brain would not stop turning over ideas and to-do lists that were generated by the conference. As far as I'm concerned, that was the most useful conference I've ever attended.

Before I get to the talks, let me talk about the audience. I wish I could have trapped them all in a room and just talked for hours. The ones I did get to chat with were knowledgeable, were brimming with high-end problems and high-end ideas, and were completely willing to talk your ear off about what they had done, what they needed and what they were worried about. Anyone who was at the conference that I missed, get a hold of me; I'd love your thoughts.

The talks...because of traffic issues, we missed the early part of day one. Now, I'll be honest: my favorite part of day one was participating in the two panels I was on and yelling at a room full of people about my crazy ideas. Yeah, I have opinions. But one of my main points was the importance of generating in-house data, and the CIRT/MSSP talk and the commercial security intelligence talks were very interesting.

Day two, in my mind, really took it up a notch, but that may be because I was forced (for the most part) to shut up and listen instead of flapping my pie hole. Right off the bat was easily the best talk of the conference (even better than my rants!): Aaron Walters and Brendan Dolan-Gavitt's review of the Volatility Framework, a memory forensics tool. I was really impressed by the technology and felt that it would be very useful to some of our in-house research projects.

Another project that has long been on my radar is the Honeynet Project, and Brian Hay was there from the University of Alaska Fairbanks. I got to chat with him after the talk and that generated a ton of ideas.

The day was really packed, and it ended strong. Michael Cloppert moderated the Noncommercial Security Intelligence Service Providers panel, which also ended up in a number of post-talk chats on various topics. I was disappointed that Team Cymru's representative, Jerry Dixon, was unable to be there. They do a lot of work that I've used over the years.

The very last panel was on Commercial Host-centric Detection and Analysis Tools. The topics ranged all over the map, and I couldn't help but chime in with a couple of questions. There have been a lot of developments in the advanced persistent threats space over the last year or so, and it was really informative to hear about what these guys have seen.

So here is the TL;DNR version:

  1. I like yelling at people about what I think
  2. You should never miss this conference if you're interested in incident detection
  3. Some of the best information happens when you trap the speakers after the talks
  4. I'm really tired right now

Wednesday, December 9, 2009

December 2009 Vulnerability Report

Sourcefire VRT Vulnerability Report December 2009 from Sourcefire VRT on Vimeo.




This month, Alain Zidouemba talks about Microsoft Patch Tuesday, Adobe patches and Google's DNS offering.

Tuesday, December 8, 2009

Microsoft Tuesday Coverage for December 2009

Six more advisories from Microsoft this month. Coverage is applicable for MS09-070, MS09-071, MS09-072, MS09-073 and MS09-074.

There's also a patch or two from Adobe this month. By our count, that's the third "quarterly" patch this quarter. We think we've spotted a trend.

Anyway, details are available here: http://www.snort.org/vrt/advisories/2009/12/08/vrt-rules-2009-12-08.html

Actual Conversation - botnets explained

[11:04] <[?] someone > Pusscat: basically im trying to walk an non-technical person though a simple irc bot
[11:04] <[?] someone > my goal was for my mom to be able to accurately describe a botnet
[11:04] <[?] someone > like code chunk - this is the c&c interface it blah blah
[11:04] < Pusscat> A loosing battle. Ur fightin' it.
[11:05] <[?] someone > haha
[11:05] <[?] someone > yeah
[11:05] <[?] someone > so i thought my mom understood what i did
[11:05] <[?] someone > no..oh no.
[11:05] <[?] someone > my internal work is basically read by our marketing group then goes on radio/interview type stuff
[11:05] <[?] someone > when he described my lab as a "malware zoo" my mom suddenly understood ish what i do
[11:05] <[?] someone > it was pretty funny
[11:06] <[?] someone> so i am trying to explain a bot to say an 8 year old
[11:06] <[?] someone > with code examples
[11:06] <[?] someone > hehe
[11:06] <[?] someone > its hard
[11:06] < Pusscat> do you need code?
[11:06] <[?] someone > really picking what to address it the hardest part
[11:06] <[?] someone > nah i have a great one
[11:06] < Pusscat> maybe you need just a good metaphor
[11:06] <[?] someone > very simple java bot so its easy to read
[11:07] <[?] someone > and it has a lot of cool features we only saw in bots like conficker or waladec last year
[11:07] <[?] someone > (damn anyone who tries to tell me they are the same - they are not!!)
[11:07] < Pusscat> like... bots are injuns, and the botmans are chiefs! and they tell the injuns what to do with smoke signals, and those smoke signals are c&c!
[11:07] <[?] someone > hahahaha
[11:07] < Pusscat> they use smoke signals since cowboys caint read 'em!
[11:07] <[?] someone > i should send my boss a draft like that
[11:07] < Pusscat> my job is to hunt injuns and learn smoke signals!
[11:08] <[?] someone > it would be great
[11:08] < Pusscat> no wai. Thats my blogpost now

Wednesday, December 2, 2009

Hand Parsing Packets for False Negative Glory

Yesterday, on the Snort-Sigs mailing list, we had a report of a potential false-negative in an older Snort rule. While he was unable to provide a full packet capture at the time, the author of the email was able to provide a copy-paste of the packet data. A lot of times, Alex Kirk takes point on these complaints, but he was still trying to catch up from his jaunt down to Brazil to speak at Hacker2Hacker. So I grabbed the data and worked on the issue. I thought it might be interesting for folks to know how we approach reports like this.

So the issue was with the following rule:

alert tcp $EXTERNAL_NET any -> $SQL_SERVERS 1433 (msg:"SQL SA bruteforce login attempt TDS v7/8"; flow:to_server,established; content:"|10|"; depth:1; content:"|00 00|"; depth:2; offset:34; content:"|00 00 00 00|"; depth:4; offset:64; pcre:"/^.{12}(\x00|\x01)\x00\x00(\x70|\x71)/smi"; byte_jump:2,48,little,from_beginning; content:"s|00|a|00|"; within:4; distance:8; nocase; reference:bugtraq,4797; reference:cve,2000-1209; reference:nessus,10673; classtype:suspicious-login; sid:111113543;)


And the attack pcap is as follows:

0000 00 14 bf 52 fe 40 00 d0 2b 77 75 01 08 00 45 20 ...R.@.. +wu...E
0010 00 bc 1e 56 40 00 6c 06 xx xx 79 0b 50 ce xx xx ...V@.l. xxy.P.xx
0020 xx 7a 08 2b 05 99 a4 51 cc 4d b1 be 2b 43 50 18 xz.+...Q .M..+CP.
0030 ff ff 3d 81 00 00 10 01 00 94 00 00 01 00 8c 00 ..=..... ........
0040 00 00 01 00 00 71 00 00 00 00 00 00 00 07 d0 19 .....q.. ........
0050 00 00 00 00 00 00 e0 03 00 00 20 fe ff ff 04 08 ........ .. .....
0060 00 00 56 00 06 00 62 00 02 00 66 00 01 00 68 00 ..V...b. ..f...h.
0070 00 00 68 00 0e 00 00 00 00 00 84 00 04 00 8c 00 ..h..... ........
0080 00 00 8c 00 00 00 00 1c 25 5b 6f ff 00 00 00 00 ........ %[o.....
0090 8c 00 00 00 44 00 57 00 44 00 57 00 34 00 44 00 ....D.W. D.W.4.D.
00a0 73 00 61 00 b3 a5 xx 00 xx 00 2e 00 xx 00 xx 00 s.a...x. x...x.x.
00b0 xx 00 2e 00 xx 00 xx 00 xx 00 2e 00 31 00 32 00 x...x.x. x...1.2.
00c0 32 00 4f 00 44 00 42 00 43 00 2.O.D.B. C.


So, the first thing I wanted to do was to take a quick look see to check if the packet should alert. It was kind of sloppy (this cost me some time), but here is what I did:

Looking at the rule, it requires content:|10| at depth 1. As it turns out, there is only one 0x10 in the pcap, so I just assumed this was the beginning of the packet payload (lazy). I was right. So I took each portion of the detection in the rule and laid it out and compared it to the packet:

Original packet data, serialized:

10 01 00 94 00 00 01 00 8c 00 00 00 01 00 00 71 00 00 00 00 00 00 00
07 d0 19 00 00 00 00 00 00 e0 03 00 00 20 fe ff ff 04 08 00 00 56 00
06 00 62 00 02 00 66 00 01 00 68 00 00 00 68 00 0e 00 00 00 00 00 84
00 04 00 8c 00 00 00 8c 00 00 00 00 1c 25 5b 6f ff 00 00 00 00 8c 00
00 00 44 00 57 00 44 00 57 00 34 00 44 00 73 00 61 00 b3 a5 xx 00 xx
00 2e 00 xx 00 xx 00 xx 00 2e 00 xx 00 xx 00 xx 00 2e 00 31 00 32 00
32 00 4f 00 44 00 42


content:"|10|"; depth: 1;
10

content:"|00 00|"; depth: 2; offset: 34;
00 00

content:"|00 00 00 00|"; depth: 4; offset: 64;
00 00 00 00

pcre:"/^.{12}(\x00|\x01)\x00\x00(\x70|\x71)/smi";
10 01 00 94 00 00 01 00 8c 00 00 00 01 00 00 71

byte_jump:2,48,little,from_beginning;
62 00 [Read little endian, decimal: 98]

content:"s|00|a|00|";
44 00 57 00
"D" 00 "W" 00

So, a note. I totally messed the last match up, because I failed to notice that the content match had a within:4; distance:8; set of modifiers. So at this point, I thought there was a problem with the rule, and I decided to hand decode the pcap. Nothing says dedication like hand decoding packets in vi, but I was free for a while, and for some reason very motivated to nail down the issue. The original author was actually very awesome in this regard, because he provided an excellent link to a reference that details the protocol; you can find it at http://www.freetds.org/tds.html#login7.

So...at first I didn't know what the first 8 bytes were, so I cleverly wrote:

10 01 00 94 00 00 01 00 I have no idea what this does

This is fine, you don't have to know everything, but don't forget that you don't know it, because if you get stuck later, it's an avenue to explore. Then I got down to actually working on the decoding of the login data. Here is the full decode that I did:

[Login Packet Decode]
Total Packet Size [4]: 8c 00 00 00 4
TDS Version [4]: 01 00 00 71 8
Packet Size [4]: 00 00 00 00 12
Client Version Program [4]: 00 00 00 07 16
PID of Client [4]: d0 19 00 00 20
Connection ID [4]: 00 00 00 00 24
Option Flags 1 [1]: e0 25
Option Flags 2 [1]: 03 26
Sql Type Flags [1]: 00 27
reserved flags [1, mbz]: 00 28
time zone [4]: 20 fe ff ff 32
Collation Info [4]: 04 08 00 00 36
Position of client hostname [2] 56 00 [86 decimal] 38
Hostname length [2] 06 00 40
Position of username [2]: 62 00 [98 decimal] 42
Username length [2]: 02 00 44
Position of password [2]: 66 00 [102 decimal] 46
Password length [2]: 01 00 48
Position of app name [2]: 68 00 [104 decimal] 50
Length of app name [2]: 00 00 52
Position of server name [2]: 68 00 [104 decimal] 54
Length of server name [2]: 0e 00 56
Int16 [2, mbz] [2]: 00 00 58
Int16 [2, mbz] [2]: 00 00 60
Position of library name [2]: 84 00 [132 decimal] 62
Length of library name [2]: 04 00 64
Position of language [2]: 8c 00 [140 decimal] 66
Length of language [2]: 00 00 68
Position of database name [2]: 8c 00 [140 decimal] 70
Length of database name [2]: 00 00 72
Mac address of the client [6]: 00 1c 25 5b 6f ff 78
Position of auth portion [2]: 00 00 80
NT Auth Length [2]: 00 00 82
Next position [2]: 8c 00 [140 decimal] 84
Int16 [2, mbz]: 00 00 86
Hostname [n(6)]: 44 00 57 00 44 00 57 00 34 00 44 00 98 (DWDW4D)
Username [n(2)]: 73 00 61 00 102 (sa)
Password [n(1)]: b3 a5 104 (encrypted)
Server Name [n(14)]: xx 00 xx 00 2e 00 xx 00 xx 00 xx 00 2e 00 xx 00 xx 00 xx 00 2e 00 31 00 32 00 32 00 132 (xx.xxx.xxx.122)
Library Name [n(4)]: 4f 00 44 00 42 00 43 00 140 (ODBC)
So the numbers on the far right are a running count of the offset from the beginning of the TDS Login Packet data fields. I did this because all of the provided offsets (Position of...) are in relation to the beginning of the Login Packet fields, and if I have to continuously recalculate where I am I will eventually screw it up. So I do a little extra work to be sure I know what I'm looking at.
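
If you'd rather not keep the running count in your head, a few lines of Python reproduce the same bookkeeping. The payload below is copied from the serialized dump earlier (truncated after the unicode "sa"), the field offsets come straight from the decode above, and the decode("utf-16-le") calls are just my shorthand for the UCS-2 strings.

import struct

# Payload from the serialized dump above, truncated after the "sa" username.
payload = bytes.fromhex(
    "10010094000001008c0000000100007100000000000000"
    "07d019000000000000e003000020feffff040800005600"
    "060062000200660001006800000068000e000000000084"
    "0004008c0000008c000000001c255b6fff000000008c00"
    "000044005700440057003400440073006100"
)

TDS_HEADER_LEN = 8                 # fixed-size TDS header before the login fields
login = payload[TDS_HEADER_LEN:]   # everything below is relative to this point

# Position/length pairs, using the offsets from the running count above.
host_pos, host_len = struct.unpack_from("<HH", login, 36)   # 86, 6
user_pos, user_len = struct.unpack_from("<HH", login, 40)   # 98, 2

# Positions are also relative to the login fields; strings are 2 bytes per char.
hostname = login[host_pos:host_pos + host_len * 2].decode("utf-16-le")
username = login[user_pos:user_pos + user_len * 2].decode("utf-16-le")

print(hostname, username)   # DWDW4D sa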

Next I recheck the snort detection methodology using the decoded information so I understand what it is that the rule is trying to do. When I do this, I finally notice that the last content match actually has additional modifers to it. As I review each check, I make notes next to the checks to tell me what was going on:
content:"|10|"; depth: 1;                          [Not immediately apparent what this is, as it is part of the undescribed header]
content:"|00 00|"; depth: 2; offset: 34; [Checking the Sql Flags and the Reserved flags are 00 00]
content:"|00 00 00 00|"; depth:4; offset:64; [Checking the 4 must-be-zero bytes at offset 58 and 60]
pcre:"/^.{12}(\x00|\x01)\x00\x00(\x70|\x71)/smi"; [Verifying that we have an appropriate version field at offset 8]
byte_jump:2,48,little,from_beginning; [Grab the position-of-username field at offset 48 and jump that many bytes from the beginning of the packet; the value here is 62 hex, 98 decimal]
content:"s|00|a|00|"; within:4; distance:8; [Check for the username "sa", adjusting for the 8 byte header]
OK, now we're working, but I want to know what the |10| is for, so I look around and find the TDS header specification a little above the login spec, so I take a moment and break that down:
[TDS Packet Header]
Packet type: 10 (TDS 7.0 login packet)
Last Packet indicator: 01
Packet Size: 00 94
Unknown: 00 00 01 00
So now I can describe in plain language what the rule is trying to do. First, check to make sure this packet is a TDS login packet (as it turns out, 10 is valid for 7.0 and 8.0). Then check to make sure that fields that are known to be set as |00| for TDS login packets are indeed set to |00|. This ensures that we are looking at the correct kind of packet, by verifying that these structures are located in the correct place. Now that we've done some basic checking, we have enough information to justify calling the PCRE engine and checking that the version is set correctly. Observant readers will note that the PCRE allows for four variants, but the only valid variants are 00 00 00 70 and 01 00 00 71. The original rule writer determined that this was an acceptable false positive risk and chose to write the rule in this way. He could also have done a full four byte OR between the two values, but that doesn't substantially impact performance or accuracy, so I'm not concerned with changing it now. Finally, you grab the offset to the Username field. This is at a known location 48 bytes from the beginning of the payload. We know that this field is 2 bytes long, and written in little endian. We then move the DOE pointer that many bytes from the beginning of the payload. Finally, we check for the unicode string "sa" 8 bytes from where the DOE is located. We do this because we know that the TDS header is of a fixed size of 8 bytes, and all offset values are off by 8 when you calculate them from the beginning of the payload.
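
To convince myself the logic holds together, I find it handy to mock up the rule's checks outside of Snort. Here's a rough Python re-implementation (mine, not the Snort engine; the offset/depth and within/distance handling is simplified to the exact-position case this rule uses), run against the payload bytes built in the earlier sketch.

import re
import struct

def rule_matches(payload):
    # content:"|10|"; depth:1;
    if payload[0:1] != b"\x10":
        return False
    # content:"|00 00|"; depth:2; offset:34;
    if payload[34:36] != b"\x00\x00":
        return False
    # content:"|00 00 00 00|"; depth:4; offset:64;
    if payload[64:68] != b"\x00\x00\x00\x00":
        return False
    # pcre:"/^.{12}(\x00|\x01)\x00\x00(\x70|\x71)/smi";
    if not re.match(rb".{12}[\x00\x01]\x00\x00[\x70\x71]", payload, re.S):
        return False
    # byte_jump:2,48,little,from_beginning: read 2 bytes at offset 48 as
    # little endian and move the detect offset (DOE) there: 0x0062 = 98.
    doe = struct.unpack_from("<H", payload, 48)[0]
    # content:"s|00|a|00|"; within:4; distance:8; nocase: the unicode "sa"
    # has to sit exactly 8 bytes past the jump target.
    return payload[doe + 8:doe + 12].lower() == b"s\x00a\x00"

print(rule_matches(payload))   # True for the packet decoded above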

So now I know the detection should have triggered on this pcap. I also know that there is thresholding in the rule, and that there may be some issues with that. But I have a hard time checking that with just this pcap that isn't really a pcap. So I decide to decode the Ethernet, IP and TCP headers to ensure that they line up with the rule (basically checking that the dst port is 1433):
[Layer 2/3 HEADERS]
Eth
00 14 bf 52 fe 40 dst
00 d0 2b 77 75 01 src
08 00 type

IP
45 Ver 4, Header size
20 TOS
00 bc Total length
1e 56 ID
40 00 Flags and frag info
6c TTL
06 Protocol (TCP)
xx xx Checksum
79 0b 50 ce 121.11.80.206
xx xx xx 7a xx.xx.xx.122

TCP
08 2b src 2091
05 99 dst 1433 (Correct port for rule)
a4 51 cc 4d sequence number
b1 be 2b 43 ack number
50 18 hdr len/reserved/flags (ack/psh set, flags consistent with stream state)
ff ff window size
3d 81 tcp checksum
00 00 urgent pointer
A couple of notes on this. First, the dst port was indeed 1433, so we're good there. But I wanted to point out something the original author of the email did that was very, very clever and important. He was careful to obfuscate the destination IP address so we don't know what network or company we're discussing. But he went further than that, and also obfuscated the checksum field so we couldn't use that as a check as we tried to work out what the IP address was prior to obfuscation. Very nice. I also noticed that the source IP address was left in, so I had to whois it. Turns out it's an address in the Chinanet AS, so I'm assuming this is a live attack capture.
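
Incidentally, if hand-converting the hex gets old, the port check is a one-liner with Python's struct; the header bytes below are copied from the TCP portion of the dump above.

import struct

# First eight bytes of the TCP header from the dump: ports plus sequence number.
tcp_header = bytes.fromhex("082b0599a451cc4d")

sport, dport = struct.unpack(">HH", tcp_header[:4])
seq = struct.unpack(">I", tcp_header[4:8])[0]

print(sport, dport, hex(seq))   # 2091 1433 0xa451cc4d (dst 1433 matches the rule)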

So...now I've pretty much completely decoded the packet, and am pretty certain it isn't a problem with the rule. But I hit up our bug tracking system and searched for old bugs that involve this SID. I had noticed that the revision number of the rule was 4, so I was hoping there was some evolution of the rule that would indicate something that might help me, but all the modifications were either process driven (standardizing the order in which modifiers come after the content: option) or adding documentation. But there was a PCAP that was built by Alex Kirk when they were first doing research on it. So I grabbed it and tested it against every Snort rule we had:
Snort Test Suite v.0.3.0

Alerts:
1:3273:4 SQL sa brute force failed login unicode attempt Alerts: 93
1:3543:4 SQL SA brute force login attempt TDS v7/8 Alerts: 93
So the first thing that caught my eye here was that I had a new alert, so I grepped over the rules file to see what was going on there:
alert tcp $SQL_SERVERS 1433 -> $EXTERNAL_NET any (msg:"SQL sa brute force failed login unicode attempt"; flow:from_server,established; content:"L|00|o|00|g|00|i|00|n|00| |00|f|00|a|00|i|00|l|00|e|00|d|00| |00|f|00|o|00|r|00| |00|u|00|s|00|e|00|r|00| |00|'|00|s|00|a|00|'|00|"; threshold:type threshold, track by_src, count 5, seconds 2; reference:bugtraq,4797; reference:cve,2000-1209; reference:nessus,10673; classtype:unsuccessful-user; sid:3273; rev:4;)
OK, very cool. This indicated to me that I had an attack pcap against an actual server that responded correctly, so my confidence in the rule grew. Now I was wondering what impact the thresholding was having on the rule, so I copied the rule into my local.rules file. I then made a copy of the rule and removed the thresholding. This would allow me to see how many alerts of each were being generated. I used the local.rules file for two reasons. One, I was going to modify a rule, and I don't want to accidentally leave a non-published rule in my testing rule set, and two, it takes a long time to load up every snort rule, so I just load the two I want and things are much quicker. Here is what my local.rules looked like:
alert tcp $EXTERNAL_NET any -> $SQL_SERVERS 1433 (msg:"SQL SA brute force login attempt TDS v7/8"; flow:to_server,established; content:"|10|"; depth:1; content:"|00 00|"; depth:2; offset:34; content:"|00 00 00 00|"; depth:4; offset:64; pcre:"/^.{12}(\x00|\x01)\x00\x00(\x70|\x71)/smi"; byte_jump:2,48,little,from_beginning; content:"s|00|a|00|"; within:4; distance:8; nocase; threshold:type threshold, track by_src, count 5, seconds 2; reference:bugtraq,4797; reference:cve,2000-1209; reference:nessus,10673; classtype:suspicious-login; sid:3543; rev:4;)

alert tcp $EXTERNAL_NET any -> $SQL_SERVERS 1433 (msg:"SQL SA brute force login attempt TDS v7/8"; flow:to_server,established; content:"|10|"; depth:1; content:"|00 00|"; depth:2; offset:34; content:"|00 00 00 00|"; depth:4; offset:64; pcre:"/^.{12}(\x00|\x01)\x00\x00(\x70|\x71)/smi"; byte_jump:2,48,little,from_beginning; content:"s|00|a|00|"; within:4; distance:8; nocase; reference:bugtraq,4797; reference:cve,2000-1209; reference:nessus,10673; classtype:suspicious-login; sid:1;)
I then reran the test against the same pcap:
Snort Test Suite v.0.3.0

Alerts:
1:1:0 SQL SA brute force login attempt TDS v7/8 Alerts: 470
1:3543:4 SQL SA brute force login attempt TDS v7/8 Alerts: 94
That's a good result as well. I have 94 alerts on the thresholding rule, but 470 alerts on the rule with only the base detection. This tells me that the thresholding is behaving correctly. I've pretty much gone through everything that I can on this end. To recap the process:
  1. Did a quick eyeball check, screwed it up and thought there was a problem.
  2. Did a much more focused check after decoding the packet, discovered that the core functionality was fine.
  3. Checked the layer two and three headers, gave a thumbs up to checksum obfuscation, but didn't see anything problematic there.
  4. Checked our bug system, pulled the research notes and retested the rules using the original test pcap.
  5. Pulled the thresholding out of the rule to verify that that was working correctly.
Everything looked good on our end, so I reported back my findings to the original author. He indicated that the attacks were coming in roughly every 2.5 seconds, which would not trigger the threshold of 5 every 2 seconds (threshold:type threshold, track by_src, count 5, seconds 2;). But this is what we love about Snort: a quick copy-paste to the local.rules file and a change of the threshold window to 1800 seconds will certainly give him enough alerts to deal with.
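
If you want to sanity-check that arithmetic, a few lines of Python model the idea (a simplified sliding-window model, not Snort's actual threshold implementation): five attempts spaced 2.5 seconds apart span 10 seconds, so a 2-second window never fires, while an 1800-second window does.

def threshold_fires(event_times, count=5, seconds=2):
    # Simplified model: fire if `count` events from one source land within
    # a window of `seconds`. Snort's bookkeeping differs in detail.
    for i in range(len(event_times) - count + 1):
        if event_times[i + count - 1] - event_times[i] <= seconds:
            return True
    return False

attempts = [i * 2.5 for i in range(1440)]   # one login attempt every 2.5s for an hour

print(threshold_fires(attempts, count=5, seconds=2))      # False
print(threshold_fires(attempts, count=5, seconds=1800))   # True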

So that is how we approach things from a problem rule perspective. Hopefully there is something in here you can apply to your own rule writing and troubleshooting, or at least you now know that your reports don't just go into the void and that we actually have a process in place to deal with them. Let us know if you have any questions.

Tuesday, December 1, 2009

require_3whs and the Mystery of the Four-Way Handshake

So, Tod Beardsley over at Breakingpoint Labs decided to kick around RFC793 some, and came across the "simultaneous connection". You can read the RFC at http://www.faqs.org/rfcs/rfc793.html, check around page 32 or look for the phrase "Simultaneous initiation". However, for a slightly more user-friendly description, check out Tod's blog entry at Breakpoint:
http://www.breakingpointsystems.com/community/blog/tcp-portals-the-three-way-handshake-is-a-lie

Long story short, there is an acceptable method of session establishment that goes something like this:

SYN ->
<- SYN
SYN/ACK ->
<- ACK

Or, as I call it, the 4-way handshake. In Tod's testing, he found that this connection method worked on Ubuntu, OS X and Windows XP. This essentially reverses the flow of the connection establishment, but functionally does not change how data is transferred. We caught the link to this on Twitter and scheduled some testing time. But we got tied up, and the folks over at Malware Forge posted their research:

http://malforge.com/node/20

Using some Python-fu, they found that it was possible to bypass Snort detection when a malicious server was modified to accept incoming sessions using the simultaneous connection method. The VRT verified this and worked on narrowing down the root cause, and then kicked it over to the dev team.

Enter Russ Combs, stream reassembly guru here at Sourcefire. He told us that Snort had a configuration option that would ensure that the four-way handshake didn’t impact the Stream5 preprocessor’s ability to correctly tag a stream and the subsequent direction of traffic.

The modification is to add the following value to your "preprocessor stream5_tcp:" line: require_3whs

To be clear, in the testing I'm going to show below, here are my values:

(failed test)
preprocessor stream5_tcp: policy first, use_static_footprint_sizes

(passed test)
preprocessor stream5_tcp: policy first, use_static_footprint_sizes, require_3whs


Here are the contents of my local.rules file I used in testing:

alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"Get with http_inspect method check"; flow: to_server, established; content:"GET"; http_method; sid: 3;)
alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"Get with standard content match and flow check"; flow: to_server, established; content:"GET"; sid: 4;)
alert tcp $HOME_NET any -> $EXTERNAL_NET $HTTP_PORTS (msg:"Get with standard content match and no flow check"; content:"GET"; sid: 6;)

Here is the output I got when I ran the test (failed test first, using the fake.pcap from http://malforge.com/node/20):

Snort Test Suite v.0.3.0
Alerts:
1:6:0 Get with standard content match and no flow check
Alerts: 1

In this case, we only alerted on the standard content match without flow enforcement. This indicates that stream5 has incorrectly interpreted the stream. Remember that both the flow keyword and the http_method modifier require stream5 to have properly marked a stream in order to function.

Here are the test results after I added the require_3whs:
Snort Test Suite v.0.3.0
Alerts:
1:3:0 Get with http_inspect method check
Alerts: 1
1:4:0 Get with standard content match and flow check
Alerts: 1
1:6:0 Get with standard content match and no flow check
Alerts: 1

We now correctly alert on checks in both the http_inspect preprocessor and the flow direction. So, if you’re concerned about this evasion case, make the appropriate modifications, and then get to testing.

Hacker2Hacker and the State of Computer Security in Brazil

I was lucky enough to attend the 6th Annual Hacker2Hacker Conference this weekend in Sao Paulo, Brazil as a speaker sent by Sourcefire. As it was my first time in South America, the trip was an enlightening one - not only did I learn all about the awesomeness that is the caipirinha, the "unofficial official" drink of Brazil, but I also picked up a thing or two of interest about the network security community in our largest neighbor to the south.

From a purely technical perspective, Brazil isn't much different from the United States - the people down there who make up the security community are professionals who know what they're doing, and they're working on interesting new web fuzzers, shellcode creation techniques, etc. Even though I speak only very minimal Portuguese, sitting in on some of the technical talks without translation still gave me the clear impression that these guys have the skills, and that anyone who might think otherwise because Brazil isn't a first-world country is sorely mistaken.

Where Brazil diverges from the US, though, is the perception surrounding the information security community. Heading down there is like taking a step back in time: the business community is extremely distrustful of the entire security industry, and the term "hacker" is nearly 100% synonymous with "criminal". The perception of anyone dealing with computer security in Brazil is so bad that Graça Sermoud of Decision Report, who led a panel discussion entitled H2CSO, or "Hackers 2 Chief Security Officers" at the conference, praised the bravery of those who joined the panel, given the potential risk to their reputations for doing so. The concept of White Hats vs. Black Hats is completely foreign to the Brazilian business community - and to most of the IT industry as well, according to many of the local conference attendees I spoke to.

That's not to say that the US is purely a bastion of enlightenment and forward-thinking people, of course; five minutes spent listening to CNN's coverage of anything computer security-related will show you that's clearly not the case. The difference is that here in the States, a substantial portion of the people making business decisions about computer security realize that just because you understand how to break into a network doesn't necessarily mean that you're doing so (at least not without the invitation of a company asking you to test its defenses), and that sometimes it takes someone with knowledge of how to be evil to stop truly evil people.

I'm sure this attitude won't persist forever; the fact that over 600 business professionals were watching the H2CSO panel live during the conference Saturday suggests that perceptions may be starting to shift in a positive direction already. In the meantime, though, if you want to talk IT security with someone in Brazil, you'd do well to keep in mind that your audience may not be as friendly as you'd think.

Wednesday, November 25, 2009

Rule release for today - November 25th, 2009

Extra coverage for the Microsoft Internet Explorer tag issue.

Changelogs etc, available here http://www.snort.org/vrt/advisories/2009/11/25/vrt-rules-2009-11-25.html

Monday, November 23, 2009

Rule release for today - November 23rd, 2009

Microsoft Internet Explorer suffers from a programming error that may allow a remote attacker to execute code on an affected system.

Advisory and changelog here: http://www.snort.org/vrt/advisories/2009/11/23/vrt-rules-2009-11-23.html

Help us help you

Remember how you've been hearing for years that cybercriminals would start targeting smartphones "soon"? Well, we've seen 2 iPhone worms this month alone. The first worm is "rickrolling" jailbroken iPhones in Australia. The worm uses a simple hack to get a foothold on these iPhones: it takes advantage of the fact that many users have installed SSH and have not changed the default SSH password on their phones. The second worm, which has been getting some press over the weekend, takes advantage of the same hack and targets ING bank customers in the Netherlands, redirecting them to a phishing website.

As of August 2009, there were an estimated 13M iPhones in the US. 8.4% of these phones, or 1.1M, were jailbroken. That's a lot of phones. If you are part of that 1.1M and have SSH installed but have not changed the default SSH password, please please please do it now. Like take out your iPhone as you are reading this and follow the steps below now. Don't allow a script kiddie to mess with you or steal your data. Here's how to do it:
  • Download the MobileTerminal from the Cydia Store if you don't already have it
  • Launch MobileTerminal
  • At the prompt type 'su root'
  • You will be asked to enter the current root password to elevate your privilege. Enter 'alpine'
  • Type 'passwd' to change the password
  • You will be asked to enter the current root password. Enter 'alpine'
  • You will be prompted to enter a new password. Enter a strong password that cannot be easily brute-forced
  • Type 'exit' to exit the root account
  • At the prompt type 'passwd' to change the password of the current user
  • You will be asked to enter the current password. Enter 'alpine'
  • You will be prompted to enter a new password. Again, enter a strong password that cannot be easily brute-forced
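If it helps, the whole session in MobileTerminal looks roughly like this (a sketch only - exact prompts vary a bit between firmware versions, 'mobile' is the default non-root user, and what you type at the password prompts won't be echoed back):

$ su root
Password: alpine
# passwd
Old password: alpine
New password:
Retype new password:
# exit
$ passwd
Old password: alpine
New password:
Retype new password: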
That wasn't too hard, was it? Thanks for helping in the fight against malware.

Have a Happy Thanksgiving!

Wednesday, November 18, 2009

Rule release for today - November 18th, 2009

Rules added and modified in several categories. As usual, go here: http://www.snort.org/vrt/advisories/2009/11/18/vrt-rules-2009-11-18.html for the changelog.

Wednesday, November 11, 2009

November 2009 Vulnerability Report

Sourcefire VRT Vulnerability Report November 2009 from Sourcefire VRT on Vimeo.




This month, Alain Zidouemba talks about Microsoft Patch Tuesday, the SSL renegotiation flaw and the iPhone worm.

Tuesday, November 10, 2009

Microsoft Tuesday Coverage for November 2009

A number of advisories from Microsoft this month; expect us to cover the most pressing ones in our upcoming Vulnerability Report. For now, here's a quick overview:

Microsoft Security Advisory MS09-063:
The Web Services on Devices API (WSDAPI) in Microsoft Windows Vista contains a programming error that may allow a remote attacker to execute code on an affected system.

A rule to detect attacks targeting this vulnerability is included in this release and is identified with GID 3, SID 16227.

Microsoft Security Advisory MS09-064:
A vulnerability in the Microsoft License Logging Service may present a remote, unauthenticated attacker with the opportunity to execute code on a vulnerable system.

Rules to detect attacks targeting this vulnerability are included in this release and are identified with GID 3, SIDs 16238 and 16239.

Microsoft Security Advisory MS09-065:
A vulnerability exists in the Windows kernel that may allow a remote attacker to execute code on a vulnerable system.

Rules to detect attacks targeting this vulnerability are included in this release and are identified with GID 3, SIDs 16231 and 16232.

Microsoft Security Advisory MS09-066:
A programming error in the Microsoft Active Directory NTDSA implementation may allow a remote attacker to cause a Denial of Service (DoS) against an affected system.

A rule to detect attacks targeting this vulnerability is included in this release and is identified with GID 3, SID 16237.

Microsoft Security Advisory MS09-067:
Multiple vulnerabilities exist in Microsoft Excel that may allow a remote attacker to execute code on an affected system.

Rules to detect attacks targeting these vulnerabilities are included in this release and are identified with GID 3, SIDs 16226, 16228, 16229, 16230, 16233, 16235, 16236, 16240 and 16241.

Microsoft Security Advisory MS09-068:
A vulnerability in Microsoft Word may allow an attacker to execute code on an affected system via the processing of a specially crafted Word document.

A rule to detect attacks targeting this vulnerability is included in this release and is identified with GID 3, SID 16234.

Changelogs on snort.org here: http://www.snort.org/vrt/advisories/2009/11/10/vrt-rules-2009-11-10.html

Thursday, November 5, 2009

DoJoSec meeting - November 5th

Tonight's DoJoSec has a change in lineup: since Lurene is on the PUP list for today, Matt Olney is stepping in to take her place and give a talk on "Custom Intrusion Detection Techniques for Monitoring Web Applications". It shares a title with the presentation he will give next week at OWASP AppSec DC 09, but tonight's talk won't be the same; it is geared more towards the DoJoSec audience.

If you can attend, we'll see you there. There will be a few of us on hand to answer questions and chat about general security issues.

Wednesday, November 4, 2009

DoJoSec and DoJoCon

Tomorrow evening, starting at 6:00 pm, Capitol College, Laurel MD. Lurene Grenier will be giving a presentation on Byakugan. Following this event, on Friday morning, our Senior Director of the Vulnerability Research Team, Matt Watchinski, will be speaking at DoJoCon.

Check here for DoJoSec: http://www.saecur.com/dojosec.php

Check here for DoJoCon: http://www.dojocon.org/

Members of the VRT will be present at both events, and on Friday and Saturday they will be in attendance at the Sourcefire booth for DoJoCon. Come along with questions if you like or just to say hi.

Tuesday, November 3, 2009

Rule release for today - November 3rd, 2009

Adobe Adobe Adobe Adobe, we thought you only did patch releases once per quarter, guess we were wrong. Anyway, a few vulnerabilities with Shockwave. Get your rules on here: http://www.snort.org/vrt/advisories/2009/11/03/vrt-rules-2009-11-03.html

Monday, November 2, 2009

Paranoia and the rise of fake antivirus

This weekend I got a call from my father, who wanted my advice as the computer security guy in the family. It seems that my younger sister's laptop had become infected with a nasty little virus called Block Watcher, which had popped up a series of messages telling her that her computer was infected with a virus, and that she should go and purchase their product - for the low, low price of $30 - in order to clean her machine. Recognizing that something wasn't right, my sister called my father, who had in turn called me with his theory on how to best remove Block Watcher, since his early attempts had been unsuccessful.

I quickly suggested that he Google for a removal tool, since modern malware is much more difficult to remove than anything he'd be familiar with (his last experience removing a virus was some time in the early-to-mid 1990s). A half-hour or so later, he called back and said that while he'd found a removal tool, something about the site made him uneasy, and he wanted me to take a look and see if I could tell whether it was legitimate. When I pulled up the site - hxxp://removal-tool.com (WARNING: LIVE MALWARE!) - it seemed just as odd to me as it had to him, so I decided to do a bit of research on the site itself. When I put the domain name into Google, one of the first hits was a blog post from the respected malware researchers at Trend Micro showing how this exact site was delivering malware itself!

I downloaded a copy of the executable that the site suggested could be used to remove Block Watcher and ran it through the free ThreatExpert.com analysis tool; the results are here. In addition to creating several files and registry entries on the target machine, the program opened up UDP port 1053 - as clear a sign of a back door as you'll ever get (in fact, SANS shows a recent uptick in activity on this port, and lists a pair of trojans associated with it).

The question I'm sure you have by now is, "So what? Why do I care?". The answer is simple: this sort of fake anti-virus scam is on the rise, and many users on networks that you run and/or are charged with defending aren't as suspicious as my father and my sister. In fact, according to a recently released report from Symantec, there were roughly 43 million attempts to install fake anti-virus software between July 1, 2008, and June 30, 2009. If you're watching over even a moderately large network, chances are that at least a few of your users have run across something like this.

Clearly, it's in your best interests as a network security professional to educate your users about scams like these - perhaps with the simple rule of thumb that "if any program on your system tells you that you have a virus, contact the IT department immediately." It doesn't hurt to run the VRT Certified rule set, either, since our spyware category contains rules for some of the most prevalent threats, like Spyware Guard 2008 (SIDs 16134 & 16135).

Oh, and whatever you do, don't trust McAfee's SiteAdvisor for a determination on whether a particular web site is clean - they rate removal-tool.com as clean, despite the fact that 11 of the 17 user-submitted reviews on McAfee's own page say the page contains "Adware, spyware, or viruses". Clearly someone over there isn't paying attention. ;-)

Thursday, October 22, 2009

Rule release for today - October 22nd, 2009

A few modifications in this release, most notably a fix for a false positive issue that reared its ugly head after the Microsoft Tuesday release.

Microsoft Security Advisory (MS09-059):
A vulnerability in the Microsoft Local Security Authority Subsystem Service (LSASS) may allow a remote attacker to cause a Denial of Service (DoS) against an affected system.

A previously released rule to detect attacks targeting this vulnerability has been modified to reduce the incidence of false positive events. It is included in this release and is identified with GID 3, SID 16167.

As always, changelogs: http://www.snort.org/vrt/advisories/2009/10/22/vrt-rules-2009-10-22.html

Snort 2.8.5.1 Release

Hot on the heels of the Snort 2.8.5 release, a new Snort tarball is now available that fixes a few issues:
  • Fixed syslog output when running on Windows.
  • Fixed potential segfault when printing IPv6 packets using the -v option. Thanks to Laurent Gaffie for reporting this issue.
  • Fixed segfault when additional policies were added during a configuration reload.
There's nothing particularly pressing with any of these issues, but as always you should download and install now.

Wednesday, October 21, 2009

Rapid7 makes a bold statement acquiring the Metasploit Project

Normally the acquisition of an Open Source project by a commercial company wouldn't make the VRT blog, but in this case I believe this acquisition is going to cause some interesting developments in the threat landscape and in the vulnerability management space. I also think this is a very bold endeavor for a vulnerability management company like Rapid7; more on that in a bit.

First up, a quick troll shoot.
  • The license for Metasploit stays BSD.
  • Metasploit continues to be a community driven project.
Next up, why this is interesting to the threat landscape.
  • When an Open Source project gets commercial backing, the developers on that project don't need day jobs anymore. They also get resources, tools, and budgets. In my opinion this means a lot of new code for this project in a short period of time. I saw exactly this when I started with Sourcefire almost 7 years ago: no more small releases, just big old feature releases.
  • Faster exploit development. If you have resources and people, you can quickly set up development environments, test things, reverse things, and build Metasploit modules. I'm guessing the number of exploits in Metasploit will quickly eclipse CORE and Immunity within a 6-month timeframe, following the same course as the Sourcefire VRT: go from 3k rules to 5k rules overnight.
  • Stability and Reliability. If you buy something you want it to work, and once a project has resources, its Open Source users expect a higher quality product. I'd assume they are going to hit this area first.
So what does this have to do with the threat landscape? Well, two things: the first is more exploits, the second is a more reliable assessment platform, which means I now have a much better way to pen-test my network. Pen-testers, network admins, systems administrators and security guys are going to get a better tool for finding vulnerabilities, determining if they are real, and being able to prove it to the boss man. At the same time, my own day job gets a little busier, as everything they crank out I will need to investigate for detection purposes.

On the Vulnerability Management side, I think this changes the game for guys like nCircle and Tenable as Rapid7’s NeXpose™ product will be the only Vulnerability Management tool that can actually prove what it is reporting. It also gives Rapid7 the interesting advantage of being able to live test mitigation strategies and defenses. This is something that other vulnerability management solutions can’t do out of the box. That said it is going to be interesting to see how this integration takes place, and how many people are willing to click the “exploit host” button if that is how it is done.

Outside of all that, I always love seeing Open Source products make it into the commercial game, as it continues to show the value of Open Source in the enterprise, and that just because software is free doesn't mean it's not worth more than the sum of all its license text.

Tuesday, October 20, 2009

Vulnerability Report now available via iTunes

Yes, that's right, our monthly vulnerability report is now available for your convenience, via iTunes. To subscribe, hit up this link: http://itunes.apple.com/WebObjects/MZStore.woa/wa/viewPodcast?id=336370330

Note that the video is large due to it being in high definition; we'll be making adjustments as we move forward to make sure the download size is more reasonable.

Additionally, there is now a Sourcefire application for the iPhone/iPod Touch that will keep you up to date with the happenings on our blog, snort.org and the top 10 malware threats from ClamAV.

Rule release for today - October 20th, 2009

A maintenance release this week, with several new rules in web-client, specific-threats, web-misc, oracle, smtp and dos rule sets.

As always, the changelogs are available here:

http://www.snort.org/vrt/advisories/2009/10/20/vrt-rules-2009-10-20.html

Thursday, October 15, 2009

October 2009 Vulnerability Report

Sourcefire VRT Vulnerability Report October 2009 from Sourcefire VRT on Vimeo.


This month's report covers the Microsoft Tuesday advisories, including the IIS FTP vulnerability, the SMBv2 remote code execution flaw and the Adobe patch Tuesday releases.

Wednesday, October 14, 2009

How does malware know the difference between the virtual world and the real world?

It is no secret that the Information Security industry takes advantage of virtualization software in order to research security threats. VMware, Sandboxie, Virtual PC, Anubis, CWSandbox, JoeBox, VirtualBox, Parallels and QEMU are just a few of these virtual environments. This cornucopia of virtual environments gives the security professional the opportunity to observe and analyze malicious software in a convenient and easily reproducible manner. This presents an issue for malware writers, and because of this they often include code in their binaries to make it more difficult for computer security professionals to analyze their executables in those virtual environments. Here are some of the most frequent anti-virtualization techniques:

Check for the presence of virtualized hardware:

Virtual environments have virtual network interfaces. Just like any network interface, they are assigned a unique MAC address that usually includes the manufacturer's identification number. For example, a network interface for VMware Workstation will have a MAC address that starts with 00:50:56 or 00:0C:29 (VMware has more than one organizationally unique identifier, or OUI). Malware can check for the presence of certain OUIs and choose to behave differently, or not to display any malicious behavior whatsoever, in a virtual machine.
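To give you an idea of how trivial this kind of check is, here is a minimal Python sketch of the same logic (purely illustrative - it is not taken from any sample, the OUI list only contains the two prefixes mentioned above, and uuid.getnode() only reports one interface's address):

import uuid

# OUIs mentioned above as belonging to VMware (not an exhaustive list)
VMWARE_OUIS = ("00:50:56", "00:0c:29")

# uuid.getnode() returns the MAC address of one interface as a 48-bit integer
raw = "%012x" % uuid.getnode()
mac = ":".join(raw[i:i + 2] for i in range(0, 12, 2))

if mac.lower().startswith(VMWARE_OUIS):
    print("MAC %s has a VMware OUI - probably a VMware guest" % mac)
else:
    print("No VMware OUI seen on %s" % mac)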

It is also possible to check for the presence of identifiers, such as the Windows ProductId, that give away the fact that the code is being run in a virtual environment. Eg: MD5: 0151c5afde070a7b194f492d26e9b3ef (Trojan.Agent-124243 by ClamAV):
.text:004012EA                 jz      short loc_40130E
.text:004012EC                 push    104h            ; size_t
.text:004012F1                 push    offset a76487644317703 ; "76487-644-3177037-23510"
.text:004012F6                 lea     ecx, [ebp+var_104]
.text:004012FC                 push    ecx             ; char *
.text:004012FD                 call    _strncmp
.text:00401302                 add     esp, 0Ch
.text:00401305                 test    eax, eax
.text:00401307                 jnz     short loc_40130E

The presence of HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\ProductId 76487-644-3177037-23510 shows that the host environment is CWSandbox.

Each virtual machine is associated with specific device drivers and registry values that give away its nature. For instance:

Hard drive driver (VMware):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\IDE\DiskVMware_Virtual_IDE_Hard_Drive___________00000001\3030303030303030303030303030303030303130\FriendlyName VMware Virtual IDE Hard Drive

Video driver (VMware):
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Class\{4D36E968-E325-11CE-BFC1-08002BE10318}\0000\DriverDesc VMware SVGA II

Mouse driver (VMware):
%WINDIR%\system32\drivers\vmmouse.sys

Any of these can be used by a malware writer to detect the presence of a virtual machine.
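To give a feel for how little effort that takes, here is a hedged Python sketch that looks for just two of the VMware artifacts listed above (Windows-only, uses Python 3's winreg module, and is purely illustrative rather than code lifted from any sample):

import os
import winreg

def vmware_video_driver_present():
    # The DriverDesc value called out above reads "VMware SVGA II" in a VMware guest
    key_path = (r"SYSTEM\CurrentControlSet\Control\Class"
                r"\{4D36E968-E325-11CE-BFC1-08002BE10318}\0000")
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path) as key:
            desc, _ = winreg.QueryValueEx(key, "DriverDesc")
            return "VMware" in str(desc)
    except OSError:
        return False

def vmware_mouse_driver_present():
    # vmmouse.sys only ships with the VMware guest drivers
    windir = os.environ.get("WINDIR", r"C:\Windows")
    return os.path.exists(os.path.join(windir, "system32", "drivers", "vmmouse.sys"))

print("VMware SVGA driver in registry:", vmware_video_driver_present())
print("vmmouse.sys on disk:", vmware_mouse_driver_present())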

Descriptor Table Registers check:

There is only one Interrupt Descriptor Table Register (IDTR), one Global Descriptor Table Register (GDTR) and one Local Descriptor Table Register (LDTR) per processor. Since there are two operating systems running at the same time (the host and the guest), the virtual machine needs to relocate the IDTR, GDTR and LDTR for the guest OS to different locations in order to avoid conflicts. This causes inconsistencies between the values of these registers in a virtual machine and on the native machine. The SIDT, SGDT and SLDT assembly instructions can respectively be used to retrieve the values of IDTR, GDTR and LDTR.

Eg: MD5: b27d73bfcbaec49a95f23b45e9cf9310 (W32.Virut-54 by ClamAV)
UPX2:3142A03A loc_3142A03A:              ; CODE XREF: sub_3142A02E+2 j
UPX2:3142A03A                 push    eax
UPX2:3142A03B                 sidt    fword ptr [esp+var_6+4]
UPX2:3142A040                 pop     eax
UPX2:3142A041                 mov     eax, [eax+6]
UPX2:3142A044                 shl     eax, 10h
UPX2:3142A047                 jns     short sub_3142A021

The IDT is at:
0x80ffffff in Windows
0xe8XXXXXX in Virtual PC
0xffXXXXXX in VMware


Backdoor I/O port:

VMware uses the I/O port 0x5658 ("VX" in ASCII) to communicate with the virtual machine. A piece of malware could detect the presence of that port by doing the following:
mov EAX, 564D5868h ; VMXh
xor EBX, EBX  ; set EBX to anything but 0x564D5868 (in this case 0)
mov CX, 0Ah   ; Backdoor command. 10: Get VMware version
mov DX, 5658h  ; VX
in EAX, DX  ; Read from port VX into EAX
cmp EBX, 564D5868h ; EBX should contain the magic number VMXh if VMware is present. If not, EBX=0

Basically, the magic number 0x564D5868 ("VMXh") is copied to EAX and EBX is set to anything but 0x564D5868. A backdoor command is loaded into CX and finally the I/O port number 0x5658 ("VX") is loaded into DX. Then the "in" instruction is used to read from port 0x5658 into EAX. Outside of VMware (on a native host), a privilege error occurs. Under VMware, the magic number 0x564D5868 is returned in EBX (yes, in this case EBX is affected by "in EAX, DX"), hence the CMP instruction.

Exit if being debugged:

While this is not, per se, an anti-virtualization technique, it remains a popular check performed by malware to see if it is being debugged. That's because more often than not a debugger will be installed in a virtual image used for malware analysis.

Eg: MD5: 74ab05d1ebdba509fd68711b360c1235 (Trojan.IRCBot-3475 by ClamAV)
.text:004050F8                 push    offset aZwquerysystemi ; "ZwQuerySystemInformation"
.text:004050FD                 push    [ebp+hModule]   ; hModule
.text:00405100                 call    ds:GetProcAddress
.text:00405106                 mov     [ebp+var_4], eax
.text:00405109                 push    offset aZwqueryinforma ; "ZwQueryInformationProcess"
.text:0040510E                 push    [ebp+hModule]   ; hModule
.text:00405111                 call    ds:GetProcAddress
.text:00405117                 mov     [ebp+var_14], eax
.text:0040511A                 cmp     [ebp+var_4], 0
.text:0040511E                 jz      short loc_405147

.text:00405120                 push    0
.text:00405122                 push    2   ; SystemInformationLength
.text:00405124                 lea     eax, [ebp+var_8]
.text:00405127                 push    eax ; SystemKernelDebuggerInformation
.text:00405128                 push    23h
.text:0040512A                 call    [ebp+var_4] ; ZwQueryInformationProcess
.text:0040512D                 test    eax, eax
.text:0040512F                 jnz     short loc_405147 ; process is being debugged

For the Windows API function ZwQuerySystemInformation, setting the value of SystemInformationClass to 0x23 (SystemKernelDebuggerInformation), as the disassembly above does, retrieves flags indicating whether a kernel debugger is attached to the system (the 2 pushed above is the SystemInformationLength).
NTSTATUS WINAPI ZwQuerySystemInformation(
__in       SYSTEM_INFORMATION_CLASS SystemInformationClass,
__inout    PVOID SystemInformation,
__in       ULONG SystemInformationLength,
__out_opt  PULONG ReturnLength
);

For the Windows API function ZwQueryInformationProcess, setting the value of ProcessInformationClass to 7 (ProcessDebugPort) retrieves the port number for the debugger of the process. A value other than 0 indicates that the process is being run through a user-land debugger.
NTSTATUS WINAPI ZwQueryInformationProcess(
__in       HANDLE ProcessHandle,
__in       PROCESSINFOCLASS ProcessInformationClass,
__out      PVOID ProcessInformation,
__in       ULONG ProcessInformationLength,
__out_opt  PULONG ReturnLength
);
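For reference, the ProcessDebugPort check is easy to reproduce from user land. Here is a minimal ctypes sketch along the same lines (Windows-only and purely illustrative; ntdll exports the routine under the Nt prefix, and this version checks the current process rather than another one):

import ctypes

ProcessDebugPort = 7  # the ProcessInformationClass value discussed above

ntdll = ctypes.WinDLL("ntdll")
NtQueryInformationProcess = ntdll.NtQueryInformationProcess
NtQueryInformationProcess.argtypes = [
    ctypes.c_void_p,                  # ProcessHandle
    ctypes.c_int,                     # ProcessInformationClass
    ctypes.POINTER(ctypes.c_void_p),  # ProcessInformation (receives the debug port)
    ctypes.c_ulong,                   # ProcessInformationLength
    ctypes.POINTER(ctypes.c_ulong),   # ReturnLength (optional)
]

current_process = ctypes.c_void_p(-1)  # pseudo-handle, same value GetCurrentProcess() returns
debug_port = ctypes.c_void_p(0)

status = NtQueryInformationProcess(current_process, ProcessDebugPort,
                                   ctypes.byref(debug_port),
                                   ctypes.sizeof(debug_port), None)

if status == 0 and debug_port.value:
    print("A user-land debugger is attached to this process")
else:
    print("No debugger detected (or the call failed)")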


How to thwart virtual machine detection:

For starters, do not install the tools provided by the virtual machine vendor in your guest OS. For example, VMware provides a set of tools called VMware Tools that enhances the overall user experience with the guest OS. The drawback is that installing VMware Tools in a Windows guest OS will leave many clues, easily detectable by a piece of malware, that it is being run in a virtual machine.

The next step is to edit your VMware .vmx file. When you create a new virtual image with VMware, settings about it are stored in a configuration file with the .vmx extension. The file contains information about networking, disk size, devices attached to the virtual machine, etc., and is usually located in the directory where you created your virtual image. With your guest OS stopped, edit the .vmx file and append the following:

isolation.tools.setPtrLocation.disable = "TRUE"
isolation.tools.setVersion.disable = "TRUE"
isolation.tools.getVersion.disable = "TRUE"
monitor_control.disable_directexec = "TRUE"
monitor_control.disable_chksimd = "TRUE"
monitor_control.disable_ntreloc = "TRUE"
monitor_control.disable_selfmod = "TRUE"
monitor_control.disable_reloc = "TRUE"
monitor_control.disable_btinout = "TRUE"
monitor_control.disable_btmemspace = "TRUE"
monitor_control.disable_btpriv = "TRUE"
monitor_control.disable_btseg = "TRUE"

Now start your virtual machine. This will allow you to run (with very little effort) more VMware-aware malware than before.

I'll point out that:

  1. monitor_control.disable_directexec = "TRUE" will usually thwart descriptor table register checks. This setting makes VMware interpret each assembly instruction instead of executing it directly on the processor. Therefore the result of a sidt instruction will not be an address in the 0xffXXXXXX range, as it would be without this setting.
  2. isolation.tools.getVersion.disable = "TRUE" will thwart the backdoor I/O check.

Now, what if after all this, your piece of malware still detects that it is being run in a virtual machine? I would go through the code, find where virtual machine checks are being performed and patch the code with NOPs (0x90).
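If you go the patching route, it can be as simple as overwriting the check in a copy of the sample once you have its file offset and length from your disassembler. A minimal Python sketch (the offset and length below are made-up placeholders, not values from any of the samples above):

# Overwrite a VM/debugger check in a copy of the sample with NOPs (0x90).
# CHECK_OFFSET and CHECK_LENGTH are placeholders - use the real file offset
# and length of the check as reported by your disassembler.
CHECK_OFFSET = 0x12EA
CHECK_LENGTH = 0x24

with open("sample.exe", "rb") as f:
    data = bytearray(f.read())

data[CHECK_OFFSET:CHECK_OFFSET + CHECK_LENGTH] = b"\x90" * CHECK_LENGTH

with open("sample_patched.exe", "wb") as f:
    f.write(data)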

Finally, if that's too hard or not possible for whatever reason, run your sample on a native system! :-) (you can always use system backup and restore software to quickly revert the machine to its original state without reinstalling the OS)