Friday, July 15, 2011

Do you really trust that certificate?

If you've read many of my posts on this blog, you've probably realized by now that I'm lazy when it comes to dealing with malware. I hate the "whack-a-mole" game of trying to stay on top of every new trick that every new piece of malware pulls - not only because it would keep me busy 24/7, but also because even if I (and a hundred of my closest friends) tried, Snort still wouldn't end up with particularly useful coverage.

With that in mind, I was pleased to add SID 19551 - "POLICY self-signed SSL certificate with default Internet Widgits Pty Ltd organization name" - to our last rule release (issued on 2011-07-14). On the surface, it doesn't seem related to malware at all - but if you'll give me a moment to explain, you may find yourself turning it on in the not-too-distant future.

The thought process behind it started with the barrage of requests I've received recently for coverage of the "indestructible" TDL4 botnet. One requester was kind enough to supply this excellent analysis from SecureList.com, which provided a list of recent C&C servers for this particular botnet. Armed with that information, I was able to query my malware sandbox, which had dozens of recently-run samples ready for analysis.

Sifting through the traffic, I noticed that, in addition to the custom encryption described in the analysis I'd read, successful connections to the C&C servers were starting off with SSL-encrypted traffic. Most people would be disheartened at the sight of this; I immediately zoomed in and looked at the server certificate, hoping that the botnet authors had used a unique Common Name in the certificates that we could use for an easy rule. Unfortunately, they had not; instead, they'd used the default option for a self-signed certificate, "Internet Widgits Pty Ltd".

As I sifted through the rest of the PCAPs, hoping to find a quality pattern, it dawned on me that even the default organization name on a self-signed certificate was a useful indicator in its own right. Sure, most IT administrators use self-signed certs on their internal gear, and even some cheap administrators of low-traffic, public-facing sites will use them too (::looks at 21-year-old self::). But any serious, relevant site that uses SSL will have a certificate validly signed by a trusted CA - and even the cheapskates out there will usually set a non-default name on their self-signed certs.
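To make the heuristic concrete, here's a minimal sketch in Python of the same check done outside of Snort - this is not the rule itself, just an illustration. It assumes the third-party "cryptography" package, and the hostname is a placeholder:

    import ssl
    from cryptography import x509
    from cryptography.x509.oid import NameOID

    # OpenSSL's default answer to the "Organization Name" prompt
    DEFAULT_ORG = "Internet Widgits Pty Ltd"

    def has_default_org(host, port=443):
        # Fetch the server's certificate without validating the chain
        # (a self-signed cert would fail validation by definition).
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode())
        # Compare the subject's organizationName to the OpenSSL default.
        orgs = cert.subject.get_attributes_for_oid(NameOID.ORGANIZATION_NAME)
        return any(attr.value == DEFAULT_ORG for attr in orgs)

    print(has_default_org("suspicious.example.com"))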

With that in mind, we've made this rule available, just in case you agree with this logic and want to give this method of detection a spin. The rule is of course off by default, given that it could generate a substantial number of false positives (we'll be eager to get feedback from the field on just how many it generates, and what the ratio of useful-to-garbage alerts actually looks like). If you do decide to turn it on, I would recommend also checking that you've enabled SIDs 19496-19550, which look for DNS queries for the TDL4 C&C domains listed in the report I referenced above. If you see the two fire in rapid sequence, well, chances are real high you've got a problem on your hands.
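If you'd like to automate that "fire in rapid sequence" check, here's a rough correlation sketch in Python - not a supported tool, just an illustration. It assumes Snort's "fast" alert log format; the log path and the 60-second window are arbitrary choices for the example:

    import re
    from datetime import datetime, timedelta

    # Matches a Snort "fast" alert line: timestamp, SID, src -> dst
    ALERT = re.compile(
        r"(\d\d/\d\d)-(\d\d:\d\d:\d\d\.\d+).*\[1:(\d+):\d+\].*"
        r"(\d+\.\d+\.\d+\.\d+):\d+ -> (\d+\.\d+\.\d+\.\d+):\d+")

    dns_hits = {}  # client IP -> time of most recent TDL4 DNS alert
    with open("/var/log/snort/alert") as log:
        for line in log:
            m = ALERT.search(line)
            if not m:
                continue
            date, clock, sid, src, dst = m.groups()
            ts = datetime.strptime(date + " " + clock, "%m/%d %H:%M:%S.%f")
            if 19496 <= int(sid) <= 19550:
                dns_hits[src] = ts  # DNS query: the client is the source
            elif int(sid) == 19551 and dst in dns_hits:
                # The certificate flows server -> client, so the client
                # is the destination in this alert.
                if ts - dns_hits[dst] <= timedelta(seconds=60):
                    print(dst + ": TDL4 DNS query followed by default cert")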

P.S. Seems the good folks at Netresec.com had a similar idea just a few days before we published that rule. I promise we didn't steal it from them; it's just that great minds think alike. ;-)

Wednesday, July 13, 2011

Binary C&C Over HTTP

A few weeks ago I gave a presentation at the CARO 2011 Workshop in Prague. Besides being set in a stunningly beautiful location, the conference was an excellent opportunity to meet malware researchers from around the world - a group who are, by and large, distinct from network security researchers.

Since I personally happen to think that the separation of these two groups is a shame (and, well, since I needed a topic that would get me out to Prague in the springtime), my presentation crossed the proverbial streams by looking at malware-generated network traffic. Thanks to the malware sandbox we have running over here, I've got traffic like that coming out my ears.

Specifically, the presentation focused on pure binary C&C channels being sent over port 80, the standard HTTP port. After the Night Dragon trojan (SIDs 18458/18459 for those keeping score at home) created a big media stir back in February, I was struck by the realization that sending data without HTTP headers over port 80 was actually a pretty solid trick, and that other malware authors might be doing something similar. After all, basically every firewall on the planet will let you initiate an outbound connection to the Internet on that port, and NetFlow sure isn't going to do much good on the busiest port on any network. Where better to be a needle in a haystack?
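As a concrete illustration of how simple that check can be, here's a quick triage sketch in Python - not our production detection, just the core idea. It assumes the third-party "scapy" package, and sample.pcap is a placeholder filename:

    from scapy.all import IP, TCP, Raw, rdpcap

    # Plausible first bytes of a legitimate HTTP request
    HTTP_PREFIXES = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ",
                     b"OPTIONS ", b"TRACE ", b"CONNECT ")

    def flag_headerless_port80(pcap_path):
        seen = set()  # judge only the first payload of each client flow
        for pkt in rdpcap(pcap_path):
            if IP in pkt and TCP in pkt and Raw in pkt and pkt[TCP].dport == 80:
                flow = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst)
                if flow in seen:
                    continue
                seen.add(flow)
                payload = bytes(pkt[Raw].load)
                if not payload.startswith(HTTP_PREFIXES):
                    print(flow, "sent non-HTTP data on port 80:", payload[:8])

    flag_headerless_port80("sample.pcap")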

Running through approximately 1.5 million PCAPs from the sandbox, I realized that not only was this sort of thing happening in other malware families - it was actually fairly common. In fact, a full 0.8% of those 1.5 million samples - roughly 12,000 of them - showed this sort of behavior, a number which seems small until you realize just how much malware you could catch with extremely simple behavioral analysis.

For those interested in more details, you can read my slides here. We are willing to share samples with legitimate security researchers - provided you're willing to send relevant data back our way in return.

For those just interested in protecting their networks - we're currently working with the Snort team to find the best way of detecting traffic like this at a generic level. In the meantime, I highly suggest that you enable SID 18492 - which looks for DNS queries made by the most prevalent bit of malware displaying this behavior in our sandbox - and that you consider turning on the entirety of our blacklist.rules and botnet-cnc.rules categories, which is where we're adding most of the new rules pulled from data generated by the sandbox.

Tuesday, July 12, 2011

Now Available -- Razorback 0.2 Release Candidate

0.2 Release Candidate

This week we’re putting out the Razorback 0.2 release candidate.  You can find it here:

http://sourceforge.net/projects/razorbacktm/files/Razorback/razorback-0.2.0-rc.tbz/download

This release, and the 0.2 final release scheduled for next week, contain all the major functionality for the dispatcher. The dispatcher in 0.2 now has the following capabilities:
  • Data acquisition and submission API
  • Alerting and judgment API
  • Queue-based messaging system
  • Data blocks stored to disk
  • Support for local (shared file system) and over-the-wire data block transmission
  • Local and global caching services
  • MySQL database back-end
  • Remote management through the use of libcli

We use several open source services and libraries, so you’ll need to have those set up. The quick list is:
  • Apache's ActiveMQ
  • memcached (and associated libraries)
  • libcli
  • mysql (and associated libraries)
  • uuid libraries

Tom "The Amish Hammer" Judge has done a great job of laying out the prerequisites and other installation information on the Sourceforge Trac site here: http://sourceforge.net/apps/trac/razorbacktm/. After you have the prerequisites for installation, getting setup with a basic setup goes something like this:
  • tar -jxvf razorback-0.2.0-rc.tbz
  • cd razorback
  • ./configure --prefix=/home/myhome/02rc/ --enable-debug --disable-officeCat --enable-routing-stats --disable-snort --disable-clamavNugget --with-api=/home/myhome/02rc/lib/
  • make; make install
  • Use the .sql scripts in ./dispatcher/share to set up the schema and populate key data fields
  • cd /home/myhome/02rc/etc/razorback
  • Change the names of *.config.sample to *.config
  • Change the name of magic.sample to magic
  • Edit dispatcher.config
    • Modify database settings
    • Modify GlobalCache settings to point to your memcached server
    • Change username/password for the console
    • For now, leave everything else at default
  • Edit rzb.config
    • Modify MessageQueue to point to your ActiveMQ server
  • cd /home/myhome/02rc/bin
  • ./dispatcher -d
    • Dispatcher should start up in debug mode
  • In another window, and in /home/myhome/02rc/bin:
    • ./masterNugget -d
    • The master nugget and any nuggets you configured should start up in debug mode
  • In another window, and in /home/myhome/02rc/bin:
    • Find a PDF file
    • Inject it into the system:
      • /home/myhome/02rc/bin/fileInject --type=PDF_FILE --file=monkey.pdf
    • A copy, named with a block- prefix, should be in your /tmp directory. This is done by the File Log nugget.
That test means your basic setup works. We'll follow up with more information on the ClamAV and Snort-as-a-Collector nuggets in a future blog post, but both are functional in this build. As always, you can get support from the Razorback Trac site or from the Razorback mailing lists.
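If you want to script that last sanity check, here's a trivial sketch in Python (the /tmp path and block- prefix come from the step above; adjust if your File Log nugget is configured differently):

    import glob

    # Look for data block copies written by the File Log nugget
    blocks = glob.glob("/tmp/block-*")
    if blocks:
        print("File Log nugget output found:", blocks)
    else:
        print("No block-* files in /tmp - check the dispatcher output.")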

Q3 -- Detection

Now that we have the core of the system mostly in place, the Supreme High Royal Emperor Watchinski, head of the VRT, has declared that Q3 will be dedicated to building out the detection capability. And there was much rejoicing. (Seriously, the Dispatcher is awesome and all, but what we really want to do is detect bad things. It's our thing.)

To that end, we'll be working towards several goals:
  • A script interface so that detection can be built in any given scripting language
  • A web portal so you can submit files to our Razorback deployment
  • A "Defense Run" where each developer works on two new nuggets for collection or detection
  • Improved configuration setup
  • A set of ISOs and VMware images so you can quickly get the system up for testing

We'll keep you up to date on the Q3 stuff, and we hope you'll let us know how you're doing with the 0.2 RC. You can expect a final release of the 0.2 build sometime next week, provided all goes well.