Tuesday, June 29, 2010

Rule Release for Today, Tuesday June 29th, 2010

We added and modified multiple rules in the backdoor, dos, exploit, misc, multimedia, netbios, oracle, pop3, rpc, specific-threats, web-activex, web-client and web-misc rule sets.

Information is here: http://www.snort.org/vrt/advisories/2010/06/29/vrt-rules-2010-06-29.html/

Monday, June 28, 2010

IMPORTANT Rule Download Change

Today the Snort Web Team made a change to the way that Snort rules are downloaded from snort.org. Hopefully this will result in faster downloads for most people. The changes are highlighted below:

We are changing the way we publish rules. In June 2010 we stopped offering rules in the "snortrules-snapshot-CURRENT" format. Instead, rules are released for specific versions of Snort. You will be responsible for downloading the correct rules release for your version of Snort. The new versioning mechanism requires a four-digit version in the file name. For the Subscriber and Registered releases of Snort 2.8.6.0 and Snort 2.8.5.3, the download links look as follows:


Subscriber Release:

http://www.snort.org/sub-rules/snortrules-snapshot-2860.tar.gz/43f45cd452456094ac7e3ae58b12d256fa3d2f23

http://www.snort.org/sub-rules/snortrules-snapshot-2853.tar.gz/43f45cd452456094ac7e3ae58b12d256fa3d2f23


Registered User Release:

http://www.snort.org/reg-rules/snortrules-snapshot-2860.tar.gz/43f45cd452456094ac7e3ae58b12d256fa3d2f23

http://www.snort.org/reg-rules/snortrules-snapshot-2853.tar.gz/43f45cd452456094ac7e3ae58b12d256fa3d2f23



Configuring Oinkmaster:

In order to use Oinkmaster to update Snort with VRT rules you must edit oinkmaster.conf.

In the oinkmaster.conf modify "url" to:

url = http://www.snort.org/pub-bin/oinkmaster.cgi/<oinkcode here>/<filename>



Important Note:

As noted above, the CURRENT and 2.8 naming conventions have been deprecated as of June 2010 for oinkmaster downloads. You are responsible for updating your oinkmaster.conf file to reflect your installed version of Snort. Continued attempts to download outdated versions will result in being banned. Example for snort 2.8.6.0:


url = http://www.snort.org/pub-bin/oinkmaster.cgi/43f45cd452456094ac7e3ae58b12d256fa3d2f23/snortrules-snapshot-2860.tar.gz



Example for snort 2.8.5.3:


url = http://www.snort.org/pub-bin/oinkmaster.cgi/43f45cd452456094ac7e3ae58b12d256fa3d2f23/snortrules-snapshot-2853.tar.gz
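If you script your rule updates, the mapping from a dotted Snort version to the new four-digit download URL is mechanical. Here is a minimal Python sketch of that mapping; the oinkcode below is the placeholder hash from the examples above, not a real code, so substitute your own:

```python
def rules_url(oinkcode: str, snort_version: str) -> str:
    """Build the versioned oinkmaster download URL.

    snort_version is the dotted version string, e.g. "2.8.6.0",
    which maps to the four-digit form "2860" in the filename.
    """
    four_digit = snort_version.replace(".", "")
    return ("http://www.snort.org/pub-bin/oinkmaster.cgi/"
            f"{oinkcode}/snortrules-snapshot-{four_digit}.tar.gz")

# Placeholder oinkcode from the examples above -- use your own from snort.org.
code = "43f45cd452456094ac7e3ae58b12d256fa3d2f23"
print(rules_url(code, "2.8.6.0"))
print(rules_url(code, "2.8.5.3"))
```

The printed line is what goes after `url = ` in your oinkmaster.conf.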


Saturday, June 26, 2010

Smart Grids and the Importance of Smart Security Choices

I got a flyer in my mail a couple of days ago, telling me that my local utility company would be coming out soon to install a smart meter on my house. Like most customers, I didn't think too much about it, until the new meter was installed today. That's when my curiosity got the better of me - even though I arrived home after dark, I had to go take a look at the shiny new toy on the side of my house.

At first glance, it was somewhat disappointing. The rusty old box surrounding the meter (which has probably been there since the house was built in 1942) hadn't been replaced. Sure, the new meter had a nice little LED, and I even saw a kWh reading flash by...but it was still a meter, nothing too exciting. There was, however, a prominent display of the manufacturer's name - Elster - and a model number, R2SD (commonly known as REX2), off in the corner. "Hmmm," I thought. "I should go Google that. I wonder what protocol it speaks?"

My search immediately turned up a link to a Canadian regulatory document approving the use of this type of meter in the country. Reading through it, I quickly hit some security red flags:
  • "The REX2 meter is equipped with 900MHz radio frequency communications..."
  • "...the meter has the ability to update the communications firmware remotely."
  • "When the meter is registered to the local area network (LAN) it may display a registration number of the collector."

Doomsday scenarios immediately began popping into my head. 900MHz is an open, easily accessed frequency here in the United States; what is there to prevent pranksters, criminals, or even Google Street View cars from accessing my meter while they drive down the street? Hacking programmable road signs to warn of "Zombies Ahead" was funny; somebody coming along and making my meter tell the power company to up the voltage could mean my house burns down. The remote kill ability cheerily advertised in the flyer sent by the power company as a "feature" could easily be abused to whack power to entire neighborhoods with a few keystrokes. Oh, and what if someone uploaded a malicious new piece of firmware to my power meter, and ended up with complete control of the electricity coming into my house - or worse yet, used my meter as an access point to break into the larger electrical grid?

Digging a little further, I got a little reassurance when I found my meter's specifications page, which, handily enough, included a "Security" tab at the bottom. It seems that these meters use 128-bit AES encryption when talking to the Energy Axis network, which is in use by my utility company for transferring data to and from these new smart meters. That proves that the manufacturers are at least thinking about security, and provides a moderate barrier to entry for anyone trying to tamper with the system.

The data transmission itself uses the ZigBee protocol - which, surprisingly enough, is an open standard, freely available to anyone who wants to wade through a 604-page brick of a specification. Since digesting that will take some time, I decided to simply read the Wikipedia article instead, which again had a handy security-related section. The initial sentence there was great:

"As one of its defining features, ZigBee provides facilities for carrying out secure communications, protecting establishment and transport of cryptographic keys, cyphering frames and controlling devices."

Wow! Security built right in - how great is that?

Well, as it turns out...not so great. Things went from bad:

"This part of the architecture relies on the correct management of symmetric keys and the correct implementation of methods and security policies."

...to worse:

"Keys are the cornerstone of the security architecture; as such their protection is of paramount importance, and keys are never supposed to be transported through an insecure channel. There is a momentary exception to this rule, which occurs during the initial phase of the addition to the network of a previously unconfigured device."

Yes, that's right, folks: this protocol sends its encryption keys over the network in plaintext when it starts up for the first time. I know, I know, the window of opportunity is maybe 30 seconds...but really, you couldn't think of some way to avoid sending the keys to the kingdom over an insecure channel, even if it is only once?
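To see why that one-time plaintext key exchange undermines everything that follows, consider this toy sketch. It is deliberately not ZigBee's real AES-CCM* machinery (the stand-in cipher here is a simple HMAC-based keystream, invented purely for illustration); the point is that once an eavesdropper captures the network key during that initial window, every subsequent "encrypted" frame is readable:

```python
import hashlib
import hmac

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream (HMAC-SHA256 in counter mode). NOT ZigBee's real
    AES-CCM* scheme -- just a stand-in symmetric cipher to illustrate
    the key-distribution problem."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """Encrypts or decrypts (XOR with the keystream is its own inverse)."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# Joining the network: the key crosses the air unencrypted, once.
network_key = b"sixteen-byte-key"
sniffed_key = network_key  # an eavesdropper in radio range now has it too

# Every later "secure" frame is an open book to that eavesdropper.
reading = b"kWh=001337"
frame = xor_cipher(network_key, b"nonce-01", reading)
recovered = xor_cipher(sniffed_key, b"nonce-01", frame)  # the reading, in the clear
```

However strong the cipher itself is, the whole scheme is only as secure as the delivery of that first key.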

Still, I'll take an open standard whose creators at least had security in mind when they wrote it over one of the myriad closed, poorly documented SCADA protocols in use throughout the utility industry, where devices will happily reply to any query without a hint of authentication and the entire network is assumed to be safe. Some security is better than no security at all.

Given the inevitability of the smart grid - not only is it being hyped by politicians of every stripe, my power company's FAQs tell me that I could not have opted out even if I had wanted to - we clearly can no longer rely on the typical SCADA security model of "don't plug it into the Internet and we'll be cool." Networked toasters may not be here yet, but networked power is, and the people who run these systems need to be thinking long and hard about security, and making sure that they implement it as intelligently as possible. Let's make sure that we, as both the general public and the security industry, keep our eyes on these folks as more and more networked utilities roll out - because after all, what good are your firewall, IDS, and AV systems if you lose power to all of your machines?

Tuesday, June 22, 2010

ClamAV for Windows

Recently, we released the only official Windows-specific version of ClamAV, appropriately called ClamAV for Windows (http://www.clamav.net/lang/en/about/win32/). It is designed to use very little memory and processing power because it relies on an advanced cloud-based protection mechanism, and best of all, it's free (as in free beer. Ummm...beeeeer). If you haven't tried it yet, I really encourage you to.

You can download ClamAV for Windows from here: http://www.clamav.net/lang/en/about/win32/ or by going to a site like download.com and typing "clamav" in the search box. There are two installers available: a 32-bit version and a 64-bit version. If you don't know which one to choose for your Windows operating system, you can check this page: http://support.microsoft.com/kb/827218. It will tell you whether you are running a 32-bit or a 64-bit version of Windows. If that's too complicated, just start by downloading the 64-bit version. If you have a 64-bit operating system, you will get a speed boost from running the 64-bit version of ClamAV for Windows. If it turns out that you are running a 32-bit version of Windows, don't worry, executing the 64-bit installer will generate this warning:
64-bit warning
Pic.1: Wrong installer version
That will be your cue to grab the 32-bit installer instead :-)
In the last step of the installation process, you can opt to perform a recommended initial FlashScan. A FlashScan is not as comprehensive as a full scan but is designed to be a quick check for your system to see if you have any malware running in memory. The last screen in the installation process will also ask whether you want to share that you installed ClamAV for Windows with your Facebook friends or your Twitter followers. The more people that run ClamAV for Windows, the better the protection. Every time a ClamAV for Windows user encounters a new threat, all other users are protected from that same threat in real-time.

So, now that you've installed ClamAV for Windows and run a FlashScan, you are looking at the Scan tab. The results of the scan you just performed are displayed on the left-hand side, and on the right-hand side you have Scan Options. Leave them set to "on" so that future scans look at running processes and at locations where malware can hide in order to run every time you turn your computer on.
flashscan
Pic.2: FlashScan
Under the "Settings" tab, you can choose to turn off some of the layers of protection that the software provides. Unless you have a good reason to do that, I recommend you keep everything set to "on".
settings
Pic.3: Settings tab
Under the "History" scan, you can review the different scans that were performed on the computer.
history
Pic.4: History tab
Finally, the "Summary" tab give you an overview of how many people are using the product as well as how many threats the ClamAV for Windows community is protected from thanks to the power of the cloud.
summary
Pic.5: Summary tab
The video below shows you the kind of nasty things you might encounter. On a completely clean computer, I visited a link that prompted me to download an executable called gb5339.exe. While you will hopefully not purposely visit a known bad URL, keep in mind that your computer could have automatically downloaded and executed this file via a drive-by download (that's when a bad guy takes advantage of a vulnerability in your browser to force actions on your computer simply because you visited an infected web page), or through social engineering (e.g., you get a spoofed email that appears to come from a known person asking you to download the attached executable and run it...and you do). You can see in the video that shortly after running gb5339.exe, the background image changes to show "You are infected" in big red letters. Furthermore, a fake/rogue/bogus piece of antivirus software is loaded and reports that I have infected files on my computer. Again, I had a fresh installation of Windows XP; there were no infected files on my computer. The fake antivirus program's goal is to scare me into believing that I am infected so that I purchase a license for the software that will supposedly fix my problems. Good thing I didn't fall for that, and neither should you.

Ransomware in action on a PC



Repeating the experiment with a clean computer and a fresh installation of Windows XP, but now with ClamAV for Windows installed, gb5339.exe is blocked as soon as I try to copy it to my hard drive (this is called blocking the file "on-access").

Ransomware being detected and its actions blocked by ClamAV for Windows

Monday, June 21, 2010

Defenders of the Faith

Quite recently, Tavis Ormandy released a 0-day vulnerability in a prominent piece of software. For this transgression, both he and his employer received a good deal of bad press. Sadly, very few in the professional security researcher crowd made enough noise about this; to the contrary, one man in particular came down squarely against him. Thankfully, however, we still have Brad Spengler. Last night he posted what none of us had the courage to say. You can find this post on the Daily Dave mailing list archives: http://seclists.org/dailydave/2010/q2/58

I won't rehash the post, I'd very much rather you read it yourselves. But I would like to point out the timeline.

June 5) Tavis contacts Microsoft requesting a 60 day patch timeframe.

June 5-9) Tavis and Microsoft argue about the patch timeframe and are unable to come to an agreement.

June 9) Tavis releases the information to the public.

June 11) Microsoft releases an automated FixIt solution.

Tavis did not "give Microsoft 5 days to patch the bug" as was said by various media outlets.

As a few people (@dinodaizovi, @weldpond) have pointed out, this strikes at the heart of the term "Responsible Disclosure". A clever branding trick by software vendors, the term automatically implies that any other method of disclosure is irresponsible. So we must ask: were the actions that Tavis took responsible? Would it have been more responsible to allow a company to sit on a serious bug for an extended period of time? The bugs we are discussing are APT-quality bugs. Disclosing them removes ammunition from APT attackers. If your goal is to stop attacks, where bugs are the supply chain of attacks, you must make bug and exploit creation prohibitively expensive as compared to the return on that investment. This is why OS mitigations are helpful. Removing high-value bugs from the marketplace is what full disclosure is good at.

I'd like to explicitly debunk a couple of myths related to this issue now.

Myth 1) Targets are a commodity. (All targets carry the same value)

At some point, the security posture of common software is no longer about your mother's Windows XP desktop with a CRT monitor from 8 years back. It is not about the money wasted when sales people's laptops need to be reimaged. It is about real security. It is about the financial information of your public company. It is about the plans for Marine 1 ending up in the hands of people who shouldn't have them. It is about the stability of our power grid.

This is because when a vulnerability becomes public it is no longer as useful for serious attackers. Defense companies provide detection and prevention mechanisms, researchers provide useful mitigations, and high-end companies are able to arm their response teams with the information necessary to protect their particular environments. The companies with high-value data that are regularly attacked are able to proactively protect themselves. The attackers who have spent significant time evaluating a company's vulnerability with regard to a particular bug will now find that bug to be much less useful for a stealthy attack. Yes, you may see an uptick in attacks, but you see a downtick in overall target value. The loss due to a 20+ company exploit spree such as "Aurora" is significantly greater than the monetary loss due to low-end compromises which can be cleaned with off-the-shelf anti-virus tools. No one is persistently using advanced exploitation techniques against low-value targets such as Joe's Desktop. These attacks are focused on large corporations, government, and military targets with the goals of industrial espionage and military superiority.

Myth 2) Only Tavis knew about the bug

The media asks, "how could attackers know about this flaw if Tavis hadn't released it?" Every bug hunter knows this statement is ridiculous. Security research, like all scientific research, moves like a flock of birds. I'm relatively sure that Leibniz wasn't spying on Newton's work, but they both developed calculus at the same time. They both had the same environment and the same problem to solve, so they developed the same working solution. I'm sure I'm not the only researcher to have lost bugs to another researcher's reporting. Within the past year I have lost several bugs which on the market would have sold for in excess of $65,000. At the point at which the bugs became public, their value dropped to approximately $0 because companies are able to build protections against the vulnerabilities. The bugs that I lost were bugs that had lived for more than 5 years, yet they were discovered independently by myself and others within months. Even if no one else had found the bug, there are other ways an attacker could become aware of it. It would be unreasonable to assume that high-end researchers and their companies are not the targets of espionage. The value of their research is high, and if an attacker can get a free exploit and know that it won't be patched in the next 60 days, that is a win for the attacker. It is unreasonable to assume that a bug is not known to attackers once it is found by a researcher. Tavis has protected high-value targets by refusing to allow an unreasonable timeline for patching. Tavis has devalued the vulnerability by letting companies know about a threat that they otherwise would have been unaware of. Tavis has acted responsibly.

The long and short of this is that when only a handful of people have information, that information is very valuable and very useful. When everyone has this information, everyone can use it, but its value decreases significantly. Tavis simply devalued this flaw. Yes, what Tavis did means you might have to reimage your mother's computer when you visit at Thanksgiving. But also, what Tavis did means that you won't think twice about whether or not the power will be on when you get there. Despite branding, what Tavis did was responsible. In this case, "responsible disclosure" wouldn't have been responsible.

Thursday, June 17, 2010

Rule Release for Today - June 17th, 2010

As a result of ongoing research, the Sourcefire VRT has added multiple rules in the dos, exploit, ftp, mysql, policy, rpc, specific-threats, spyware-put, web-activex, web-client, web-misc and web-php rule sets to provide coverage for emerging threats from these technologies.

For a complete list of new and modified rules please see:

http://www.snort.org/vrt/docs/ruleset_changelogs/changes-2010-06-17.html

 

Tuesday, June 15, 2010

National Cyber-Security Emergency and Phenomenal Cosmic Power or Lieberman -- EARN IT

So…you’re at the bar and across the room you see this incredible [insert whatever floats your boat here]. You spend an inappropriate amount of your time watching this person and your mind starts to fill in the details that the dark environment masks.  Then they turn around, walk towards the bar and (finally!) walk into enough light that you can see what they look like.  Your first thought…“KILL IT WITH FIRE!”

This is a lot like how I felt as I read through the “Protecting Cyberspace as a National Asset Act of 2010” (pdf), a 199 page piece of legislation introduced by Senator Lieberman (I-CT) along with Senator Susan Collins (R-ME) and Senator Thomas Carper (D-DE).  It’s worth noting, in reviewing the legislation, that Susan Collins and Joe Lieberman are the ranking members of the Senate Committee on Homeland Security and Governmental Affairs for their respective parties (with Joe Lieberman counting as a Democrat for the purposes of committees).

This is an impressive, expansive and ambitious piece of legislation, completely reworking the Federal government’s management of cyber security issues.  There are a lot of things in the bill that I think are necessary.  Of course, as you’ve probably seen by this point, there are a couple of issues that..erm..have “opportunity for improvement”.

First up is the creation of the Office of Cyberspace Policy within the Office of the President.  There is little in our world today that is as poorly managed, rapidly changing and outright dangerous as “cyberspace”.  Having an apparatus at the level of the White House that manages these issues from a strategic point of view is important.  It is this office that would be tasked with creating a “national strategy to increase the security and resiliency of cyberspace”.  It is also the first place (page 9) you notice the incredible breadth of changes in the bill.

The Director of Cyberspace Policy is tasked with, to paraphrase, overseeing all policies and activities of the Federal Government across “all instruments of national power” to ensure the security and resiliency of cyberspace.  The act specifically cites diplomatic, economic, military, intelligence, law enforcement and homeland security activities and also calls for the management of “offensive activities, defensive activities and other policies and activities necessary to ensure effective capabilities to operate in cyberspace”.  So while it is organized for “Protecting Cyberspace”, the options available to ensure cyberspace is available are…well, everything, including utilizing the NSA and Cyber Command’s offensive capabilities to keep the peace.  This office operates at the highest executive level, and the capability of every tool available, even offensive ones, needs to be understood.

Next, the National Center for Cybersecurity and Communications.  This is where a lot of the good work of this bill, in my opinion, happens.  The most important duty is called out specifically as belonging to the Director of the NCCC: “sharing and integrating classified and unclassified information, including information relating to threats, vulnerabilities, traffic, trends, incidents and other anomalous activities”.  This determination to improve Government/Private sector communication comes into play again in the section defining the responsibilities of the US CERT.  The information isn’t limited to domestic sources either, with the bill specifically calling for the Secretary of Defense, the Director of National Intelligence, the Secretary of State and the Attorney General to develop “information sharing pilot programs with international partners of the United States”.

The communication thing is critically important.  This game is hard enough without having as much information as possible to base your defensive posture on.  One of the common complaints from the private sector (who run 80% of the “Critical Infrastructure” of the U.S.) is the difficulty in getting actionable information out of the Government.  The recently released “High-Impact, Low-Frequency Event Risk to the North American Bulk Power System” report from the North American Electric Reliability Corporation calls out several times that “focus should be given to improving the timely dissemination of information concerning impending threats and specific vulnerabilities”, going on to say that “more effort is needed to appropriately de-classify information needed by the private sector”.

From the perspective of incident response, there is another important new service provided by the DHS.  The DHS will, at the request of critical infrastructure operators and provided it has sufficient resources, assist the operator in complying with mandatory security and emergency measures (yes, we’ll get to this…), as well as, through the US CERT, “respond to assistance requests from…owners or operators of the national information infrastructure to…isolate, mitigate or remediate incidents”.

Now…you might have noticed that CERT is doing a lot of useful things, from serving as a central point for information to acting as a cyber guardian angel ready to assist the most important components of the national information infrastructure in defending themselves from attack.  But there are some strings that come with this.  Those entities deemed to be “covered critical infrastructure” are required to report any cyber security issue that might indicate an actual or potential cyber vulnerability or exploitation of a cyber vulnerability.  And the DHS gets to decide the procedures to enable that reporting.  So if you’re a critical infrastructure operator…you are starting to get a little uncomfortable here, no matter how many disclaimers about the protection of information are placed into the bill.

Then you look at Section 248: “Cyber Vulnerabilities to Covered Critical Infrastructure”.  Between this and Section 250: “Enforcement”, the DHS is granted near unlimited authority to deliver requirements to critical infrastructure providers on handling security threats.  In short, DHS can deliver a mandate that a certain security issue be addressed, and a set of mitigations to be used.  Now, in an exceptionally rare, well thought out approach to this mandate (and a shout out to Richard Clarke and the open-ended mandate crowd), the bill allows for the DHS to accept alternate mitigations provided by the operator if the DHS determines they are adequate.  These requirements, as you can guess by the name of Section 250, come with a “civil penalty” if providers fail to address these issues.

My inner Libertarian gets pretty spooked when it comes to this kind of thing.  But, to refer back to NERC’s HILF document, market forces seem to dictate doing the exact wrong thing when it comes to security:

“The increased use of IP networks for Supervisory Control and Data Acquisition (SCADA) and other operational control systems, in particular, creates potential vulnerabilities. Executives with SCADA/ICS responsibilities reported high levels of connections of those systems to IP networks including the Internet—even as they acknowledged that such connections create security issues.” --(pg. 31, NERC HILF, Cyber Vulnerability)

Since NERC hasn’t been able to fix this, and the Department of Energy and Federal Energy Regulatory Commission apparently are unable to deliver the regulations necessary to fix it, maybe this is the only way to address these issues. When you declare that an electric grid is a system “so vital to the United States that the incapacity or destruction of such…would have a debilitating impact on security, national economic security….” maybe you should keep the damn thing off the Internet. (I'm going to say this more than once, just so you know).  It seems so obvious to every security professional I talk to and to NERC itself.  Clearly they won’t self regulate here, so maybe this is the answer. (Note that I understand that this act targets “National Critical Information Infrastructure”, but the market and privacy concerns in the information infrastructure are 10 times worse, yet we haven't even addressed the "easy" (for some value of easy) case).

Then, finally, we get to the section that drives everyone nuts (you know, the kill-it-with-fire part): Section 249: National Cyber Emergencies.  In short, the DHS has the authority, when the President declares a Cyber Emergency, to “develop and coordinate emergency measures or actions necessary to preserve the reliable operation and mitigate or remediate the consequences”.  What this means is that in a “Cyber Emergency”, the DHS can do anything it feels necessary to the critical infrastructure systems of the U.S. and can mobilize the entirety of the Federal Government, provided the DHS does not “supersede the authority of the Secretary of Defense, the Attorney General or the Director of National Intelligence in responding to a national cyber emergency”.

Yeah, this is a good time to panic.  I think we’ve amply demonstrated over the last decade that even when a President is restricted by law his actions can be…aggressive, and this essentially hands over to the executive branch the complete control of the nation’s critical infrastructure.  It doesn’t matter that there are hoops to jump through; the authority and the broad power that this bill allows for is simply unacceptable.  Further, we’ve absolutely avoided holding any high-level political figure accountable for his or her actions (did you just say Scooter Libby? Stop it…) as they relate to violations on the restriction of powers.  We just don’t do it.

Also, I've never had a great deal of respect for anyone that comes to me in a panic about some issue when they've failed to do the things already in their power to address it themselves.  There is regulatory power already vested in a number of Government entities, and they have failed to exercise that power (DOE, FERC, I'm looking at you) to mandate even the most basic of security practices (like not putting our power grid on the Internet).  The list of "Critical Infrastructure" that relies on the Internet is simply unforgivable.  If it's critical, get the damn thing off the Espionage Super-Highway.  What I'm saying here is: don't come to me saying you need broad, unmitigated power to manage a situation because it is so horrible when you have failed utterly to mitigate and reduce the chance that that situation will actually come to fruition.

This clause is glass-house-based rock throwing.  When the Federal Government demonstrates that it can protect itself from cyber attack, when it can stop the terabytes of data flooding out of Government and defense contractors, when it shows that this issue is so important that it is willing to deliver regulation NOW to these critically important organizations, when it has done everything it can to ensure that this power will never need to be used...then, and only then, is it appropriate to discuss this.  Earn it, Senator Lieberman; show me that the Federal Government is willing to do more than just panic after the fact.  (Hello 9/11, Katrina, BP).

All this and I didn’t even get to the part where the Director has “sole, unreviewable discretion” to decide how to address problems and deficiencies related to security issues in “national information infrastructure” or any infrastructure that is “owned, operated, controlled or licensed for use by, or on behalf of, the [DoD] or intelligence community”.  Look…using terms like “sole, unreviewable discretion” just isn’t conducive to a trusting relationship between the public sector and the DHS.  We’re already mad at you about the whole shoe thing anyways.

So here’s the deal, Sen. Lieberman. You’re on the right track here; concentrate on the following:
  1. Ensuring open communications channels between the private sector and the Federal Government.
  2. Ensure an aggressive declassification (within the limits of law and protecting sources, etc…) of threat information so that the private sector can be notified so they can modify their defensive posture.
  3. Build a coordination center that targets not just Federal to Private sector communication, but communications within an industry vertical with the ability to bring in both offensive and defensive experts to assist in mitigations.
  4. Provide an avenue for technical assistance to critical infrastructure organizations so that even organizations without a mature security posture can react in an agile manner to threats.
  5. If market forces don’t move critical infrastructure operators to do right, then fix it.
  6. Prove that you are willing to take the steps necessary to prevent incidents of this magnitude prior to them happening.
  7. Let’s revisit the “Phenomenal Cosmic Power” approach to incident response.  Even if it is scaled back to providing a list of recommended actions backed by an automatic exemption from civil liability if organizations act on them, we cannot simply hand over the infrastructure to the Federal government.
Good luck, Joe…unfortunately, you’re going to need it.

Monday, June 14, 2010

Rule Release for Today - June 14th, 2010

Apple Safari RCE (CVE-2010-1939), Google Chrome GLUG bypass (CVE-2010-1663). Details available here: http://www.snort.org/vrt/advisories/2010/06/14/vrt-rules-2010-06-14.html/

Sourcefire VRT Expansion Plans (We are Hiring)

One of the hardest things in life is finding the right place to work, where you can spend eight to ten hours a day doing something you enjoy and also pay your bills. I’ve been lucky enough in my life to find this type of place three times: HiverWorld, Farm9, and Sourcefire. Each one of these places had a number of attributes that made it appealing to me, and made it a place where I wanted to spend the vast majority of my time. Since I’m lucky (maybe unlucky) enough to be the guy responsible for the Sourcefire VRT, I’ve been able to take all the things that appealed to me about these companies and build a team where the people have all the right personality traits, and the environment has all of the right factors.

If the following 10 things appeal to you and describe the qualities you want in your co-workers and your workplace, then the VRT is interested in talking with you. Please submit your resume on Sourcefire’s website or send us a message at [email protected]

Submit Resume here

  • Passion (for the work) – Very few people are trained academically for vulnerability analysis, malware analysis, network engineering, or hacking. It is something that is learned by experience and experimentation. If you have dedicated your free time and lost countless days and nights perfecting some portion of it then you have the passion I’m talking about.
  • Good people – If you enjoy an environment where everyone around you is better than you at something and is willing to teach you their skill in exchange for your own, then the VRT might be the right place for you.
  • Goals – Clear definitions of strategic goals to the best of my ability and my managers’ abilities. If we can’t clearly explain the “why” then we won’t ask you to waste your time on it.
  • Belief – A group of people that share an intrinsic belief that it is possible to accomplish difficult, if not “currently” impossible, goals. More importantly, this belief is present not because of arrogance, but because of our experience proving that we actually can accomplish these goals.
  • Drive – A personal drive that exceeds the average. If you’ve worked on a problem for many months, still haven’t solved it, but truly believe you will shortly, you are either hard headed or have a lot of drive. Whether you’re pushing yourself by hitting your head on a wall, or just plain never giving up, you will most likely create a positive outcome.
  • Latitude – If you hate rules but understand personal responsibility, this might be the environment for you. You’ll get just enough rope to hang yourself, as long as you take responsibility for your own demise.
  • Trust – An environment where you can trust the people you work with to actually do what they say, do it to the best of their ability, and trust you to do the same.
  • Responsibility – For your actions and your words. If you broke it, you fix it. If you said you would do it, do it.
  • Risk – An environment where you are allowed to take risks in the pursuit of goals. Risk is the potential to fail and without failure there is no opportunity to learn. You will be able to take risks as long as you sign up for the responsibility of failing.
  • Leadership – You expect the people above you to actually lead, and trust them enough to actually follow them.

If these ten things fit your personality, and describe the place you want to work, please see the job description below. When submitting your resume please include either a comment or something in your actual resume that references the fact that you read this post.

Title: Research Analyst

Basic Purpose

This role is primarily responsible for developing Snort rules and other protection mechanisms for Sourcefire products based on information from public and private vulnerability feeds. The researcher will work on a team of analysts that are responsible for rapidly developing the necessary protection methods to protect Sourcefire customers from emerging threats. Research analysts also work with a variety of fuzzing frameworks, exploit development toolkits, and code coverage tools to quickly develop PoC (Proof of Concept) test cases for public vulnerabilities.

Essential Duties and Responsibilities
  • Develop Snort rules, ClamAV signatures, and risk analysis reports for internal review and external customers.
  • Conduct vulnerability analysis and risk assessments on public and private vulnerabilities.
  • Develop PoC test cases for vulnerabilities based on the information provided for triggering the vulnerabilities.
  • Work with fuzzing tools and code coverage tools to develop threat profiles for open and closed source applications.
  • Debug false positives and false negatives in Snort rules and other protection mechanisms.
Essential Education, Skills, and Environment
Education and Work Experience
  • No previous work experience or formal education required.
Required Knowledge and Skills
  • Experience configuring Windows and Linux/UNIX applications.
  • Strong analytical and troubleshooting skills.
  • Experience with TCP/IP and networking in general.
  • Intermediate knowledge of Perl, Python, and/or Ruby.
  • Ability to learn new skills and apply them in a rapidly changing, high-pressure environment.
Preferred Knowledge and Skills
  • Experience with Snort & other network security tools.
  • Experience with network configuration and deployment.
  • Experience with PCRE or equivalent regular expression library.
  • Highly motivated and creative.
Work Conditions
  • Works closely with software reverse engineers and research analysts to quickly develop Snort rules and other protection mechanisms based on the provided vulnerability details.
  • Moderate to high levels of stress will occur at times.
  • Fast paced and rapidly changing environment.
  • Extremely talented and experienced team members and mentors.
  • No special physical requirements.
  • Constant internal training, drinking games, and heated discussions.

Thursday, June 10, 2010

Rule Release for Today, June 10th, 2010

Microsoft Help and Support Center Bypass Vulnerability:

Microsoft Help and Support Center contains a programming error that may allow a remote attacker to bypass security restrictions on an affected system. The error occurs when invalid hex-encoded characters are used as a parameter to a search query using the hcp:// URI scheme.
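The general lesson from this class of bug is that escape decoders should reject invalid input rather than guess at a value. Here is a minimal sketch (illustrative C, unrelated to Microsoft's actual code) of a strict percent-decoder that fails closed on a malformed %-escape:

```c
#include <stddef.h>
#include <string.h>

/* Return the hex value of c, or -1 if c is not a hex digit. */
static int hexval(char c) {
    if (c >= '0' && c <= '9') return c - '0';
    if (c >= 'a' && c <= 'f') return c - 'a' + 10;
    if (c >= 'A' && c <= 'F') return c - 'A' + 10;
    return -1;
}

/* Strict percent-decoder: returns the decoded length, or -1 when a
 * %-escape contains a non-hex character or the output buffer is too
 * small, instead of silently guessing at a decoded value. */
int percent_decode(const char *in, char *out, size_t outlen) {
    size_t o = 0;
    for (size_t i = 0; in[i] != '\0'; i++) {
        if (o + 1 >= outlen) return -1;
        if (in[i] == '%') {
            int hi = hexval(in[i + 1]);
            int lo = hi >= 0 ? hexval(in[i + 2]) : -1;
            if (hi < 0 || lo < 0) return -1;   /* invalid escape: reject */
            out[o++] = (char)(hi * 16 + lo);
            i += 2;
        } else {
            out[o++] = in[i];
        }
    }
    out[o] = '\0';
    return (int)o;
}
```

A decoder that instead mapped invalid characters to some default value would let differently-normalized strings slip past a whitelist check, which is the shape of the bypass described above.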

Changelogs here: http://www.snort.org/vrt/advisories/2010/06/10/vrt-rules-2010-06-10.html/

Tuesday, June 8, 2010

Rule Release for today - June 8th, 2010

Here we are again, Microsoft Tuesday for June 2010. A number of issues this month and rules to provide coverage for attack detection. Main advisory numbers for IDS/IPS coverage are MS10-033, MS10-034, MS10-035, MS10-038, MS10-039 and MS10-041. Check out the advisory and changelog here: http://www.snort.org/vrt/advisories/2010/06/08/vrt-rules-2010-06-08.html/

Monday, June 7, 2010

Single Threaded Data Processing Pipelines and the Intel Architecture

Or,

No Performance for you, go home now.

Today's blog post is a guest appearance by our Benevolent Dictator and Glorious Leader, Marty Roesch.

We asked Marty for his thoughts on threading, performance and processing network data. Here's what we got:

Executive Summary

Performance of processes on current- and next-generation Intel CPUs is closely tied to proper cache utilization. Claims being made regarding Snort’s capability to maximize performance of today’s multi-core platforms are ignorant of the Intel CPU architecture and the steps that can be taken to make it perform on that architecture. Performance of Snort-like packet processing has nothing to do with threading and everything to do with proper load allocation on the available computing resources of a device. Sourcefire has demonstrated that Snort can perform at very high speeds on both single and multi-core machines by virtue of proper configuration and load allocation.

Discussion

There is a lot of FUD being thrown about in the IDS/IPS world regarding single threaded versus multi-threaded packet processing in the Snort detection engine architecture and its impact on top-line performance. The claims being made generally center on the age of Snort’s engine architecture and the appropriate utilization of compute resources on the modern Intel architecture. This paper will analyze the primary claims and provide a technical briefing on the matter at hand.

[Diagram: Intel Core 2 Duo CPU architecture (intelarchitecture.jpg)]

Intel CPU Architecture

One of the first things to understand about the Intel CPU is that it relies heavily upon cache for its performance. When a program is run, its code and data are loaded into system memory and processed by the CPU. Read/write access to system memory is much slower than the CPU can process data through its primary processing logic, so Intel added caching to its CPUs to prevent them from spending most of their time waiting for memory accesses. On the Intel Core 2 Duo architecture shown above there are two caches: an L1 cache, which is very fast and small (due to the expense of making memory that fast), and a much larger L2 cache, which is somewhat slower than L1 but much faster than system memory.

When a program is running, the CPU tries to predict which memory it’s going to need next and loads the L1 and L2 caches appropriately to minimize time spent waiting on memory access. Programs that perform very poorly are frequently inefficient at the cache level, exhibiting a large number of “cache misses”. In these programs the CPU burns so many cycles waiting for the cache to be refilled with needed data that performance suffers. Fast programs are built to take maximum advantage of the cache architecture of the CPU.
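The difference good cache behavior makes is easy to demonstrate. The sketch below (illustrative only, not Snort code) sums the same matrix two ways: the row-major walk touches memory sequentially and stays in cache, while the column-major walk strides across rows and, on a large enough matrix, generates far more cache misses and runs measurably slower.

```c
/* Illustrative cache-behavior sketch: both functions compute the same
 * sum, but with very different memory access patterns. */
#include <stddef.h>

#define ROWS 512
#define COLS 512

long sum_row_major(int m[ROWS][COLS]) {
    long total = 0;
    for (size_t r = 0; r < ROWS; r++)       /* sequential accesses:   */
        for (size_t c = 0; c < COLS; c++)   /* each cache line loaded */
            total += m[r][c];               /* once, used fully       */
    return total;
}

long sum_col_major(int m[ROWS][COLS]) {
    long total = 0;
    for (size_t c = 0; c < COLS; c++)       /* strided accesses: each */
        for (size_t r = 0; r < ROWS; r++)   /* read jumps a full row, */
            total += m[r][c];               /* thrashing the cache    */
    return total;
}
```

Both functions return the same value; only the access pattern, and therefore the cache behavior, differs.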

With today’s multicore CPUs this picture gets more complicated. In a multicore CPU with multiple processes spread across different cores the same rules apply in general. A program with efficient cache attributes will perform better than one that is cache inefficient. The complication comes when the programs become multi-threaded in the multi-core environment.

The idea behind multithreading is to speed up throughput of a process by having multiple simultaneous threads of execution working on multiple pieces of data. A multithreaded process that has one thread stalled waiting for data can execute another thread on another piece of data which maintains high overall throughput in the system. For processes that take maximum advantage of this arrangement there can be substantial performance improvements.

There is a downside, however. Threads that are spread across CPU cores which operate on the same data have to keep their caches synchronized (or, coherent). As shown in the diagram above, there is an L1 cache per core and a shared L2 cache on the Intel Core 2 CPU architecture. This architecture is the same on all current Intel x86 CPUs. When there are two different threads operating on the same data executing across two different cores in this architecture, the L1 caches have to be synchronized with one another essentially for every access across the L2 cache. Boiling it down, every time you access memory (even for a read) you have to spend some clock cycles synchronizing the cache.

The downside gets even worse if the threads are spread across multiple CPU dies (the physical chips themselves). Multi-die systems are very common today; for example, recent Intel 4-core CPUs are really two 2-core dies in a common package. When the threads are running on multiple physical dies and there’s a cache coherency update to keep the local L1/L2 caches synchronized, the updates happen across the main memory bus (or, front side bus). The front side bus is much slower than the CPU cache and it’s also a broadcast bus: all devices that are plugged into it have to look at every message on the bus to figure out if it’s for them or not. Things that are plugged into the bus include all the CPUs on the system and the main memory.
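This cache ping-pong can be illustrated with a small, hypothetical pthreads sketch (not Sourcefire code). Two threads each increment a private counter, so no locking is needed, but if the two counters sat in the same cache line, every increment would force a coherency update between the cores (“false sharing”). Padding each counter out to its own cache line removes the contention:

```c
/* False-sharing avoidance sketch: each counter is padded out to a full
 * cache line so the two threads never touch the same line. */
#include <pthread.h>

#define CACHE_LINE 64
#define ITERS 1000000L

struct padded_counter {
    volatile long value;
    char pad[CACHE_LINE - sizeof(long)];   /* keep counters on separate lines */
};

static struct padded_counter counters[2];

static void *bump(void *arg) {
    struct padded_counter *c = arg;
    for (long i = 0; i < ITERS; i++)
        c->value++;                        /* no lock: counter is private */
    return NULL;
}

long run_two_threads(void) {
    pthread_t t0, t1;
    counters[0].value = counters[1].value = 0;
    pthread_create(&t0, NULL, bump, &counters[0]);
    pthread_create(&t1, NULL, bump, &counters[1]);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    return counters[0].value + counters[1].value;
}
```

With the padding removed (so both counters share one line), the result is still correct, but the run typically takes noticeably longer because every increment triggers a coherency update.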

Looking at the architecture of the Intel CPU, a few things become very clear about writing high-performance code. Threads should access data on their own core only. Accessing a single piece of data across multiple cores has a major performance impact that will not be made up for by increased throughput in a serial packet processing framework like Snort.
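On Linux, keeping a thread's data on a single core starts with pinning the thread itself. A minimal sketch, assuming the GNU `pthread_setaffinity_np` extension (Linux-specific, hence the `_GNU_SOURCE` define):

```c
/* Pin the calling thread to one CPU core (Linux-specific sketch),
 * following the "threads should access data on their own core only"
 * guideline above. */
#define _GNU_SOURCE
#include <sched.h>
#include <pthread.h>

int pin_to_core(int core) {
    cpu_set_t set;
    CPU_ZERO(&set);                        /* start with an empty CPU set */
    CPU_SET(core, &set);                   /* allow only the given core   */
    return pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}
```

Once a worker thread is pinned, the flow state it manages stays resident in that core's L1/L2 caches instead of migrating with the scheduler.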

[Diagram: Snort packet processing architecture (snortarchitecture.jpg)]

Snort Architecture and Design

Snort is a single-threaded, multi-stage packet processing pipeline: it runs on one CPU core, and the data that it processes stays resident on that core and in that cache. Packets arrive off of the network serially and are processed in the order of reception. If the bandwidth being passed by the network interface associated with a Snort instance is greater than it can handle, more instances of Snort can be launched and the traffic can be load balanced across the instances. That is how Sourcefire sensors achieve their high multi-gigabit performance today: a kernel-based load balancing mechanism drives traffic to multiple Snort instances that each run on a single CPU core and can each consume over a gigabit per second of traffic.
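The core of such a load balancer can be sketched in a few lines (hypothetical code, not Sourcefire's actual kernel mechanism): hash the flow's 5-tuple symmetrically, so that both directions of a conversation land on the same Snort instance and each flow's state stays resident in one core's cache.

```c
/* Flow-pinning sketch: map a 5-tuple to one of N Snort instances. */
#include <stdint.h>

#define NUM_INSTANCES 4

/* Symmetric hash: XOR makes A->B and B->A hash identically, so both
 * directions of a flow are handled by the same instance. */
unsigned instance_for_flow(uint32_t src_ip, uint32_t dst_ip,
                           uint16_t src_port, uint16_t dst_port,
                           uint8_t proto) {
    uint32_t h = (src_ip ^ dst_ip)
               ^ ((uint32_t)(src_port ^ dst_port) << 16)
               ^ proto;
    h = (h >> 16) ^ (h & 0xffff);          /* fold to mix both halves */
    return h % NUM_INSTANCES;
}
```

Because the mapping is deterministic and direction-agnostic, no state needs to be shared between instances, which is exactly why the design avoids the cache-coherency costs described earlier.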

This design has some inherent advantages. It is simple and rugged: there are no corner cases that can cause deadlock and freeze the processing pipeline across multiple cores of execution. Because it doesn’t use threading, it doesn’t suffer from the concurrency and locking overhead required to maintain the internal consistency of a multithreaded application, which hinders performance. Putting the load balancing mechanism outside of Snort allows multiple (hardware- or software-based) methods to be utilized to increase aggregate system performance.

In short, today’s Snort architecture is well suited to take advantage of modern Intel CPU design when intelligently paired with load balancing and platform resource management.

The case for Intelligent Multithreading

Multithreading can be useful for several things in an application like Snort:

  1. Presenting a single point of more interactive management for multiple analysis threads with the same configuration.
  2. Sharing information between threads to provide additional detection information.
  3. Load-balancing across threads with a common configuration.

In the first case, a “nice to have” in a given deployment would be a unified interactive management interface to a set of Snort instances for the purposes of managing the configuration and runtime behavior of the overall process. SnortSP implements this idea by providing a shell interface that allows a user to construct a traffic analysis thread from major components (data source, analytics, etc.) and run it against an interface set while maintaining interactive access to the analyzer thread. Any modern IPS implementation that provides the level of functionality that Snort does (i.e. an open source, extensible platform) should have a similar capability built into it.

The second case of information sharing is useful for a number of applications. In a multithreaded instantiation where different threads are performing different tasks (e.g. Snort/RNA) on copies of the same data, there can be very useful data exchange between the threads, such as real-time detection tuning or multi-session attack correlation. The key here is that if two threads are operating on the same data, the data for each thread must reside in independent memory space so that the CPU cache management system doesn’t attempt to keep the caches synchronized. There will be some overhead for the initial buffer creation and copying, but this will be far less than the cache sync overhead.

The third case is one of load balancing traffic across multiple instances of Snort with a common configuration. Functionally this is the same as what is being done today on Sourcefire sensors, except that the load balancing happens inside the process and is made less efficient than the current mechanisms by the synchronization and locking overhead of the thread management system. Given the performance that is seen in the Snort 2.x code base today, this third option is not particularly desirable on the x86 platform.

The Intel architecture lends itself to an optimal application architecture where one thread that runs on a single CPU core processes an individual piece of data and then moves on to the next one. Multiple threads are useful on that CPU core if a single thread would stall the CPU while waiting for data to load or an instruction to execute. Several of the research paths currently being pursued by Sourcefire involve this architecture.

If the top-level worst case of Snort performance is an optimization target, then a new detection engine architecture should be investigated. The current model, which buffers and processes packets, has memory management and processing overhead with worst-case performance implications that are noticeable to the user of the system. A new processing architecture that utilizes in-sequence packet processing via finite state machines (FSMs) and reduces or eliminates buffering could see significant performance gains over the current detection architecture.
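As a toy illustration of the FSM idea (hypothetical code, not a proposed Snort engine), the matcher below consumes a stream one byte at a time, carrying all of its matching state in a single integer, so no payload buffering is required. The naive restart logic is correct here only because the pattern's first byte does not recur in it; a production FSM would precompute proper failure transitions.

```c
/* In-sequence FSM matching sketch: scan for "EVIL" byte by byte. */
#include <stddef.h>

static const char PATTERN[] = "EVIL";

/* Advance the FSM by one byte; returns the new state and sets
 * *matched when the full pattern has been consumed. */
size_t fsm_step(size_t state, char byte, int *matched) {
    if (byte == PATTERN[state]) {
        state++;
        if (PATTERN[state] == '\0') {      /* full pattern consumed */
            *matched = 1;
            state = 0;
        }
    } else {
        /* Naive restart: re-check this byte against the first
         * pattern byte (valid because 'E' appears only at index 0). */
        state = (byte == PATTERN[0]) ? 1 : 0;
    }
    return state;
}

int stream_contains(const char *data, size_t len) {
    size_t state = 0;
    int matched = 0;
    for (size_t i = 0; i < len && !matched; i++)
        state = fsm_step(state, data[i], &matched);
    return matched;
}
```

Because the only state carried between bytes is one small integer, packets can be inspected as they arrive and discarded immediately, which is the buffering reduction the paragraph above describes.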

What Not To Do

An architecture that should be avoided at all costs is a threading system that spreads the computational load of a single piece of data across multiple CPU cores. This approach will maximize cache misses on any single core and require continuous reloading of the cache across the multiple CPU cores that are involved in data processing as well as constant cache synchronization. An architecture that implements this mechanism will have very bad worst case performance and its best case performance will be far below what can be achieved per core on a single threaded application performing the same tasks.

Conclusion

In this paper the architectures of the Intel CPU and of Snort were explored, as well as the architecture of multithreaded applications and their interaction with the Intel caching model. An analysis of different cases for multithreading Snort-like applications was also performed. Despite the real-world performance claims of one architecture versus another, it can be shown that the Snort 2.x architecture is highly optimized for today’s CPUs when paired with an intelligent load balancing mechanism.