Detection is the process of collecting, identifying, validating, and escalating suspicious
events. Detection has traditionally been the heart of the rationale for deploying IDSs, yet
too many resources have been devoted to the identification problem and too few to
validation and escalation. This section is a vendor-neutral examination of detecting
intrusions using NSM principles.
As mentioned, detection requires four phases.
1. Collection: The process begins with all traffic. Once the sensor performs collection, it
outputs observed traffic to the analyst. With respect to full content collection, the data
is a subset of all the traffic the sensor sees. Regarding other sorts of NSM data (session,
statistical, alert), the data represents certain aspects of the traffic seen by the sensor.
2. Identification: The analyst performs identification on the observed traffic, judging it
to be normal, suspicious, or malicious. This process sends events to the next stage.
3. Validation: The analyst categorizes the events into one of several incident categories.
Validation produces indications and warnings.
4. Escalation: The analyst forwards incidents to decision makers. Incidents contain
actionable intelligence that something malicious has been detected.
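The four phases can be sketched as a simple pipeline. Everything in the sketch below is invented for illustration: the packet records, the keyword-based verdicts, and the incident categories. In practice identification and validation are analyst judgments, not string matches.

```python
# Illustrative sketch of the four NSM detection phases.
# All names and categories here are hypothetical.

def collect(all_traffic, capture_filter):
    """Collection: the sensor records a subset -- the observed traffic."""
    return [pkt for pkt in all_traffic if capture_filter(pkt)]

def identify(observed):
    """Identification: judge traffic normal, suspicious, or malicious."""
    events = []
    for pkt in observed:
        verdict = ("malicious" if "exploit" in pkt
                   else "suspicious" if "scan" in pkt
                   else "normal")
        if verdict != "normal":
            events.append((verdict, pkt))
    return events

def validate(events):
    """Validation: categorize events, producing indications and warnings."""
    return [{"category": "unauthorized-access" if verdict == "malicious"
                         else "reconnaissance",
             "event": pkt}
            for verdict, pkt in events]

def escalate(incidents):
    """Escalation: forward actionable incidents to decision makers."""
    return [i for i in incidents if i["category"] == "unauthorized-access"]

traffic = ["normal web", "scan of port 22", "exploit against IIS"]
incidents = escalate(validate(identify(collect(traffic, lambda p: True))))
print(incidents)
```

The point of the sketch is the narrowing at each stage: all traffic becomes observed traffic, observed traffic becomes events, events become categorized incidents, and only actionable incidents reach decision makers.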
Collection involves accessing traffic for purposes of inspection and storage. Chapter 2
discussed these issues extensively. Managers are reminded to procure the most capable
hardware their budgets allow. Thankfully the preferred operating systems for NSM operations,
such as the BSDs and Linux, run on a variety of older equipment. In this respect
they outperform Windows-based alternatives, although it’s worth remembering that
Windows NT 4 can run on a system with 32MB of RAM.9 Nevertheless, few sensors collect
everything that passes by, nor should they. The subset of traffic a sensor actually sees
and records is called observed traffic.
Not discussed in Chapter 2 was the issue of testing an organization’s collection strategy.
It’s extremely important to ensure that your collection device sees the traffic it
should. IDS community stars like Ron Gula and Marcus Ranum have stressed this reality
for the past decade. Common collection problems include the following:
• Misconfiguration or misapplication of filters or rules to eliminate undesirable events
• Deployment on links exceeding the sensor’s capacity
• Combining equipment without understanding the underlying technology
Any one of these problems results in missed events. For example, an engineer could
write a filter that ignores potentially damaging traffic in the hope of reducing the
amount of undesirable traffic processed by the sensor. Consider the following scenario:
cable modem users see large amounts of ARP traffic.
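The danger of filtering away "noise" can be shown with a toy example. The records and the blanket ignore-ARP rule below are hypothetical; the point is that a filter written to suppress routine ARP chatter also suppresses ARP-based attacks, such as a spoofed gratuitous reply.

```python
# Hypothetical packet records: an over-broad "ignore ARP" filter
# also discards an ARP spoofing attempt hidden in the noise.
packets = [
    {"proto": "arp", "note": "who-has 192.168.1.1"},                   # routine chatter
    {"proto": "arp", "note": "gratuitous reply, gateway MAC changed"}, # spoofing attempt
    {"proto": "tcp", "note": "http get"},
]

def keep(pkt):
    # The engineer's shortcut: drop all ARP to cut noise.
    return pkt["proto"] != "arp"

observed = [p for p in packets if keep(p)]
# The spoofed gratuitous reply never reaches the analyst.
print(observed)
```

A safer approach is to keep the traffic but summarize it (for example, as statistical data) rather than exclude it from collection entirely, so the evidence survives if an ARP-based attack occurs.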
Deployment of underpowered hardware on high-bandwidth links is a common problem.
Several organizations test IDSs under various network loads and attack scenarios.
• Neohapsis provides the Open Security Evaluation Criteria (OSEC) at http://
• ICSA Labs, a division of TruSecure, offers criteria for testing IDSs at http://
• The NSS Group provides free and paid-only reviews at http://www.nss.co.uk/.
• Talisker’s site, while not reviewing products per se, categorizes them at http://
The Information Assurance Technical Framework (IATF) Forum is organized by the National Security Agency (NSA) to foster discussion among developers and users of digital security products. The federal government is heavily represented. I
attended as a security vendor with Foundstone. The October meeting
focused on Protection Profiles (PPs) for IDSs.12 According to the Common Criteria,
a PP is “an implementation-independent statement of security requirements
that is shown to address threats that exist in a specified environment.”13 According
to the National Institute of Standards and Technology (NIST) Computer Security
Resource Center (http://csrc.nist.gov/) Web site, the Common Criteria for IT
Security Evaluation is “a Common Language to Express Common Needs.”14
Unfortunately, many people at the IATF noted that the IDS PP doesn’t require a
product to be able to detect intrusions. Products evaluated against the PPs are
listed at http://niap.nist.gov/cc-scheme/ValidatedProducts.html.
This process seems driven by the National Information Assurance Partnership
(NIAP, at http://niap.nist.gov/), a joint NIST-NSA group “designed to meet the
security testing, evaluation, and assessment needs of both information technology
(IT) producers and consumers.”15 The people who validate products appear to be
part of the NIAP Common Criteria Evaluation and Validation Scheme (CCEVS)
Validation Body, a group jointly managed by NIST and NSA.16
I haven’t figured out how all of this works. For example, I don’t know how the
Evaluation Assurance Levels like “EAL4” fit in.17 I do know that companies trying to
get a product through this process can spend “half a million dollars” and 15+ months,
according to speakers at the IATF Forum. Is this better security? I don’t know yet.
Beyond issues with filters and high traffic loads, it’s important to deploy equipment
properly. I see too many posts to mailing lists describing tap outputs connected to hubs.
With a sensor connected to the hub, analysts think they're collecting traffic. Unfortunately,
when both tap outputs feed a hub, frames collide, and the hub does not retransmit them;
all such a sensor collects is proof that traffic was lost. (We discussed this in Chapter 3.)
I highly recommend integrating NSM collection testing with independent audits, vulnerability
scanning, and penetration testing. If your NSM operation doesn’t light up like
a Christmas tree while an auditor or assessor is at work, something is wrong.
Using the NSM data to validate an assessment is also a way to ensure that the assessors
are doing worthwhile work.
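Checking NSM data against an assessor's activity can be as simple as asking how much of the session record involves the assessor's source address. The session-record format and addresses below are made up for illustration; a real deployment would query whatever session data the sensor stores.

```python
# Hypothetical session records; a real deployment would read these
# from a session-data tool's output.
sessions = [
    {"src": "198.51.100.7", "dst_port": 22},
    {"src": "198.51.100.7", "dst_port": 80},
    {"src": "203.0.113.9",  "dst_port": 443},
]

ASSESSOR = "198.51.100.7"  # the assessor's declared source address
touched = sorted({s["dst_port"] for s in sessions if s["src"] == ASSESSOR})
print(f"{ASSESSOR} touched {len(touched)} ports: {touched}")
```

If a "penetration test" supposedly covering the whole network shows up in the session data as a handful of connections from one address, the NSM data has measured the quality of the assessment, exactly as in the bank anecdote that follows.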
Once while doing commercial monitoring I watched an “auditor” assess our client. He
charged them thousands of dollars for a “penetration test.” Our client complained that we
didn’t report on the auditor’s activities. Because we collected every single packet entering
and leaving the small bank’s network, we reviewed our data for signs of penetration testing.
All we found was a single Nmap scan from the auditor’s home IP address. Based on
our findings, our client agreed not to hire that consultant for additional work.