The Practice of Network Security Monitoring

Some Quotes from the Author with my Notes, Thoughts, and the Occasional Opinion

Book Cover

Book Title: The Practice of Network Security Monitoring
Author: Richard Bejtlich
Purchased: Yes
Digital or Physical: Both

Chapter One - Network Security Monitoring Rationale

Richard Bejtlich

"We need to recognize that incident response, broadly defined, should be a continuous business process, not an ad hoc, intermittent, information technology (IT)-centric activity."

Richard Bejtlich

"Time is the key factor in this strategy because intruders rarely execute their entire mission in the course of a few minutes, or even hours. In fact, the most sophisticated intruders seek to gain persistence in target networks--that is, hang around for months or years at a time. Even less advanced adversaries take minutes, hours, or even days to achieve their goals. The point is that this window of time, from initial unauthorized access to ultimate mission accomplishment, gives defenders an opportunity to detect, respond to, and contain intruders before they can finish the job they came to do. After all, if adversaries gain unauthorized access to an organization's computers, but can't get the data they needed before defenders remove them, then what did they really achieve?"

Richard Bejtlich

"NSM is not a blocking, filtering, or denying technology. It is a strategy backed by tactics that focus on visibility, not control."

Richard Bejtlich

"An organization that makes visibility a priority, manned by personnel able to take advantage of that visibility, can be extremely hostile to persistent adversaries. When faced with the right kind of data, tools, and skills, an adversary eventually loses. As long as the CIRT can disrupt the intruder before he accomplishes his mission, the enterprise wins."

The Range of NSM Data

"When security analysts work with full content data, they generally review it in two stages. They begin by looking at a summary of that data, represented by "headers" on the traffic. Then they inspect some individual packets."

  • Full Content - making exact copies of the traffic seen on the wire.
  • Extracted Content - refers to high-level data streams--such as files, images, and media--transferred between computers.
  • Session Data - a record of the conversation between two network nodes.
  • Transactional Data - similar to session data, except that it focuses on understanding the requests and replies exchanged between two network devices.
  • Statistical Data - describes the traffic resulting from various aspects of an activity.
  • Metadata - data about data.
  • Alert Data - reflects whether traffic triggers an alert on an NSM tool.

Key Definitions by the Author Richard Bejtlich

Network Security Monitoring (NSM)

NSM is threat-centric, meaning adversaries are the focus of the NSM operation. Continuous Monitoring (CM), by contrast, is vulnerability-centric, focusing on configuration and software weaknesses.

Retrospective security analysis (RSA)

Applying newly discovered threat intelligence to previously collected data in hopes of finding intruders who evaded earlier detection.

Chapter Two - Collecting Network Traffic: Access, Storage, and Management

I don't have a lot of notes for this chapter because of my networking background. I also recently completed the SANS 572 course and exam taught by Phil Hagen. The course covers all of the information provided in Chapter Two.

Storage Estimate for full packet capture

Hard drive storage for one day = Average network utilization in Mbps x 1 byte/8bits x 60 seconds/minute x 60 minutes/hour x 24 hours/day

Example: If your network's average utilization of a 1Gbps link is 100Mbps, the formula works out as follows:

100Mbps x 1 byte/8 bits x 60 seconds/minute x 60 minutes/hour x 24 hours/day = 1,080,000MB per day or 1.08TB per day
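The arithmetic above is easy to script. A minimal shell sketch (my addition, not from the book), taking the average utilization in Mbps:

```shell
# Daily full-packet-capture storage estimate: Mbps -> MB/day.
# (Mbps / 8 bits-per-byte) gives MB/s; multiply by 86,400 seconds/day.
mbps=100
mb_per_day=$(( mbps * 60 * 60 * 24 / 8 ))
echo "${mb_per_day} MB per day"   # 1080000 MB, i.e. ~1.08 TB
```

Swap in your own average utilization for mbps to size a sensor's storage.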

Ten NSM Platform Management Recommendations by the Author Richard Bejtlich

  • Limit command shell access to the system to only those administrators who truly need it. Analysts should log into the sensor directly only in an emergency. Instead, they should access it through tools that allow them to issue commands or retrieve data from the sensor.
  • Administrators should never share the root account, and should never log into sensors as root. If possible, access the sensor using shared keys, or use a two-factor or two-step authentication system like Google Authenticator.
  • Always administer the sensor over a secure communication channel like OpenSSH.
  • Do not centrally administer the sensor's accounts using the same system that manages normal IT or user assets.
  • Always equip production sensors with remote access cards.
  • Assume the sensor is responsible for defending itself. Limit exposure of services on the sensor, and keep all services up-to-date.
  • Export logs from the sensor to another platform so that its status can be remotely monitored and assessed.
  • If possible, put the sensor's management interface on a private network reserved for management only.
  • If possible, use full disk encryption to protect data on the sensor when it is shut down.
  • Create and implement a plan to keep the sensor software up-to-date. Treat the system like an appliance, but maintain it as a defensible platform.
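A couple of the recommendations above (no direct root logins, no password authentication) can be spot-checked mechanically. This is a minimal sketch of such a check, not something from the book; the check_sshd helper and the settings it greps for are my own assumptions about a hardened sshd_config:

```shell
# Spot-check an sshd_config file for two hardening settings from the
# recommendations: no direct root login, no password authentication.
# check_sshd is a hypothetical helper, not a Security Onion tool.
check_sshd() {
    local cfg="$1"
    grep -qx 'PermitRootLogin no' "$cfg" &&
    grep -qx 'PasswordAuthentication no' "$cfg"
}

# Demo against a temporary file standing in for /etc/ssh/sshd_config:
cfg=$(mktemp)
printf 'PermitRootLogin no\nPasswordAuthentication no\n' > "$cfg"
check_sshd "$cfg" && echo "hardened"
rm -f "$cfg"
```

On a real sensor you would point it at /etc/ssh/sshd_config instead of a temp file.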

Chapter Three - Stand-alone NSM Deployment and Installation

Security Onion

Deployment Modes

Security Onion supports two deployment modes:

  1. Stand-alone mode - a self-contained, single-box solution that collects and presents data to analysts.
  2. Server-plus-sensor mode - acts as a distributed platform, with sensors collecting data and a server aggregating and presenting the data to analysts.

Download and Installation

Welcome to the Security Onion Installation Guide

Post-Installation Guide

Verify all NSM services are running:

sudo service nsm status

If any services are not running, try starting them:

sudo service nsm start
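The two commands above combine naturally into a check-then-start pattern. Here is a generic sketch (mine, not from the book), with the status/start runner passed in as arguments so the function isn't hardwired to the service nsm wrapper:

```shell
# Run "<cmd...> status"; if that fails, run "<cmd...> start".
# On a Security Onion sensor the call would be:
#   ensure_running sudo service nsm
ensure_running() {
    "$@" status > /dev/null 2>&1 || "$@" start
}
```
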

The following information was not listed in the book; however, I found it extremely useful to create an Analyst VM instead of accessing the sensor directly for analysis. I used the following instructions to create mine using Ubuntu 16.04.3 LTS:

Chapter Four - Distributed Deployment

Security Onion Server Considerations by the Author Richard Bejtlich

  • An SO server operates a central MySQL database to which all SO sensors transmit session data. The aggregate session data is a key factor when considering RAM and hard drive requirements for the SO server.
  • An SO sensor stores network traffic as pcap files. The SO sensor stores this data locally until it’s copied to the SO server. This locally stored data is a key factor when considering hard drive requirements for the SO sensor.

Chapter Five - Security Onion Platform Housekeeping

Limiting Access to Security Onion

As of securityonion-setup - 20120912-0ubuntu0securityonion201, Setup now defaults to only opening port 22 in the firewall. If you need to connect OSSEC agents, syslog devices, or analyst VMs, you can run the new so-allow utility, which will walk you through creating firewall rules to allow these devices to connect:

sudo so-allow

Chapter Six - Command Line Packet Analysis Tools

Security Onion Tool Categories

Data Presentation Tools

These tools expose NSM information to analysts in two ways: via the command line or through a graphical interface.

Packet Analysis Tools

NSM Console

Data Collection Tools

Data Delivery Tools


I'm not an expert with tcpdump, but I've used it enough that I've already applied much of the book's how-to material at some point. Still, there were a few flags/commands I hadn't seen before, or at least don't remember. 🙂

I was not aware of the -c flag, which tells Tcpdump how many packets to capture:

sudo tcpdump -n -i eth1 -c 5

The -tttt flag tells Tcpdump to show timestamps as YYYY-MM-DD HH:MM:SS with fractional seconds:

$ tcpdump -n -tttt -e -XX -r icmp.pcap 'icmp [icmptype] = icmp-echoreply' and dst host

When searching for indicators of compromise in network traffic, you may want to search every file in a directory. You can use Tcpdump and a BPF modifier to hone your output. For example, Example 6-14 looks through all files for traffic involving a particular host and TCP, thanks to a for loop and the find command:

$ for i in `find /nsm/sensor_data/sademo-eth1/dailylogs/ -type f`; do tcpdump -n -c 1 -r $i host and tcp; done


Imagine you want to search traffic for Simple Mail Transfer Protocol (SMTP) commands. You could use the smtp.req.command display filter:

$ tshark -t ad -r smtp.pcap -R 'smtp.req.command'

Looping through data to find HTTP traffic:

$ for i in `find /nsm/sensor_data/sademo-eth1/dailylogs/2013-02-17/ -type f`; do echo $i; tshark -t ad -r $i -R 'http.user_agent contains "curl" and http.request.method == GET'; done

Searching for a range of IP addresses with a Tshark display filter:

$ tshark -t ad -r /nsm/sensor_data/sademo-eth1/dailylogs/2013-02-17/snort.log.1361107364 -R 'ip.dst >= and ip.dst <= and not tcp and not udp'

Argus Racluster

Argus is a session data generation and analysis suite, and Ra is its client for reading data.

Argus Racluster output for port 21:

$ racluster -n -r 2013-02-10.log - tcp and dst port 21 -s stime saddr sport daddr dport sbytes dbytes

Using Racluster to look for UDP traffic while ignoring port 53, port 123, and host:

$ racluster -F /tmp/ra.conf -n -r 2014-02-10.log 2013-02-16.log 2014-02-17.log - udp and not \(port 53 or port 123 or host\) -m saddr daddr -s stime:20 saddr sport daddr dport sbytes dbytes

Using Racluster with as the source IP address and as the destination net block:

$ racluster -F /tmp/ra.conf -n -r 2014-02-10.log 2013-02-16.log 2014-02-17.log - src host and dst net and udp and not \(port 53 or port 123 or host\) -s stime:20 saddr sport daddr dport sbytes dbytes

Chapter Seven - Graphical Packet Analysis Tools


Xplico is most often used against a saved trace file to extract and interpret interesting content and is managed via a web browser.

Chapter Eight - NSM Consoles


Sguil is one of the main applications packaged with SO. Its components collect, store, and present data that other SO tools use, and certain applications rely on Sguil’s authentication database.

Sguil’s Six Key Functions:

  • Performs simple aggregation of similar alert data records.
  • Makes certain types of metadata, and related data, readily available.
  • Allows queries and review of alert data.
  • Permits queries and review of session data.
  • Provides a right-click menu that lets you pivot, or move, from either of those two categories of data to full content data, rendered as text in a transcript, in a protocol analyzer like Wireshark, or in a network forensic tool like NetworkMiner.
  • Exposes features so analysts can count and classify events, thereby enabling escalation and other incident response decisions.