Analyzing SiLK Netflow data visually

Deep statistical visibility into your network traffic is a fundamental requirement for any serious, security-conscious enterprise. While direct packet capture remains the best primary source for deriving these statistical insights, NetFlow and its cousins (J-Flow, IPFIX, sFlow) are the easiest to deploy and give you the biggest bang for the buck.

SiLK (an acronym for System for Internet Level Knowledge) is a suite of open-source Linux tools for the collection, storage, and analysis of flow data. SiLK is created by the NetSA group at US-CERT.

SiLK tools fall into two categories: storage and analysis. The main tool for receiving, compressing, and storing flow records (called packing in SiLK terminology) is rwflowpack. The main tool for querying is rwfilter. These are very flexible command-line tools that follow the typical Unix idiom of composability: you can pipe the outputs of various commands together and build your own query tools. In the hands of a skilled analyst they can be incredibly powerful.
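For example, a typical pipeline, sketched here with file1.17 standing in for one of your packed SiLK files (as later in this post), that reports the top 10 destination IPs by byte count for TCP traffic:

    # partition TCP flows from a packed file, then rank destinations by bytes
    rwfilter --proto=6 --pass=stdout file1.17 \
      | rwstats --fields=dip --values=bytes --count=10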

However, I found some things quite hard to do in SiLK:

  1. Analyze multiple things at once. SiLK follows a query-response model that goes back to the primary data source for each analysis item.
  2. Visualize time-series metrics. The query results are usually in terms of flows, aggregates, or top-N lists. You can use rwcount to generate some basic time-binned stats, as in the sketch after this list.
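A sketch of that time binning, again using an explicit input file:

    # bytes, packets, and flows in 5-minute (300-second) bins
    rwfilter --proto=6 --pass=stdout file1.17 | rwcount --bin-size=300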

Trisul is a free, real-time streaming analytics platform that can supply some of the missing pieces to SiLK, such as:

  1. Time-series. Trisul extracts hundreds of metrics from the flow data and stores them as time-series. See some examples below.
  2. Single-pass analysis. Like all streaming platforms, Trisul needs to look at the flow data just once; various algorithms extract and store all the data in a compact format. The raw flows are also packed, indexed, and stored for ad-hoc querying.
  3. Powerful user interface. This is probably the most important addition. You get access to dozens of dashboards and Trisul's second-order metrics such as “Number of Active Flows”, “Flow Creation Rate”, and cardinality counters such as “Unique Hosts per Port”.

Without disturbing an existing live SiLK deployment and toolchain, we can use the SiLK tool rwcat to stream binary flow records to Trisul. Since Trisul has a fully customizable inputfilter LUA API, we attach it to the output of rwcat as shown below.

SiLK rwcat to export records

Up and running

Here are quick steps to get it working.

  1. Install Trisul using apt-get or yum. It is free, no signups required. The limitation of the free version is that you can only store a maximum 3-day window of data.
  2. Create a new analysis context to hold the data. To do that, type trisulctl_probe create context silk11
  3. Download the following two LUA files from Github (trisul-scripts) into a directory, say /tmp
    1. flowinput.lua : the helper library that processes flow-like data into Trisul
    2. silk.lua : reads binary SiLK records from the named pipe
  4. Create a named pipe: mkfifo /tmp/silkpipe. This is the connector pipe.
  5. Run rwcat over your packed files and write to the pipe: rwcat --ipv4-output --compression=none file1.17 -o /tmp/silkpipe. Currently the script only handles IPv4, so we specify the --ipv4-output flag. Replace file1.17 with your own list of SiLK dump files. At this point rwcat will appear to hang because there is no one at the other end of the pipe yet.
  6. Run trisulctl_probe importlua /tmp/silk.lua /tmp/silkpipe context=silk11
  7. Wait for the process to complete; you can tail the log to check progress. Type trisulctl_probe; then, once inside the CLI, tail the probe log: log silk11@probe0 log=ns tail

So the whole command line looks like this.

On terminal 1:

    rwcat --ipv4-output --compression=none file1.17 -o /tmp/silkpipe

On terminal 2:

    trisulctl_probe importlua /tmp/silk.lua /tmp/silkpipe context=silk11

When the process completes, you can log on to the web interface to view the various dashboards, access the results, query flows, and conduct further analysis.

Dashboards inside Trisul

Give it a shot and let us know how it works for you. We should be able to support IPv6 flows via rwcat too; we just don't currently have a use for it. Let us know in the comments section if you need that support.

How silk.lua works

Trisul has a full-featured LUA API that allows both the packet pipeline and the analytics pipeline to be programmed. There are about 16 different script types which let you do everything from handling packets, reassembled flows, and reconstructed HTTP files to processing metrics streams. Check out the Trisul LUA API Docs for an overview. One of the script types is the inputfilter. An inputfilter script allows you to drive the Trisul input from a LUA script, so we arrange for it to read from the named pipe.

We use the incredibly cool LuaJIT FFI interface to process the binary records from rwcat, extract the required flow fields, load them into a LUA table, and use the helper library flowinput.lua to push the metrics into Trisul.

The relevant FFI pattern from silk.lua is sketched below: declare the C struct from rwrec.h with ffi.cdef, then overlay it on the raw bytes read from the pipe. The struct shown here is a simplified stand-in; the real silk.lua uses the full record layout from SiLK's rwrec.h.
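    local ffi = require("ffi")

    -- Simplified stand-in for the record struct: the real silk.lua takes
    -- the layout from SiLK's rwrec.h, which has more fields. Names and
    -- sizes here are illustrative assumptions.
    ffi.cdef[[
    typedef struct rwrec_sketch {
        uint32_t sip;      /* source IPv4 address      */
        uint32_t dip;      /* destination IPv4 address */
        uint16_t sport;    /* source port              */
        uint16_t dport;    /* destination port         */
        uint32_t pkts;     /* packet count             */
        uint32_t bytes;    /* byte count               */
        uint8_t  proto;    /* IP protocol              */
    } rwrec_sketch_t;
    ]]

    local RECSZ = ffi.sizeof("rwrec_sketch_t")
    local pipe  = assert(io.open("/tmp/silkpipe", "rb"))

    -- note: real rwcat output starts with a SiLK file header; skipping
    -- it is elided in this sketch
    while true do
        local buf = pipe:read(RECSZ)
        if buf == nil or #buf < RECSZ then break end
        -- overlay the C struct on the raw bytes: no hand-written parsing
        local rec = ffi.cast("const rwrec_sketch_t *",
                             ffi.cast("const char *", buf))
        local flow = {
            sip   = rec.sip,   dip   = rec.dip,
            sport = rec.sport, dport = rec.dport,
            proto = rec.proto, pkts  = rec.pkts, bytes = rec.bytes,
        }
        -- hand 'flow' to the flowinput.lua helper, which pushes the
        -- metrics into Trisul (call elided; see the scripts on Github)
    end
    pipe:close()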


Conclusion

If you are running SiLK and would like to try a new way of analyzing the data you already have in your dump files, try this script and see how Trisul can give you a very different perspective.


Download Trisul Network Analytics for free today. Installing it is as easy as apt-get or yum.

Happy SiLKing!

How to detect SHA1-signed certificates on your network

A few colleagues at work upgraded their Google Chrome browser to Chrome 57 today, and some of them were surprised that they could no longer open many websites, including Gmail.

Blocked!

If you look at the error message, it says NET:ERR_CERT_WEAK_SIGNATURE_ALGORITHM. This is a clear indicator that we are dealing with the use of the SHA1 algorithm to sign certificates. If you click on the text that says WEAK_SIGNATURE, you get a dump of the certificate chain. You can then copy-paste that into a file and run openssl x509 -in err.pem -inform pem -text to print out the cert. When we did this, we found that in each failure case at least one of the certificates in the chain had the following line: Signature Algorithm: sha1WithRSAEncryption

Google has been trying hard to phase out the use of SHA1 and move to SHA2 (aka SHA256) for a long time now. With the latest release of Google Chrome (Rel 57) they seem to have really tightened the noose. It appears that with Chrome 57, even SHA1 certs that chain to a locally installed “trust anchor” will not be honored. See the following explanation from Google Security.

Starting with Chrome 54 we provide the EnableSha1ForLocalAnchors policy that allows certificates which chain to a locally installed trust anchor to be used after support has otherwise been removed from Chrome. Features which require a secure origin, such as geolocation, will continue to work, however pages will be displayed as “neutral, lacking security”. Without this policy set, SHA-1 certificates that chain to locally installed roots will not be trusted starting with Chrome 57, which will be released to the stable channel in March 2017.

Google Security Blog


Since we have Trisul running 24×7 on our office network, I decided to look into the issue from a network viewpoint. Trisul saves all SSL certs as full-text documents (FTS) in the same format as the output of “openssl x509 …”. This makes it easy to search old traffic, or more importantly, to script detection and analysis as we will see shortly.

From the SSL FTS we found that for the failing Chrome 57 browsers the cert chain looked like this.

The fail cases have 3 certs

The first line is the Subject name, the second line is the Issuer, and the last line is the Signature Algorithm.

and for browsers that worked fine, we found 2 certs in the chain.


In the fail cases, GeoTrust Global CA is an intermediate CA and Equifax is the root CA. Equifax signed GeoTrust using SHA1 and therefore Chrome rejects it. In the success cases, GeoTrust Global CA happens to be the root CA and it uses SHA2.
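Schematically, reconstructed from the discussion above (the leaf's exact subject is omitted):

    Fail case (3 certs):
      server cert         <- signed by GeoTrust Global CA  (SHA2)
      GeoTrust Global CA  <- signed by Equifax Secure CA   (SHA1)  <- Chrome 57 rejects this link
      Equifax Secure CA      (root, trusted by public key)

    Success case (2 certs):
      server cert         <- signed by GeoTrust Global CA  (SHA2)
      GeoTrust Global CA     (root, trusted by public key)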

The SHA1 rules for Chrome are:

  • SHA1 can be used by the root CA for self-signing; the signature on the root CA's own certificate doesn't matter because browsers trust roots by their public key.
  • When any certificate in the chain except the root is signed with SHA1, Chrome 57 and above will block it. One way to check a chain you already have on disk is sketched below.
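One way to see where SHA1 appears in a chain, sketched here on the assumption that chain.pem holds the concatenated PEM certs, is to print every cert in the bundle and grep out the signature algorithms:

    # print every cert in the bundle, then pull out the algorithm lines
    # (each cert reports its algorithm twice: in the body and at the end)
    openssl crl2pkcs7 -nocrl -certfile chain.pem \
      | openssl pkcs7 -print_certs -text -noout \
      | grep "Signature Algorithm"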

While I have yet to get my Chrome 57 to open any Google site, I tried adding the Chrome policy “EnableSha1ForLocalAnchors” as mentioned in the Google Security Blog, without luck. I have a support ticket open and will update this post when I figure out how to solve this.
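For reference, a sketch of how that policy is set on Linux: Google Chrome reads managed policies from JSON files under /etc/opt/chrome/policies/managed/ (verify the path and policy name against the Chrome policy documentation for your platform):

    # assumption: managed-policy directory for Google Chrome on Linux
    sudo mkdir -p /etc/opt/chrome/policies/managed
    echo '{ "EnableSha1ForLocalAnchors": true }' \
      | sudo tee /etc/opt/chrome/policies/managed/enable_sha1.json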

UPDATE: SOLVED – see below

Alert when you see usage of SHA1 signing on your network

After this, I decided to write a little script to alert whenever a SHA1-signed certificate is seen in our network traffic. Using the Trisul LUA API, this kind of detection is super easy. The script is available on Github at detect_sha1.lua.

Since we installed the script on our office perimeter, Trisul has been generating alerts whenever SHA1 is detected. Surprisingly, most websites (Facebook, Twitter, Github, many banking sites) seem to have moved to SHA2, so the alert volume is low.

The following screenshot shows the highlighted sha1WithRSAEncryption signing algorithm in the alert.

We developed Trisul Network Analytics 6.0 to be a platform on which you can build various applications using a general-purpose language like LUA, loosely coupled from the actual protocol fields. For scripts like these, the flow is:

  1. Decide what you want to work off; in this case, SSL certificates.
  2. Plug into that stream.
  3. Use standard text manipulation to get the data, because Trisul documents are in a standardized canonical format.
  4. Feed something back into the Trisul analytics stream as the result of your script.

In this script the real action takes place in only one line: a plain-text match on the certificate document. A minimal sketch of the idea follows (the surrounding callback and alert plumbing are elided; see detect_sha1.lua on Github for the real shape):
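    -- Minimal sketch of the detection idea in detect_sha1.lua;
    -- 'cert_text' stands for the full-text certificate document that
    -- Trisul hands to the script.
    local function is_sha1_signed(cert_text)
        -- the one line that matters: a plain-text search of the
        -- canonical "openssl x509 -text" style document
        return cert_text:find("Signature Algorithm: sha1WithRSAEncryption",
                              1, true) ~= nil
    end

    -- when it hits, the real script feeds an alert back into the Trisul
    -- analytics stream (alert API elided; see detect_sha1.lua)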


What to do with this alert?

One concern with this kind of deep Network Security Monitoring (NSM) action that generates an alert is: now what? Our analysts now have extra alerts to take care of!

After turning on SHA1 detection, we have about 190+ alerts of this kind. These alerts are tagged Medium/Low priority and give us an idea of exactly how much SHA1 is going on. It could be an annoyance if each alert had to be handled in some way. In those cases, you can modify the same Trisul LUA script a bit to turn the alert into a metric. When you turn alerts into a metric:

  • You will be able to answer: how many SHA1-signed certs did I see over time?
  • The trade-off: you will no longer be able to see the individual alerts.

Hope these techniques are useful to security pros, particularly those obsessed with visibility and metrics.

Get Trisul, the LUA API documentation, and the Github samples at trisulnsm. Trisul is totally free for a rolling 3-day window and works on Ubuntu 14.04, 16.04, and RHEL/CentOS 7.x.


— UPDATE 26-Mar-2017 —

SOLVED:

I was able to solve the issue and get Chrome 57 working again by untrusting “Equifax Secure CA”. See my answer at the Chrome Help Forum.

Building ZeroMQ with libsodium: “No package libsodium found”

Are you trying to build ZeroMQ with libsodium? You may run into an issue while configuring ZeroMQ where it doesn't detect that you have already installed libsodium.

The real issue is that the default libsodium install from source puts the package in /usr/local/, and ZeroMQ doesn't pick it up.

There are a lot of answers on the internet suggesting --with-libsodium=/usr/local. This will not work, because the ZeroMQ configure script uses pkg-config (a tool that reports information about installed packages) to detect libs. pkg-config depends on finding a file called libsodium.pc (<package-name>.pc) in a number of well-known directories.
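You can ask pkg-config itself which directories it searches by default (output varies by distro):

    pkg-config --variable pc_path pkg-config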

Do this to fix the issue.

First, check whether pkg-config is indeed picking up libsodium; --modversion is a quick probe, and this is the failure you will see:
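    $ pkg-config --modversion libsodium
    Package libsodium was not found in the pkg-config search path.
    Perhaps you should add the directory containing `libsodium.pc'
    to the PKG_CONFIG_PATH environment variable
    No package 'libsodium' found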

It is not picking it up. Add /usr/local/lib/pkgconfig, which is where a source install of libsodium drops libsodium.pc, to the pkg-config search path:

export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

Now try finding libsodium again; this time it should print the installed version (the version shown below is illustrative):
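    $ pkg-config --modversion libsodium
    1.0.12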

Now re-run the ZeroMQ configure; it should find libsodium and you are on your way to CURVE heaven.

Hope this helps a few people. We spent quite some time trying to hack the ZeroMQ autoconf process!