Argus is an open source network flow accounting and auditing tool, used to support network security management and network forensics.
Argus can easily be adapted to serve as a network activity monitoring system, answering a variety of activity questions (such as bandwidth utilization). It can also be used to track network performance through the stack, and to capture higher-level protocol data.
Argus processes packet data and generates summary network flow data. If you have packets and want to know something about what's going on, argus() is a great way of looking at aspects of the data that you can't readily get from packet analyzers. How many hosts are talking? Who is talking to whom, and how often? Is one address sending all the traffic? Are they doing the bad thing? Argus is designed to generate network flow status information that can answer these and many more questions that you might have.
Argus netflow data can be used in forensic investigations several months, or even years, after an incident has taken place. Argus' netflow records offer up to a 10,000:1 reduction from the captured packets to the records written to disk, which allows installations to retain records far longer than full packet captures. When network security is critical, non-repudiation becomes an important requirement that must be provided throughout the network. Argus provides the basic data needed to establish a centralized network activity audit system. If done properly, this system can account for all network activity in and out of an enclave, providing the foundation needed to ensure that someone can't deny having done something in the network.
If you're running argus for the first few times, get a packet file from one of the IP packet repositories, such as pcapr, and process it with argus(). Once you have both the server and client programs and a packet file, run:
argus -r packet.pcap -w packet.argus
ra -r packet.argus
Anonymity in network data is a big topic when you consider sharing data for research or collaboration. There are laws in many countries against disclosing personal information, and many corporate, educational and governmental organizations are concerned about disclosing information about the architecture, organization and functions of their networks and information systems. But sharing data is critical for getting things done, so we intend to provide useful mechanisms for anonymity of flow data.
The strategy that we take with argus data anonymization is to preserve the information needed to convey the value of the data, and either change or throw away everything else. Because data sharing isn't always a life-or-death issue, not all uses of anonymization require 'perfect secrecy' or 'totally defendable' results. If you require that level of protection, use ranonymize() with care and thought. We believe that, with these tools, you can achieve practical levels of anonymity and still retain useful data.
The argus-client program that performs anonymization is ranonymize(). This program has a very complex configuration, as there are a lot of things that need to be considered when sharing data for any and all purposes. A sample configuration file can be found in the argus-clients distribution in ./support/Config/ranonymize.conf. This file describes each configuration variable and provides detail on what it is designed to do and how to use it. Grab this file and give it a read if you want to do something very clever.
By default ranonymize() will anonymize network addresses, protocol-specific port numbers, timestamps, transaction reference numbers, TCP base sequence numbers, IP identifiers (ip_id), and any record sequence numbers. How it does that is described below. By default, you will get great anonymization. Great, but not "perfect", in that there are theoretical behavioral analytics that can "reverse engineer" the identifiers, if someone has an understanding of even just a subset of the flow data. If you need a greater level of anonymization, you will need to either "strip" some of the data elements, such as jitter and the IP attributes data elements, and/or use the configuration file to specify additional anonymization strategies.
Once you have anonymized your data, use ra() to print out all the fields in your resulting argus data, using the ra.print.all.conf configuration file in the ./support/Config directory, to see what data is left over. If you see something you don’t like, run ranonymize again over the data with a ranonymize.conf file to deal with the specific item.
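The two steps above can be sketched as a pair of commands. File names here are placeholders; the "-r", "-w" and "-F" options are the standard argus-client options for reading records, writing records, and naming a configuration file, so check the man pages for your version.

```shell
# anonymize with the default strategies (addresses, ports, timestamps, etc.)
ranonymize -r packet.argus -w packet.anon.argus

# print every field of the anonymized records, using the print-all config,
# to review what information is left over
ra -F ./support/Config/ra.print.all.conf -r packet.anon.argus
```

If the printed output still contains something you don't want to share, this is the point to re-run ranonymize with a ranonymize.conf that handles the specific item.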
Once the argus-clients distribution has been linked to a suitable flow-tools library, reading flow-tools data is just a matter of specifying the flow data type in the "-r" option. By writing the file out, the flow-tools data is converted to argus flow data.
% ra -r ft:flow-tools-data.file -w argus.file - src host 184.108.40.206
Analysing Network Streams
Argus can run in an end-system, auditing all the network traffic that the host generates and receives, and it can run as a stand-alone probe in promiscuous mode, auditing a packet stream that is being captured and transmitted to one of the system's network interfaces. This is how most universities and enterprises use argus: monitoring a port-mirrored stream of packets to audit all the traffic between the enterprise and the Internet. The data is collected to another machine using radium(), and then stored in what we describe as an argus archive, or in a MySQL database. From there, the data is available for forensic analysis, or anything else you may want to do with it, such as performance analysis or operational network management.
Once you have both the server and client programs installed, this usually works:
argus -P 561 -d
argus will open the first interface it finds (just like tcpdump), process packets and make its data available via port 561, running as a background daemon.
You can access the data using ratop(), the tool of choice for browsing argus data, like so:
ratop -S localhost:561
Graph Data Generation
A graph that can answer our question is a frequency distribution of the durations of the IP addresses in our IPHost table. This is very easy to generate using the programs rasql() and rahisto(). I've been running rasqlinsert() for a week, so I'll just graph the whole table. A week is a good initial time period for the study, so let's generate a log frequency distribution of the durations of each unique IP address, with 50 bins, ranging from 0.0001 to 1,000,000 seconds (1 week is about 604,800 seconds).
For this graph, we want to see just the number of IP addresses that fall into a particular bin. To get that simple number, we need to remove the AGR DSR that each flow record contains. If we don't remove the AGR DSR, we'll get the number of argus records that were merged to create each row in the MySQL database, rather than a simple count. This may seem complicated, but the more you use these tools, the more sense they make. OK, the incantation below will give us the data we need for the graph.
rasql -r mysql://root@localhost/ratop/etherHost -w - | rahisto -M dsrs="-agr" -H dur 50L:0.0001-1000000 - pkts gt 1
So, we just dump the database table into rahisto(), processing only records that have more than 1 packet in them (so we count flows that actually have a duration). We'll take the output of rahisto() and use it to generate the graph below. Actual output is included at the bottom of this page.
So this is a very interesting result. Basically, we've got 3 populations of IP associations from this network. We could call them:
1) “infrequent” associations which only existed in the range of 0.01 – 5.0 seconds
2) “transient” associations that lasted from 5 – 4600 seconds in this study, and then
3) "persistent" associations that lasted between 10,000 – 603,749 seconds (basically a week).
Network Geolocation Visualizations
Here's a snapshot of an ongoing Argus Project effort to develop native Mac OS X (Snow Leopard) applications for near real-time situational awareness. In about 3 – 4 months, we should have much of this available as open source code. We use argus data (of course), Cocoa, and OpenGL to build interactive visualizations. This framework provides us with a 3D environment for visualizing argus() data. If you have any ideas, interest, whatever, please send email to the argus developers list.
This specific application attaches to an argus() data source that contains lat/lon geolocation labels for the IPv4 addresses. The app holds a 120-second (configurable, of course) cache of the data records it receives, and then aggregates the data to generate the list of individual IP addresses, along with their lat/lon descriptors. The app displays little push-pins for IP addresses, based on their lat/lon values, on an interactive globe that provides some detail, so you can zoom in and out. This display is reading data for the real-time network activity seen at QoSient WHQ (world headquarters), but it can also read files, so visualizing historical data is very simple.
This screen is fully interactive: you can rotate and zoom, the push-pins are selectable, etc. There are "hot keys" to turn on/off the visibility of the clouds, the earth, the flow table and the push-pins. Any suggestions as to what would be cool for this application would be most appreciated. My next step is to show instantaneous load/rate along paths between two nodes, so hopefully that will only take a few days to do.