Changelog

Tue Sep 30 23:00:00 EDT 2014

Network String Development Release 0.15

Due to a naming conflict with certain commercial products, a network utility program had to be renamed. Since that was going on anyway, it was renumbered and labeled a development release so it could get pushed out. netstr-0.15 is a collection of small network tools put together to complement the network toolkit. The tools are modules that are called at run time and managed by the netstr main program. The modules are:

  • scan: simple, small IPv4 port scanner
  • scan6: single-port IPv6 scanner
  • passive: passive IPv4 port watcher and recorder
  • tcpdump: mini tcpdump-style sniffer
  • arpsniff: watches for ARP traffic

Invoking netstr is similar to the dnet utility:

$ ./netstr                                                                        
Usage: netstr <command> <args> ...
netstr scan --ping --conn --dgram --port n-N --time s.ms \
            --extra -V {target}
netstr scan6 --dgram --port N {ipv6addr}
netstr passive --if dev --threshold n --polls count \
               --extra --no-verify {pcap-expr}
netstr tcpdump --if dev --polls count --decode {pcap-expr}
netstr arpsniff --if dev --polls count --decode {pcap-expr}
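
Under the hood this sort of front end is typically just a table mapping command words onto module entry points. A minimal sketch of the idea in C++ (the module names match the list above, but the signatures and stub bodies are illustrative assumptions, not the actual netstr source):

#include <cstdio>
#include <cstring>

typedef int (*mod_main_t)(int, char **);

/* Stub entry points; in a real tree each module lives in its own file. */
static int scan_main(int, char **)     { puts("scan module");     return 0; }
static int scan6_main(int, char **)    { puts("scan6 module");    return 0; }
static int passive_main(int, char **)  { puts("passive module");  return 0; }
static int tcpdump_main(int, char **)  { puts("tcpdump module");  return 0; }
static int arpsniff_main(int, char **) { puts("arpsniff module"); return 0; }

static const struct mod {
    const char *name;
    mod_main_t  entry;
} mods[] = {
    { "scan",     scan_main     },
    { "scan6",    scan6_main    },
    { "passive",  passive_main  },
    { "tcpdump",  tcpdump_main  },
    { "arpsniff", arpsniff_main },
};

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "Usage: netstr <command> <args> ...\n");
        return 1;
    }
    /* Hand the remaining arguments to the module named by argv[1]. */
    for (const auto &m : mods)
        if (strcmp(argv[1], m.name) == 0)
            return m.entry(argc - 1, argv + 1);
    fprintf(stderr, "netstr: unknown command '%s'\n", argv[1]);
    return 1;
}

The nice property of this layout is that adding a tool is one new source file plus one new row in the table.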

Please note that netstr is experimental and has only recently come under active development again. Your mileage may vary ... a lot.

Download netstr dr15

Mon Aug 4 19:06:28 EDT 2014

MySQL Status Page Check using Nagios Part 1

Nagios can check anything anyone is willing to write it to check. In other words, if there is a way to reap results then Nagios can act on those results, whether they are strings, numbers or some combination thereof. This two-part series goes over setting up a very rudimentary MySQL status page check using common tools found on a BSD-Unix, Unix or Linux system (and if not, they are generally easy enough to install). This first part goes over requisites, assumptions and the status pages themselves. The second part is the Nagios end of things and, of course, the "other cool stuff" the creative mind can do with it all.

Text

Fri Feb 14 21:43:49 EST 2014

pwutils-0.6 Available

The single line print format for pwutils never worked right. Well, now it does. The pwutils collection is a set of very small programs written in C, Perl, Python and Bash that do, among other things:

  • A userinfo printout similar to that of BSD systems
  • A group report
  • Various user reports
  • A front end, pwutil, kinda sorta like the pw utility

Should build and run on almost any Unix/Linux/BSD system.

Coding Download

Sat Jan 25 17:13:34 EST 2014

Leetness of Simplicity

I was thinking about doing another article about the X Window System when I realized not much has changed on the surface. So instead, to tide us over until I finish the current article, I snagged a few old-school simple window manager pictures from Xwinman.

[screenshots: four old-school window managers, courtesy of Xwinman]

Sat Oct 26 18:53:41 EDT 2013

Using Payloads to Probe UDP Ports

With no lubricant! A few years ago I was involved in an effort to move the payloads that were embedded in the Nmap code (and hence compiled into the executable) out to a file. I learned a lot, especially that I am a lousy C++ coder (my work basically had to be rewritten from scratch... but it was still fun!). I did learn one thing, though: maps in C++ are really friggin cool, and if I were a C++ programmer I would probably use them every chance I got. They kinda sorta remind me of anonymous hashes in Perl ... but not exactly the same. Regardless, here is a short text on why we did it and an overview of how it works:

Text

Wed Aug 28 12:01:55 EDT 2013

Vignette Effects using The GiMP

Ever wanted to process your own photos so they look older (for some strange reason)? A quick, down-and-dirty how-to on adding vignette and edge shading effects to images using the GNU Image Manipulation Program, or GIMP. Enjoy, have fun and if you find mistakes... I might fix them!

Text

Thu Jan 31 09:56:06 EST 2013

Wicked Small Reverse Lookup Script

Here is a very tiny Perl script that can do a nicely formatted reverse lookup of a class C-sized subnet. It does not have to be an actual class C; I just restricted it because where I work we always subnet down to /24s (254 usable addresses per network).

Gotchas
  • It uses the BIND host command, so you need the BIND utilities installed. Or just change it...
  • You need to customize the @subnets array with your networks.
  • It looks for the string dhcp- because where I work we use that as the host part of dynamic DNS names.
  • This is a scummy, filthy hack... but useful....
The Code
#!/usr/bin/perl
@subnets=("192.168.1.","192.168.2.","192.168.3.");
my (@total, @dupes, @dhcp); # Total, Duplicates, dhcp- entries
$NETINDEX=0; # index to the current subnet we are messing with
foreach (@subnets) {
  ($subnet, $i) = ($_, 1);
  until ($i == 255) {
    @resolver_string= `host $subnet$i|grep -v not|awk \'\{print \$5\}\'`;
    if (@resolver_string) {
      $total[$NETINDEX]++;
      $n_entries = scalar(@resolver_string);
      $dupes[$NETINDEX]++ if ($n_entries > 1);

      foreach (@resolver_string) {
        $dhcp[$NETINDEX]++ if (m/dhcp/);
      }

      if ($n_entries > 1) { # more than one entry: print them all on one line
        print "$subnet$i ";
        foreach my $entry (@resolver_string) {
          chomp($entry);
          print "$entry " if ($entry);
        }
        print "\n";
      } else {
        print "$subnet$i @resolver_string";
      }
    }
    $i++;
  }
  $NETINDEX++;
}

$NETINDEX = 0; # reset the subnet index for printing ...
foreach (@subnets) { # Print out the totals
  $assigned = ($total[$NETINDEX] - $dhcp[$NETINDEX]);
  $available = (253 - $total[$NETINDEX]);
  print "\nSubnet $_"; # atomic print below... they *are* faster ....
  print "0\nDNS Total:\t$total[$NETINDEX]\nAssigned:\t$assigned 
Available:\t$available\nMultiple:\t$dupes[$NETINDEX]
DHCPaddr:\t$dhcp[$NETINDEX]\n";
  $NETINDEX++;
}
print "\n";


Wed Dec 26 13:55:13 EST 2012

Systhread Moving to a Quarterly Format

This year has been very busy for me on many fronts; both my personal and professional life have been extraordinarily full. Coupled with those factors, I have been having serious trouble coming up with good new material for the site. I have been debating putting the site into a sort of topical index mode on the front page until I can think of some good stuff to do. Instead, I am going to attempt to switch to a quarterly format. That means each quarter I might update or add a program or write a new text - that sort of thing. Regardless, the site isn't dead; its author is quite alive, in fact...

While I cannot go into too much detail about my personal life, I can speak to what I have been up to outside of hacking. I returned to surfing and skateboarding in late 2010 and that has changed a lot of my life. I still enjoy hacking when I get the chance, but my work is mostly design and implementation now and a lot less scripting and programming. I am working on changing that; I prefer a balance instead of one overriding the other. So what have I been implementing? High Performance Compute, open source virtualization and soon (well, hopefully) an internal cloud of some sort. When the latter is on course I am going to attempt to switch focus to a rather sophisticated monitoring system. The monitoring system will take a lot of programming and scripting - so lots of fun stuff should come out of that...

Tue Sep 20 21:46:26 EDT 2011

Wrapping a Program with Scripts and Libs

Ever have to run a program with a variety of options over and over again? If you're a Unix, Linux, BSD, Mac etc. programmer and/or sysadmin then... yes, you have. The key to success of course is my favorite sysadmin attribute: laziness. In this text, a look at one simple cron wrapper and a Perl library script wrapper.

Text

Tue Jun 21 20:12:47 EDT 2011

C Program with Registered Modules: dnet

Many programs come with modules that can be registered and loaded. Some are compiled in, while others are precompiled and can be loaded on demand (several operating system kernels come to mind that have such a capability). In this text, an example of a program that allows a module to be written and compiled into the program with relative ease. The example program is the dnet test program, which ships with Dug Song's libdnet.
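
The flip side of the dispatch-table sketch in the netstr entry above is what a new module looks like from the module author's side. Roughly, and purely as an illustrative sketch (hypothetical names, not dnet's actual interface):

#include <cstdio>

/* mymod.c (sketch): a module is just a file exporting a main-like hook. */
static void usage(void)
{
    fprintf(stderr, "Usage: dnet mymod <args> ...\n");
}

int mymod_main(int argc, char *argv[])
{
    if (argc < 2) {
        usage();
        return 1;
    }
    printf("mymod: doing something with %s\n", argv[1]);
    return 0;
}

/* Registering it would then be one added row in the front end's table,
 * something like: { "mymod", mymod_main }. Here we just call it
 * directly so the sketch runs standalone. */
int main(int argc, char *argv[])
{
    return mymod_main(argc, argv);
}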

Text

Sat Mar 26 18:16:12 EDT 2011

Nagios Configuration Auto Generation Script

Ever had to set up Nagios monitoring for a group of very similar systems? Say, perhaps, high performance compute nodes? Well, I have. And being a lazy system admin, I decided that instead of making N changes to the config file I would prefer to simply autogenerate the configurations. Ideally, one might use a base configuration file. Of course, even that was too much work for me; I just jammed it into two shell scripts. Regardless, here is a simple method for quickly generating Nagios configurations that should scale quite nicely.
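
The generation loop really can be that small. As an illustration in C++ (the author's version is two shell scripts; the node names, addresses and host template here are hypothetical):

#include <cstdio>

int main()
{
    /* Emit one Nagios host definition per compute node, node01..node32.
     * Adjust the template, names and addresses to match the local setup. */
    for (int i = 1; i <= 32; i++) {
        printf("define host {\n");
        printf("    use        generic-host\n");
        printf("    host_name  node%02d\n", i);
        printf("    address    192.168.100.%d\n", i);
        printf("}\n\n");
    }
    return 0;
}

Redirect the output into the Nagios objects directory and the N-changes problem goes away.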

Text

Sat Jan 1 09:07:15 EST 2011

RAD Infrastructure

Taken from Wikipedia, software prototyping is:

Software prototyping,
refers to the activity of creating prototypes of software applications, i.e., incomplete versions of the software program being developed. It is an activity that occurs during certain software development and is comparable to prototyping as known from other fields, such as mechanical engineering or manufacturing.

While rapid application development is:

Rapid Application Development (RAD)
refers to a type of software development methodology that uses minimal planning in favor of rapid prototyping. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster, and makes it easier to change requirements.

Can these same methods be applied to infrastructure? Or does infrastructure always have to be engineered? The real answer, of course (as usual in my essays), is: it depends. Instead of conjecturing about when it might work, this text looks at three examples: one where it did not work, one where it kind of worked until it went off the rails, and one where it worked like a champ.

Text

Jasonrfink.com

After several years of procrastination I finally sat down and created a personal website. Okay, in reality I was bored on a snowy winter day, but either way it did finally get done. I don't think the two or three longtime readers of this site will learn anything new. So if you are bored out of your skull please do feel free to visit my personal site to help burn away what would otherwise be productive milliseconds.

Link

Sun Nov 7 14:41:47 EST 2010

Building a Program from Core Data Structures

In Eric Raymond's "The Art of Unix Programming", within the section called "The Basics of Unix Philosophy", there is a rule quoted from Rob Pike:

Rule 5. Data dominates. If you've chosen the right data structures and organized things well, the algorithms will almost always be self-evident. Data structures, not algorithms, are central to programming.

At face value Rob's rule number 5 makes sense. But what is Rob actually saying? In a complex software system it might be difficult to track down and identify how the functions evolved around the data. So why not use a microscopic example instead? Taking a small program, a passive network scanner, from data structures to operations on those data structures illustrates Rob's rule number 5 perfectly. This was an interesting experience from my perspective, as most of the programs and scripts I have written deal with transitory data - by which I mean simply find it, operate on it and/or print it, then move on. Not an unusual trait in system administration centric programs. While working on a passive scanner that could also verify a port I witnessed rule number 5 occur right before my fingertips.
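
To see the rule in miniature, here is roughly the shape the core structures of such a passive scanner might take (a hypothetical sketch, not the actual netreconn code):

#include <ctime>
#include <map>
#include <string>

struct port_record {
    std::string proto;            // "tcp" or "udp"
    unsigned    hits = 0;         // times this port has been seen
    bool        verified = false; // set by a connect-back check
};

struct host_record {
    std::map<unsigned short, port_record> ports; // keyed by port number
    time_t first_seen = 0;
    time_t last_seen  = 0;
};

// Hosts keyed by printable address. Once the data is laid out this way
// the operations -- record a packet, apply a threshold, print a report --
// fall out almost mechanically, which is exactly Pike's point.
using scan_state = std::map<std::string, host_record>;

int main()
{
    scan_state state;
    port_record &p = state["192.168.1.2"].ports[22]; // record one sighting
    p.proto = "tcp";
    p.hits++;
    return 0;
}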

text

7 Patches in 7 Days

For those who use libdnet on their systems: I recently completed another big merge from Nmap's various patches. So far all of the BSD/Linux/Unix patches have been making it into the authoritative dnet tree. We are currently looking into a new release target but I do not yet have details as to when a release is a sure thing.

Site News

There is some interesting news about the site. I ditched the MySpace, Facebook and Twitter feeds. They simply were not being used; the RSS feed accomplishes what the external social networking sites had the potential to. In all honesty, this site simply does not generate enough interest to warrant anything but an RSS file. The other interesting news item is that I am planning on renting a virtual private server (VPS) early next year. The current server does not expire until October 2011, giving me ample time to perform a bleed-over instead of a cutover. I am hoping the new server will give me the opportunity to add anonymous git and/or svn (not just for my own projects but for some I contribute to) plus more flexibility with this site and possibly a personal site.

Fri Sep 24 21:00:00 EDT 2010

Parsing Snort Alerts (in Perl)

The Snort Intrusion Detection System or IDS is great. Snort can detect all sorts of interesting traffic. I had to write a script to parse the Snort alert log and mail me only the stuff I was interested in. The other rule was I needed to keep it as simple as possible. I chose Perl simply because for me it was the fastest way to crank out a script I really needed; otherwise I probably would have used something like bash. Note that if you're logging alerts into a MySQL database this is pretty useless - you're better off writing a SQL query, or better yet just pumping the alerts you're interested in over to different tables or maybe even another database altogether. Follow the link for a quick read on a quick and dirty alert log parser.

text

NetRecon 1.79

This is the last release before I redo a lot of netrecon's guts. I have yet to be slapped with a cease-and-stop-calling-it-that order, so for now the name is staying. In this quick release I added verifying passive ports by default. You must specify --no-verify if you do not want the passive program to attempt to connect back to a port discovered by pcap (a sketch of the connect-back idea follows the change list below). I decided on this, for now, because it makes the data more accurate (although still not 100%; there is still a bit of fuzziness to it). I have noted that with verify on it is better to crank the threshold down to 1 or 2. I may have to code in some logic to determine what a good threshold is if verification is active.

I also redid the initial check of an active scan to use some common ports instead of port 1, which is never available on systems with host-based firewalls. This caused an interesting problem: systems with host-based firewalls take a loooong time to get scanned right now. I am looking into a fix for this in the next release, which has quite an extensive TODO list.

So in summary this quick release has the following changes:

  • by default ports are connect checked when detected using passive scans
  • the initial isup code in the active scan has been vastly improved
  • the timeout value gets throttled down when a host is determined to be up
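
For the curious, connect-back verification is conceptually nothing more than a plain TCP connect against the port the pcap engine flagged. A minimal sketch of the idea (illustrative only; the real netrecon code path, error handling and timeouts differ):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

/* Return true if a TCP connect to addr:port succeeds. */
static bool verify_port(const char *addr, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return false;

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;
    sin.sin_port   = htons(port);
    if (inet_pton(AF_INET, addr, &sin.sin_addr) != 1) {
        close(fd);
        return false;
    }

    bool up = (connect(fd, (struct sockaddr *)&sin, sizeof(sin)) == 0);
    close(fd);
    return up;
}

int main()
{
    return verify_port("192.168.1.2", 22) ? 0 : 1;
}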

netrecon-1.79

Sat Sep 4 19:38:33 EDT 2010

NetRecon 1.78

Taking inspiration from the dnet utility, netrecon has undergone a lot of redesign. The dnet utility is a rather cool test program that can be found with libdnet. Yes, a shameless plug on my part. Nevertheless, the way the dnet code plugs each smaller test program in proved to be the best model for changing netrecon. All of the programs in netrecon have been merged into a singular front end. As such, the syntax has changed drastically. However, the speed is the same, and duplicated code, mainly between elements that use libpcap, has been consolidated. There is likely still some deduplication to be done. Lastly, for some odd reason, it seems to execute a lot faster too. I can't really account for that but I am not complaining.

New Usage

For those not familiar with the dnet utility, the syntax of netrecon is:

netrecon <prog> <options> <args>

Here is the full output of the usage message:

Usage: netrecon <command> <args> ...
netrecon scan --ping --conn --dgram --port n-N \
               --time s.ms --extra -V {target}
netrecon scan6 --dgram --port N {ipv6addr}
netrecon passive --if <dev> --threshold <n>\
                 --polls <count> --extra {pcap-expr}
netrecon tcpdump --if <dev> --polls <count> \
                 --decode {pcap-expr}
netrecon arpsniff --if <dev> --polls <count> \
                  --decode {pcap-expr}

Basically all of the same options as the original programs, just expressed in different ways.

Examples

A simple scan by IP address:

$ netrecon scan 192.168.1.2
Host 192.168.1.2
22    ssh                           
111   sunrpc                        
113   auth                          

Scan a network, define a port range and be verbose (timeout messages have been trimmed from the example):

$ netrecon scan --port 10-22 -V 192.168.1.2-10
Timeout: 2.0
Scan start: Sat Sep  4 20:08:19 2010
Host 192.168.1.2
Port range: 10-22
22    ssh                           
...
Host 192.168.1.10
Port range: 10-22
22    ssh                           
Scan start: Sat Sep  4 20:08:19 2010
Scan end  : Sat Sep  4 20:08:30 2010

Scan a single port on a host (good for long hops):

$ netrecon scan --port 80 www.yahoo.com
Host 67.195.160.76
80    www                          

The passive scan function is really just a sniffer that attempts to order hosts and ports it thinks are valid. This is a work in progress and is best used in conjunction with the active scan to verify a service is real. One of the TODO items for the next release is to add the capability for passive to call the active scanner to validate ports. Last but not least, passive accepts standard pcap expressions to help trim down or narrow a target.

$ sudo netrecon passive
Starting capturing engine on eth0...
Closing capturing engine...
192.168.1.2: 22 udp 

The passive scanner has a --threshold option which is used to decide whether a port is running a service: once the threshold is crossed by UDP and/or TCP connections, the port is flagged as valid. Once the callback code to an active scan is complete, the accuracy of the threshold value should improve (or it may not even be needed).

The tcpdump and arpsniff programs operate just like any other pcap utility; a generic sketch of the open/filter/loop pattern follows the examples below. Note that arpsniff may take some time to print anything on a small network:

tcpdump
$ sudo netrecon tcpdump --if eth0 
Starting capturing engine on eth0...
Sat Sep  4 20:33:27 2010 : 192.168.1.2:22 \
> 192.168.1.4:49158 tcp len 132 off 16384 \
ttl 64 cksum 37489 seq 1844550100 ack 2188746861 win 37120

Sat Sep  4 20:33:27 2010 : 192.168.1.2:22 \
> 192.168.1.4:49158 tcp len 244 off 16384 \
ttl 64 cksum 8561 seq 3186727380 ack 2188746861 win 37120
...
arpsniff
$ sudo netrecon arpsniff
Sat Sep  4 20:35:12 2010  recv-packet-len=60bytes \
 hwtype=ethernet proto=ipv4 oper=ARPrequest \
58:B0:35:7C:A3:35 192.168.1.4 \
-> 00:00:00:00:00:00 192.168.1.1
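
Both follow the standard libpcap pattern: open the device, optionally compile and set a filter, then loop with a per-packet callback. A generic sketch of that pattern (not netrecon's actual source; the device and filter are hard-coded for brevity):

#include <pcap.h>
#include <cstdio>

/* Per-packet callback: print a timestamp and the capture length. */
static void handler(u_char *, const struct pcap_pkthdr *h, const u_char *)
{
    printf("%ld.%06ld len=%u\n",
           (long)h->ts.tv_sec, (long)h->ts.tv_usec, h->caplen);
}

int main()
{
    char errbuf[PCAP_ERRBUF_SIZE];

    /* Promiscuous mode, 1 second read timeout. */
    pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    /* Same pcap-expr syntax the netrecon front end passes through. */
    struct bpf_program prog;
    if (pcap_compile(p, &prog, "tcp or udp or arp", 1,
                     PCAP_NETMASK_UNKNOWN) == 0)
        pcap_setfilter(p, &prog);

    pcap_loop(p, 10, handler, NULL); /* ten packets, then stop */
    pcap_close(p);
    return 0;
}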

Compiling

Because some users may want netrecon's old scan-only functionality, it can be compiled with only the scan programs built; in that configuration only scan and scan6 are currently supported. Since the scan code depends only on libc, the scan-only build can be copied onto other systems of the same platform and used. Following are some example make targets; see the Makefile for more:

Scan only on Linux
$ make scan
make scanobjs 
make[1]: Entering directory \
            `/home/jrf/src/netrecon-1.78'
gcc  -O2  -DSCAN netrecon.c scan.c scan6.c \
            utils.c -o netrecon
make[1]: Leaving directory \
            `/home/jrf/src/netrecon-1.78'
Darwin/osX
$ make darwin
make objs DEFINES=-DDARWIN
gcc  -O2 -DDARWIN netrecon.c scan.c scan6.c \
            passive.c tcpdump.c arpsniff.c \
            decode.c utils.c "-lpcap" -o netrecon

TODO and Next Release

The TODO list is pretty extensive; however, getting over the passive scanning and front end integration humps was the hard part. Expect two more releases within the next two months. The goals for those releases (in no particular order) are:

  • add a DNS resolver option
  • Intelligent port guessing for active scans to see if a host is up or down.
  • Callback to active scan (or a shared bit of code) from passive to verify a port is running a service.
  • Fuzzy guess OS by port (or at least the family)

There are many more; these and the rest are detailed in the source distribution.

netrecon-1.78 · Coding

Fri Aug 13 19:31:59 EDT 2010

Site News for 3rd Quarter 2010

The general update frequency of the site has been one to two items a month. Recently this was changed to a full quarter. The full-quarter idea does not seem to work well; too much change. That said, the frequency is likely to change to an average of once a month, meaning a month could pass with no news and then a month with two items, and so forth. Otherwise absolutely nothing here has changed... which is of course a good thing. Quite a few items are on the plate, so read on.

Enlightenment Transform Utility (etu) 0.1.8 Cut

A lot of changes with this release of the one and only graphics program I maintain. There are no remarkable user-facing changes, though, so if your installation still uses the epeg library then there is no need to upgrade. That said, if you are tracking Enlightenment then the current version will not deal with JPEG image formats at all and may be using legacy libraries (if it actually works). Following are the changes made in this version:

  • Migrated source over to git again (my first pass at this last year did not work right).
  • Moved epeg functions to imlib2 (where they now reside in e17).
  • Ran through the valgrind harness.
  • Added file information option.

etu · Coding

Replacing Ping with Nmap for Nagios

From the text:

Sometimes a system administrator needs to get around a few rules that are in place for good (or not) reasons. One example is when networks have ICMP turned off (or even just a portion of it). With ICMP off it can be difficult to configure tools like Nagios for simple up and down checks. In this text getting around the no ICMP problem and a script to handle it for Nagios.

Text

In 2600 2010 Summer Edition

Another article written by yours truly is in 2600 Magazine. The article is a 10,000 ft. overview of how to set up personal darknets. Eventually, material written for 2600 may make its way here; some in fact already has. This is due to 2600's excellent republishing policy, which states that once 2600 prints an article the rights revert to the author.

2600 · 2600 2010 Summer

New Feeds for the Site

It only took five years but finally, for those interested, there are external feeds/pages about the site for those who do not directly suck down the RSS file. They are:

  • Facebook
  • MySpace
  • Twitter

As per the norm, if it turns out the feeds/other sites are more or less useless they will be tossed or, alternatively, simply forgotten.

Taking a crack at Passive Scanning

Probably the most interesting and incomplete project at the moment: Netreconn now has the beginning(s) of a passive scanner. So far the lesson learned has been that while snarfing ports and enumerating them per host is easy enough, there are a lot of challenges in using pure passive taps to scan for hosts... which is not really what is going on. What is really going on is that the wire is being watched and particular data is being correlated. Regardless, here are a few of the challenges thus far (anyone interested may feel free to download the code and have a look):

  • Ports need their own data structure to record protocol name(s) and port number. This is a pure laziness issue and I will get to it.
  • The first pass at OS determination will be via port combinations. I have no idea how that will work.
  • Full fingerprinting has been requested. Not sure how to do that yet or even if I want to.
  • Structures need to be sorted. I am saving this for last because I don't know what the structures are yet.
  • How to determine a real service vs. a client port? My thinking right now is N hits plus different clients accessing a common singular port. Again, I've no idea how I will implement this.

Otherwise it works; that is to say it can be a bit willy-nilly, but the core engine that gets the data is there. Eventually the plan is to merge all three utilities into one, so scanlan, wiretraf and passive would be one shared codebase. The exception is that I intend to leave a make target to build scanlan (via defines) with no dependencies so users can just copy the static binary anywhere they need if they do not have the pcap libraries available.

Sun Jun 13 12:30:46 EDT 2010

Site Cleanup & Updates Done

The aforementioned site work is complete. Not really all that thrilling. Following are some of the chores wrapped up:

  • Compressed the news page.
  • Put several new series in the series lists on the texts index.
  • Updated the about section.
  • Updated the coding section (nmap stuff)
  • Changed the site content license to CC. There is a good reason for that ...

New Book Available

Last year I put together a book with selected texts from the site and some new material. The topic is basically the same as most of the site content regarding programming. After peddling the draft around I finally decided I didn't have the energy to keep packaging it up along with supporting materials anymore. Instead I decided to just give it away under a Creative Commons license. If there is any interest in the book (and that is a big if) I might do another one packaging up all of the site material as a sort of reference/history. The working title is simply System Utility Programming and it can be perused in a variety of formats:

All of it Downloads
Book Formats

Additionally I broke it up into the major sections:

  Part 0 · Part 1 · Part 2 · Part A

The cover can be found here for those who might want to print out the entire book.

texts · sysutil-book

New Version of netreconn Available

I have branched a new version of the netreconn tools. There have been some major changes to it and there is still a lot of work yet to go. Following is a list of major changes:

  • collapsed TODOs into the top of source files
  • collapsed the pcap programs (ndecode, arpsniff and ntraf) into wiretraf
  • moved nstrobe to scanlan
  • Removed ntrace script
  • Removed nlist script
  • Print start and stop time at end of scan
  • Added arp traffic reading

Here are a few of the TODOs. As per the norm some, none, all or totally different things may happen to the utilities:

scanlan TODOs
  • add session trace (only one level, none of this d1-N stuff)
  • add udp support
  • different socket type support (e.g. raw), look at how dnet does this
  • perhost timers with -vv option or *something else*
  • if practical a true pre-ping using ICMP versus a full connect
  • Support for user to change TCP flags in both directions
wiretraf TODOs
  • some explanation of the output fields
  • arp needs to have timestamps
  • arp needs decoding
  • ethtraf (will want src/dst mac + IP)?
  • traffic counters both total and as packets come in (ARP has the latter)

The git repo now has all of the updated netreconn sources as well.

netreconn-1.77 · coding

Nmap & Dnet Work

In addition to all of the other stuff I have been up to lately I managed to find time to wrap up a small Nmap project and complete (at least as far as my infrastructure supports it) a big chunk of Dnet work.

Nmap
After several months of a few hours a week hacking (prefaced with a couple of months of dialogue) we finally moved the payload definitions out of the source code and into their own file. The file is parsed at runtime and loaded into a std::map for payload lookups. What does this mean? If one wishes to use a new payload, all they have to do is add it to nmap-payloads instead of adding it to the code and recompiling. Currently only UDP is supported. (A toy sketch of the file-to-map idea follows below.)
Dnet
In libdnet land I wrapped up all of the changes from the stripped version that Nmap uses. This actually fixed a handful of bugs; most notable was a bug where an interface name could be missed by one of the dnet routines. I am not sure how close we are to a new version but I would like to crank one out this year if possible. We shall see.
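
To illustrate the payload-file mechanism from the Nmap item above: definitions are read from a file at startup and loaded into a std::map keyed by port, so lookups at scan time are trivial. A toy sketch (the one-line-per-payload format here is invented for illustration; the real nmap-payloads grammar is richer):

#include <fstream>
#include <iostream>
#include <map>
#include <sstream>
#include <string>

int main()
{
    // Toy format, one payload per line: <udp-port> <payload-string>
    std::map<int, std::string> payloads;

    std::ifstream in("payloads.txt");
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        int port;
        std::string data;
        if (fields >> port && std::getline(fields, data))
            payloads[port] = data; // later definitions win
    }

    // Lookup at scan time: probe the port with its payload, if any.
    const auto hit = payloads.find(53);
    if (hit != payloads.end())
        std::cout << "port 53 payload:" << hit->second << "\n";
    return 0;
}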

Sun May 16 18:30:00 EDT 2010: Site News

Of course, expect this entry to be deleted soon. Over the next month I might be taking a break from writing to perform some content maintenance. This is what happens when one does not use a database: they have to clean stuff. Specifically, the news needs to be compressed into simple lists (which has to be done manually... great). Also, the texts index page needs some new series lists put together and may itself need to be split (I haven't really decided yet). No fear, I do have some interesting content on the horizon (in the form of notes); I just need to sit down and actually, you know, write it. I think the about section could use a punch in the arm as well, but we shall see. I do not plan on changing the design, just content whereabouts, lists and so forth. Honestly the design took me so long to settle on and is so complex at this point, I am kind of scared of even looking at it.

Fri May 7 20:45:00 EDT 2010: Nagios Meta Check Part 3

In part one of this series the basic trusses needed by the Nagios check_systemhealth script were put together. In part two the actual checks themselves were coded. In this, the third and final part of the series, compulsory checks are added, the main loop is constructed and the final full source listing is produced.

It is worth noting that this is only one of many methods of achieving the same goal. There exist, at Nagios Exchange, plugins and scripts that can do similar things, such as aggregating groups of checks, services and so on. The code presented in this series is just a touch upon a single idea, designed to make the reader think about their monitoring deployment.

Text

Fri Apr 2 11:36:20 EDT 2010: Going (somewhat) Retro on Unix

Yes, I posted this today to avoid April Fool's joke wonderings. Recently, during a short period of severe boredom, I decided to try to change my habits a bit by using - when possible - nothing but command line tools. I did allow for the use of curses-based tools too, so I guess "console or terminal only tools" would be a more appropriate way to state the experiment. Many of the tools I already used, but I wanted to see if I could use exclusively console commands/tools/utils for a week or so. The result was pretty surprising: excepting Firefox (which I found a retro skin theme for) and Audacious (for streaming internet music stations), I still use nothing but console utilities in my X session and am still using the window manager I set up. Note this is not a review of tools or anything like that, just an experiment that had some unexpected benefits. I am thinking about trying the opposite but I fear it won't be nearly as interesting.

Text

Wed Mar 3 17:42:26 EST 2010: Using Nmap to Fix a Problem

Ever had an IPv4 address that is supposed to migrate via a high availability mechanism simply not move - or, even stranger, had several such addresses where some migrate and some do not? An experienced network administrator has probably seen mysterious non-migrating addresses; within this context is presented a rather interesting "solution" for when it is observed.

Text

Mon Feb 8 21:00:46 EST 2010: netreconn-1.76 & mmw-2.0

netreconn

Finally got around to releasing the stable version of the netreconn utilities. It is basically the same as 1.75 without any changes. I think I am going to stop using the odd-numbered/even-numbered method since there do not seem to be enough changes in between to bother.

netreconn

mmw

I was really bored one day and finally did some work on the micro memory watcher or mmw. The mmw utility is basically a nicely formatted version of free. Following are the changes in this version:

  • Updated manual page (finally)
  • Added a subdivision of 1/10 GB (not apparent to the user) so ranges of 1-10 GB still print in MB units (sketched below)
  • Changed usage over to an atomic printf
  • Converted exits and returns to POSIX macros
  • Fixed the missing /proc/meminfo case to exit with failure (before it did nothing)
  • Moved the sleep interval to the end of reading /proc/meminfo to mimic how other similar utils work
  • Combined the poll and sleep check into a one-shot deal
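
As a guess at what that 1/10 GB subdivision means in practice (a hypothetical sketch, not the actual mmw source): values under roughly 10 GB keep printing in MB so mid-range sizes retain their precision.

#include <cstdio>

/* Print a size given in kB, switching to GB units only at 10 GB and up. */
static void print_mem(unsigned long kb)
{
    const unsigned long mb = kb / 1024;
    if (mb >= 10UL * 1024)
        printf("%.1f GB\n", mb / 1024.0);
    else
        printf("%lu MB\n", mb);
}

int main()
{
    print_mem(2048UL * 1024);  /* 2 GB  -> "2048 MB" */
    print_mem(16384UL * 1024); /* 16 GB -> "16.0 GB" */
    return 0;
}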

mmw

2010

  • 02/06/2010: Nagios Meta Check 2
  • 01/30/2010: Cray/SGI nettest 2.4 Update
  • 01/15/2010: netreconn 1.75 Release
  • 01/15/2010: Cray/SGI nettest 2.3 Update
  • 01/10/2010: Portcheck in C 5

2009

2008

2007

2006

2005