The first text highlighted a generalized view of how Open Source can be beneficial to an organization; this time the focus is on development models and three real-world examples of leveraging Open Source resources for a particular organization.
The Open Source model is actually very similar to extreme programming minus the formality. In the simplest terms, the Open Source model is:
On the surface the model does not sound too difficult, and for small projects it is not. The Open Source model becomes difficult - ironically - when the software becomes popular. A piece of software's popularity, multiplied by its size, creates a management issue for the author(s).
Problems with the Model
Essentially the three stages of time management look something like:
The time management problem is exacerbated when too few developers control the source; for example, the Linux kernel (for a very long time) had only one maintainer of the core source tree - burnout was inevitable.
Other than dropping a project on the floor, there are several ways to mitigate time management problems:
The implications of time management are obvious: a real version control system needs to be used, and someone (or some group) has to set aside time to figure out exactly which model to use, then execute putting the mechanisms in place.
Most organizations take three approaches to software system development:
The models of a central core and/or gatekeepers work - in fact, they work best. The Linux kernel experienced a slowing of development when the single-gatekeeper issue arose; it was remedied by putting a set of gatekeepers in place to handle subsystems. The various BSD projects use a two-pronged approach: there are developers who are considered experts for a general area, a set of areas, specific modules or packages, with an accompanying technical core group to make final (and sometimes pretty tough) decisions about code inclusion.
It is worth noting that many organizations end up using the Open Source structure naturally. A team of programmers is assigned to create an application; often they divide into groups, each of which is better at certain aspects of the project and subsequently ends up becoming the subject matter experts; at the same time, the more experienced and/or talented individuals naturally assume leadership roles.
These examples are real; the names involved have been omitted for legal reasons, but each case study happened. Perhaps of even more interest, each case happened to the author. The examples are given in order of magnitude, from least to most significant.
Security through obscurity is often a very bad method - key word: often. At the time hardly any sites were using IPv6; this company's provider was not using IPv6 (at least, not routing it).
The scenario is simple: two systems in a DMZ needed to communicate over a private connection. The problem? There was no physical private connection. Secure Sockets Layer and Secure Shell can both do the job with an encrypted session - but what if that could be taken a step further? What if the two systems communicated on a non-routed network using IPv6?
The systems were one NetBSD and one FreeBSD machine; both supported IPv6 by default. Instead of having to remember the syntax for remote execution over IPv6, a set of wrapper scripts was written to use IPv6 in place of the commands needed to maintain a heartbeat between the servers.
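The original wrapper scripts were not preserved; a minimal sketch of the idea, assuming ssh as the transport and using a hypothetical link-local peer address and interface name, might look like:

```shell
#!/bin/sh
# heartbeat6 -- illustrative sketch only; the peer address, interface
# name and remote command below are hypothetical, not taken from the
# original deployment.
PEER="${PEER:-fe80::2%bge0}"          # link-local IPv6 address, scoped to an interface
SSH_OPTS="-6 -o ConnectTimeout=5"     # -6 forces IPv6; fail fast if the peer is down

hb_cmd() {
    # Build the remote-execution command so operators never have to
    # remember the IPv6 syntax by hand.
    echo "ssh $SSH_OPTS $PEER uptime"
}

hb_cmd
```

A cron job could then eval the printed command (or call ssh directly) on whatever interval the heartbeat required.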
The cost? Solely hours - roughly five hours total - versus:
The organization saved money in both time and hardware expense by paying for a technical resource to implement a solution.
Some cases are just too interesting to pass up. A company wanted to use Oracle on Linux in 2000. The problem? Oracle kept wedging because it used a newer memory model than the running kernel. The short-term solution was to update the 2.4 kernel to at least 2.4.12. Easy enough: the sysadmin downloaded the Red Hat sources, did a quick build (that is, just used the defaults) and voila - a new kernel . . . one minor problem . . . the Broadcom ethernet card did not work. It gets worse: the card was not supported in that kernel, nor in any of the subsequent kernels at the time. Quite a place to be cornered into. One of the reasons this system had been purchased was for the 1 Gig card.
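The "quick build" mentioned above followed the standard 2.4-era sequence; a sketch is below, where the source path and the dry-run guard are illustrative additions (DRY_RUN is on by default, so as written this only prints the steps):

```shell
#!/bin/sh
# Sketch of a 2.4-era default kernel build. SRC and the dry-run guard
# are not from the original system; set DRY_RUN= to actually build.
SRC="${SRC:-/usr/src/linux}"
DRY_RUN="${DRY_RUN:-1}"

run() {
    # Print the step in dry-run mode; otherwise run it in the source tree.
    if [ -n "$DRY_RUN" ]; then echo "$@"; else (cd "$SRC" && "$@"); fi
}

run make oldconfig                 # accept the defaults for any new options
run make dep                       # 2.4 kernels still required the dependency pass
run make bzImage modules           # build the kernel image and the modules
run make modules_install install   # install both
```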
The sysadmin understood that, at the time, a new 1 Gig card would cost a great deal of money - possibly thousands of dollars. The solution was simple enough: port the driver from the previous kernel. The admin copied over the driver files, changed the PCI handshaking and glued the card into the newer kernel's framework . . . it worked.
Time was the cost here too, but in this case it was directly proportional to the cost of a new card. A new card could have cost at least 1000 USD; the admin (who already had driver experience on another platform) did the work in 2 hours. Remember, the porting was done in 2 hours versus the time to:
It is interesting to note that the result was staggering: the system was made available again within two hours of diagnosing the problem, whereas if the driver could not have been ported for lack of experience, the cost would have been far greater (possibly an order of magnitude).
In 1999 Brett Lymn put forth the idea of checking all executables (and scripts) on a system via a kernel-loaded hash table - not unlike Tripwire, but inside the kernel. A fellow developer felt it was a good idea and decided to take Brett's proof-of-concept code and break it out into a separate module within the kernel. The same admin took the idea to their corporate management with a singular pitch . . . credit.
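A userland sketch can illustrate the kind of per-executable fingerprint list such a kernel hash table would be loaded with. The tool names and the "path hash" output format here are assumptions for illustration; NetBSD's actual in-kernel implementation and fingerprint file format differ.

```shell
#!/bin/sh
# Emit one "path hash" line for every executable regular file under a
# directory tree -- an approximation of the data a kernel-loaded hash
# table would consume, not NetBSD's real fingerprint format.
fingerprint_tree() {
    find "$1" -type f -perm -0100 | while read -r f; do
        printf '%s %s\n' "$f" "$(sha1sum "$f" | awk '{print $1}')"
    done
}

fingerprint_tree "${1:-/bin}"
```

The kernel-side check then reduces to: on exec, hash the file and refuse to run it if the result does not match the loaded table.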
Not only did the company the sysadmin worked for go for it - they offered up test hardware that both the admin and the developers could log in to remotely. Eventually the first draft was hammered out and committed to the NetBSD kernel.
In this case, the company paid for power and a system, plus spare developer time the admin could have been using to code for the company; however, the mandate for the admin was no coding unless it was idle time - effectively the company lost no personnel cost.
The company got three things:
Upon closer examination, Open Source differs from closed source only in the sense that vendor support models are modified and may (read: often do) take a little time to adjust to. The Open Source development model, while it looks on the surface like chaos in action, is not. Small projects remain controlled by their respective developer(s), while larger ones adapt using some sort of control mechanism. The cost savings do take time - there is the initial cost of deploying, adapting and hiring/training resources - but the savings exist.
What has been discussed in these texts are only two pragmatic facets of Open Source; there are more that have been missed (and may be addressed later) such as heritage, tradition, science, hobbyists, friendship, community and just plain fun - for the short-sighted pragmatist or Open Source aficionado alike, these texts explain the methods and models but skip over the human factor . . .
. . . you need to figure that one out yourself.