On some project I worked on in the past (I forget which now), we started distinguishing between a task being “done” and being “done done”. It’s a useful distinction even outside software development.
Lately I’ve been feeling that I was done done as a software developer. As people age, there’s a tendency to look back more than forward. For some it happens quite early in life, for others much later. Whether it’s merely very common or completely inevitable is still a question in my mind.
As a developer, as with any engineer, technologist, craftsman or artist, it’s in the nature of techne as a manner of knowing that the primary orientation is towards the future, the possible, as opposed to the actual, which always means the historical. The knowing of episteme, such as that of science, is conversely oriented towards the past. Science studies what is, of course, but what is, beings, are insofar as they have been. The rational is always simultaneously the historical and vice versa. The two arose together with a common basis in viewing the real as the accounted-for, and are thus inseparable, as Hegel was the first to fully notice.
Thus a change in one’s basic orientation can signal the end of a career as a technologist, engineer, or artist – anyone whose primary mode of knowing is techne and not episteme. Of course technologists and artists also look back, but only in order to look forward more effectively. I found an interesting example of this in a technologist trying to understand the notion of the effectivity of history: the most natural way for him to grasp it was to project the effects of something occurring today a couple of thousand years into the future. For a historian or scientist, or an accountant, this might seem absurd. But to a technologist, engineer or artist, it’s the only way of truly knowing it, given the mode of knowing that is primary for them.
If, as a software developer, one starts looking back and finding one preferred the way things were a decade or two decades ago, either one’s orientation has changed, or, and this can be even worse, one might in fact be correct. Either way it makes continuing as a developer much more difficult, and eventually perhaps impossible. I’m naturally suspicious of it being my orientation that changed, rather than reality, so when I found myself thinking in that manner, I didn’t simply accept it as true. Neither did I write it off as “merely” subjective, though. As any good technologist should, I tested it.
This thought process came about initially as I was putting together the pieces for a build server capable of building a demonstration project. Since I work in live object environments, projects are built in memory. With Smalltalk this is generally not a big issue: a Smalltalk environment with as many libraries as one could sensibly use in a given project, plus the full language, compiler and development tools, runs in under 200 MB of memory. On a modern machine (my work laptop has 16 GB), 200 MB is barely noticeable. In current Java with a current version of Eclipse, however, particularly with model-generated code, the story is different. Depending on the size of the model (which in my case was determined by the size of the XML schema the model was in turn generated from), 200 MB may not be sufficient to load the language, never mind the libraries, the development tools, and the project you’re eventually trying to build. In this particular case the model, and thus the code, must be built in memory all at once in order to satisfy all the links; added to the prerequisite language, libraries and tools, it blew out the maximum heap (12 GB) I could reasonably allow Eclipse on a 16 GB machine.
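To make that ceiling concrete: the heap cap referred to above is the JVM’s standard -Xmx option, which Eclipse reads from the -vmargs section of its eclipse.ini. A minimal sketch, assuming an eclipse.ini in the working directory (the 12 GB figure is simply the practical limit on this particular 16 GB machine, not a recommendation):

```shell
# Append a -vmargs section capping the Eclipse JVM heap at 12 GB.
# Once the model, its generated code, and the tooling together need
# more than this, the build dies with OutOfMemoryError.
cat >> eclipse.ini <<'EOF'
-vmargs
-Xms2g
-Xmx12g
EOF

# Confirm the cap is in place.
grep -- '-Xmx' eclipse.ini
```

On a 16 GB laptop, pushing -Xmx much past 12 GB starves the OS and everything else running; the only real fix at that point is a machine with more physical memory, hence the build server.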
Thus the need for a build server. I already had a server with sufficient memory, but due to its being an UltraSPARC T2+ based machine, with a true 64-bit memory space per process, it doesn’t honour the limitations of x64 in terms of the memory space being segmented into 16 GB chunks; the build was fine, but there were numerous potential issues running the built codebase on an Intel-based machine. Since actual speed wasn’t a huge issue (particularly if the build is on another machine, how long it takes doesn’t impact working on the laptop), I went for an old server that uses ECC registered memory specific to motherboards designed for AMD CPUs, since there’s tons of that memory around for little money. I wound up with a dual-processor, 12-core AMD Opteron based machine with sufficient memory slots to go up to 128 GB. Since I don’t quite need 128 GB yet, and the 8 GB DIMMs necessary to reach that with 16 RAM slots are more expensive per GB than 4 GB DIMMs, the machine has 64 GB. Add in a couple of 146 GB 15K SAS drives, and it’s a pretty decent machine for less than $150 total. Yes, it’s loud, hot and takes a lot of power, but I already have four older Xeon-based HP 385 servers in the back office that are loud, hot and take a lot of power, yet are necessary for running an XRv virtual ASR router network; one more machine like that is neither here nor there. And since it’ll be running Oracle Linux, I can run the X-based UI locally in a VM on my laptop from the comfort of my living room, where it’s cooler and quieter.
To get back to the main point: as I was putting it together, I was thinking about a project I worked on about 16 years ago, where the laptops we had initially had 128 MB RAM, although we soon upgraded them to 256 MB. In that memory, using a Java live object environment (the one that was replaced by Eclipse, in point of fact), we wrote a large client-server application where the server portion, which was mostly already written, was in C++ and the client in Java Swing, using CORBA to pass objects back and forth. Since the application also had a web interface, we were also running WebSphere locally to test the web portion, and DB2 UDB 6.1 for local storage (the server stored everything in Oracle, but Oracle was too expensive to run an instance per client). Even with a live object C++ environment, a live object Java environment, the aforementioned WebSphere and DB2 servers, and Lotus Domino, 256 MB RAM was entirely sufficient.
Not that memory usage matters much in most situations. As it gets cheaper, software uses more of it, no big deal.
The big deal, for me, was the fact that 95+% of the mental energy expended went into code design, development, testing and optimization. Building and deploying the application, even with the complexity of sharing objects between two environments and using CORBA IDL to generate the interfaces for that, was as simple as hitting save and choosing a menu item to deploy, which took a matter of seconds.
Although it creeps up on you, since it occurs as one small extra step here and another there, with today’s devops tools at least 50% of that energy would be spent simply on writing and maintaining configuration, build and deploy scripts. Although both GNU and Apache have some great projects under their umbrellas, I have a deep hatred of GNU for what it did to building C/C++ code, and Apache has managed to do nearly the same to Java.
This aspect became phenomenally apparent because, aside from the demonstration PoC project, I’m working on a production project where for various performance and development reasons Java became an impossible option, and the project was moved to Pharo, an open source Smalltalk. Although there is a GNU Smalltalk, the people who love to over-complicate what should be the simple task of building and deploying code didn’t manage to have much effect on Smalltalk, mainly because Smalltalk developers were having none of it. GNU Smalltalk is so little used that nobody even bothers maintaining built versions of the environment. Pharo, like the commercial Smalltalks such as VisualWorks and VisualAge, remains as easy as I remember Java and C/C++ being, back when I didn’t hate both. I don’t actually hate the languages; I hate the configuration, build and deployment tools and processes. The additional effort required due to things like Make, CMake, MakeMake, Configure, AutoMake, AutoConf, and on the Java side Eclipse, Maven, Maven and Maven – where the latter manages to be a terrible package manager, horrendous build tool and mediocre deployment tool all at once – ensures that none of the projects I’ve worked on in the past 8-10 years have even gotten to the point of optimization. If you manage to get it working, somewhat tested, and deployed, you’re beyond “done done”.
As I said, I didn’t take my memory’s word for it; I tested it. I actually went back to the OS and tools we used on that project, run up in VMs on my current laptop. The OS was OS/2; the tools were VisualAge for C++, VisualAge for Java, and, as I mentioned, DB2 6.1, Lotus Domino 5.01, and WebSphere 1.1 (yes, really). In a couple of hours I was able to run up three VMs (two simple servers and one client); write a server C++ process to get data from DB2 (the data was being generated by Netfinity based on network events); pull the data via WebSphere servlets using remote CORBA calls to the C++ process; format the servlet output into a set of Lotus Domino documents with a Notes view to look at historical trends; and view the results either in Notes or on the web via Domino web access.
None of this involved any configuration other than the defaults in the various environments. No makefiles, no XML configs, no build tools other than the in-memory compilers that all the VisualAge products had. The very idea of trying the same with a current C/C++ environment, even one of the better ones such as Sun Studio or XL C/C++, never mind the GNU travesties, and Eclipse (or worse, IntelliJ IDEA or NetBeans, which are not live object environments), even without the additional hassle of Maven, is frightening in terms of the sheer time that would be wasted simply configuring the tools.
One last point. The claim is often made that somehow all this time-consuming config results in more efficient code, or builds, or something; there’s always some sort of half-assed justification, at any rate. In using Pharo for the other project, though, since performance is an absolute necessity in a way that is rarer these days given the increase in hardware power, I un-Gnuified the build for the Pharo bootstrapper and OS library access code, and rebuilt it using a much simpler configuration in Sun Studio for Linux (although it was still far more complicated than the equivalent in VisualAge C++). The result, with the only other changes being a couple of compiler flags, was a 45% increase in single-threaded performance, and a 240% increase in peak throughput at maximum concurrency. Just as significantly, the number of test failures (out of nearly 12,000 unit tests in my Pharo workspace) dropped from just over 100 to 9. The Gnuification in Pharo is a hangover from its origins in Squeak, and I understand why most Smalltalk developers are loath to mess with it. However, it quite obviously isn’t helping. Having un-Gnuified it also made it a breeze to compile for Solaris on both x86 and UltraSPARC, and the peak concurrency on a 128-thread T2+ machine, with both Ethernet ports and the PCI bus on chip for low latency, is phenomenal.
Emboldened by the success of that experiment, I un-scripted the build completely and imported the code into VisualAge C++ on OS/2. After a bit of time getting everything linked (mainly due to needing EMX and a few other OS/2 ports of *nix libs), I managed to get it to compile and run. I spent about 30 minutes with the VisualAge profiler fixing some slow functions, and the result? Deployed into a large VM on the aforementioned build server, using 6 of the 12 cores for OS/2 Server SMP, it was able to reach 75% of the throughput of the 128-thread UltraSPARC. Compared with the initial build of Pharo for EL 7 that I downloaded, running on the base Linux on the same server, that works out to about 5x the throughput using half the machine.
The difficulty is that despite this, and despite the enormous cost of development and the poor results in the industry overall, Smalltalk is simply written off as niche, without anyone asking what that niche is. Instead, poorly performing and phenomenally unstable NodeJS code is used as if it were the most natural thing in the world for enterprise software. OS/2 is only maintained “on the down-low” by IBM, and the last version of VisualAge C++ was replaced by XL C/C++ in 2011, in order to be more “GNU-friendly”.
As a result, I think as a software developer I’m “done done”. If it were just a matter of aging, I might be able to reverse or at least delay the change in my outlook, but when testing it demonstrates that it’s not me but reality that’s the problem, well, there’s no easy fix for reality.