Modeling versus Coding


Once we had enough memory that we no longer needed to commit atrocities in the name of space efficiency, state still bit us on the backside. Huge programs were written where many many functions used many many different bits of state. No part of the state could be changed without changing many parts of the program.

The enormous cost of such programs led to a backlash. Programs as state were bad. State must be bad, too. This led to the development of functional programming, where there is no state, only programs. In spite of their conceptual and mathematical elegance, functional programming languages never caught on for commercial software. The problem is that developers think and model in terms of state.

State is a pretty darn good way to think about the world. Objects represent a middle ground. State is good, but only when properly managed. It is manageable if it is chopped into little pieces, some alike and some different.

Similarly, related programs are chopped into little pieces, some alike, some different. That way, changing part of the representation of the state of a program need lead to only a few, localized changes in the program.

Let’s get this out of the way first: modeling cannot completely replace coding. Modeling deals with higher-level state changes, treating the program as a state machine; it does not deal with the tactical challenges of coding specific, particular manipulations of specific, particular types of data.

That said, the mindsets of those who begin with modeling and write code where necessary, and those who write code and put together some sort of state model where necessary, are intrinsically different. The quote below assumes that by “developer” we understand a Smalltalk developer, and Smalltalk is a language particularly suited to the modeling mindset. Properly speaking, Smalltalk is not “object-oriented” in the sense that term has come to mean, but is a topological entelechy language. If you don’t understand the latter phrase, that should give you an inkling that you don’t understand in what way Smalltalk is unlike other languages, even those, like Ruby, with similar syntax.

This difference, together with the tendency to prefer coding over modeling because it is easier to understand and, from the project-management perspective, easier to monitor, leads to all kinds of inefficiencies that not only add to the cost of software projects but in many cases cause them to fail altogether.

Another thing to be clear on: with modeling we are no more talking about architecture than with coding; although the architectural constraints may appear to affect modeling more directly, in many cases modeling provides a means of meeting architectural constraints in multiple different ways, something that cannot be accomplished by simply coding without multiplying the work by the number of means implemented.

It occurred to me to write this while reading a book on design patterns in Smalltalk, patterns which, despite the ‘design’ in the name, are largely concerned with formalizing well-known tactical solutions to common tactical coding problems. Specifically, the following passage made me think about the mindset of a coder as opposed to that of a modeler.

I wrote the section on Temporary Variables before I wrote this section. I was pleased with how the section on temps came out. I expected this section to turn out to be the same sort of cut-and-dry, “Here’s how it goes” list of patterns. It didn’t.

The problem is that temporary variables really are all about coding. They are a tactical solution to a tactical problem. Thus, they fit very well in the scope of this book. Most uses of instance variables are not tactical.

Along with the distribution of computational responsibility, the choice of how to represent a model is at the core of modeling. The decision to create an instance variable usually comes from a much different mind-set and in a different context than the decision to create a temp.

I leave this section here because there are still important coding reasons to create instance variables, and there are some practical, tactical techniques to be learned when using instance variables.

Modeling is very well defined here: it concerns the distribution of computing responsibility and the choice of how to represent state (and therefore also how to manage state changes). Of course, every program does this in some manner, but from the tactical, coding perspective state is merely a necessary evil, something to be avoided as far as possible, and so it tends to be at best an afterthought. Distribution of responsibility, too, is largely left up to the system; from the coder’s perspective everything could go in one big method and would be more efficient, and as long as the code is properly reentrant, no problems.

I remember a programmer trying to explain REST years ago to the project manager, himself a former programmer. When the project manager finally realized that state would not be maintained by the live system he just about exploded: “Then it’s not a program, a program is by definition a state f%%^^*$ machine!” Pointing out that REST’s use of precisely the same command set as HTTP implies it is a web site rather than a program didn’t, unfortunately, mollify him very much.

REST proponents may point out that a representation of state is transferred; after all, that’s what REST means. However, they’re falling into the same trap as psychologists who think only in terms of patients’ ‘mental re-presentations’ without considering the original presentation that must have occurred, in some manner, for a re-presentation to be possible. REST is also not necessarily, and in fact not usually, resource-oriented, since by and large the representations in the payload are aggregations, which causes all kinds of issues if the client needs to change the state. An analogy might help clarify the problem: a payload of data from an aggregated REST call is a convenient and useful representation of complex data, much as a bank statement for a company with many accounts, lines of credit, company credit cards and so on is a convenient representation of that complex data. But imagine if the recipient of said statement downloaded it in .csv format, imported it into Excel, made various changes to the numbers, and uploaded it back to the bank, expecting the bank to change the appropriate underlying accounts that those numbers aggregated. The bank would be lost as to where to even start, particularly since the state that produced the representation no longer necessarily exists at the server. On the other hand, if the REST calls accessed every individual resource separately, the result would be complex aggregations (and their inversions) written in the browser in JavaScript, as well as possibly thousands of network calls to get the data for one visual data view.

The popularity of REST, despite its various issues, is a consequence of coders’ tactical desire to avoid maintaining state and managing state transitions. It is in this area, as well as in distributing tasks, that modeling is most useful as a complement to coding, but as I said at the beginning, it takes a different mindset. A modeler can code, but they’re unlikely to be your best hard-core coder; likewise, a coder can learn modeling, but they’re unlikely to be all that good at it.

A good way of understanding the difference between the two mindsets is in terms of three basic abilities relevant to software development:

  1. A facility with the symbolic manipulation of linear operators;
  2. An intuitive understanding of the logical structure of new models;
  3. An intuitive understanding of the combinatorial superstructure of new models (understanding how all the models and metamodels in a system interact).

Most coders are competent with only the first of these three basic abilities; in fact, most people are good at only 1 or 2, and those who are good at both 2 and 3 are a rarity. The problem is that systems design, including understanding data state and how to model it and its transitions, is heavily dependent on the latter two, while the first is more of a tactical, detail-level issue of implementation. The issue is not simply that 2, and especially 3, are intrinsically more difficult, but that they require an ability to project systems imaginatively, and software development hasn’t really been all that attractive a field to the more imaginative among us.

So, in concrete terms, why would I want to begin a project with something as labour-intensive (and, for many programmers, intrinsically difficult and outside their comfort zone) as an up-front model, when I could simply start coding and worry about state and responsibility distribution later? Perhaps the following example will give some idea of why I might choose this approach, and also demonstrate where coding remains an absolute necessity.

A product owner approaches me with a problem he needs to solve. In the data stores of the company there are hundreds of thousands of high resolution images. These are kept on various types of media – whatever was current at the time. Since many of the readers for such media are now unavailable and parts are even difficult or impossible to find, he needs to create a data warehouse of all these images on a big NAS system while the readers are still functional. So far so good, no software issues yet, maybe he’s just thinking out loud?

Nothing works out that easily, though. While transferring these images he wants to create an image-specific datastore, and this is where it becomes my problem. These images are all in various TIFF formats, but as I already know, TIFF can mean just about anything, since the tags in the “tagged image file format” actually determine the format of the image data that follows: it could be CMYK or RGB data; it could be interlaced or not; and so on. Luckily there are libraries available in the company that go a bit beyond ImageMagick in terms of being able to extract data from TIFF images, including libraries to create smaller versions of these files, since most of them were created on Crosfield or Highwater scanners and as a result are at 2540 DPI resolution. In terms of the actual formats of the images, some are standard RGB Photoshop TIFFs, but the majority originated on a combination of Scitex machines, which use CMYK interlaced TIFFs, and Quantel Paintbox machines, which use CMYK non-interlaced TIFFs.
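
To give a sense of how much the tags drive everything, here is a minimal sketch of pulling the tag directory out of a TIFF before deciding how to treat the pixel data. It assumes Apache Commons Imaging (1.0-alpha) on the classpath purely as a stand-in for the in-house libraries mentioned above, and the file name is hypothetical.

    import java.io.File;
    import org.apache.commons.imaging.Imaging;
    import org.apache.commons.imaging.common.ImageMetadata;
    import org.apache.commons.imaging.formats.tiff.TiffField;
    import org.apache.commons.imaging.formats.tiff.TiffImageMetadata;

    public class TiffTagDump {
        public static void main(String[] args) throws Exception {
            // Hypothetical sample file; in practice this comes off the archive media.
            File tiff = new File("scan-0001.tif");

            // The metadata is the "tagged" part of the tagged image file format:
            // photometric interpretation, planar configuration, bits per sample, etc.
            ImageMetadata metadata = Imaging.getMetadata(tiff);
            if (metadata instanceof TiffImageMetadata) {
                for (TiffField field : ((TiffImageMetadata) metadata).getAllFields()) {
                    // Only once these tags have been read do we know whether the image
                    // data is CMYK or RGB, interlaced or not, and so on.
                    System.out.println(field.getTagName() + " = " + field.getValueDescription());
                }
            }
        }
    }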

So from what I can gather from the initial conversation I have the following basic requirements:

  • a data store that is browsable by images, implemented as some sort of key / value map data store
  • must be searchable by any data that can be extracted from the original image or entered by human beings as tags
  • must be searchable via some sort of image recognition system whose algorithms I’m not privy to (and may not have been written yet)
  • editable via some sort of editor that can be accessed via a rich GUI app or via a web app
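
To pin those requirements down a little, here is a rough sketch of the service boundary they imply. Every name in it is hypothetical; it commits to nothing about Cassandra, EMF or any particular UI, and is only a way of making “browsable”, “searchable” and “editable” concrete.

    import java.util.List;
    import java.util.Map;

    // Hypothetical service boundary implied by the requirements above.
    public interface ImageArchive {

        // Browsable by image: the store behaves as a key/value map from an image id
        // to its record (thumbnail, viewer-sized image, original location, metadata).
        ImageRecord get(String imageId);

        // Searchable by anything extracted from the TIFF header or entered by hand.
        List<ImageRecord> findByMetadata(Map<String, String> tagQuery);

        // Searchable via an image-recognition system whose algorithms are supplied
        // elsewhere (and may not exist yet), hence the deliberately vague signature.
        List<ImageRecord> findSimilar(byte[] probeImage);

        // Editable: human-entered tags can be added or changed after ingestion.
        void updateTags(String imageId, Map<String, String> tags);
    }

    // Minimal record type for the sketch above.
    class ImageRecord {
        String imageId;
        String originalLocation;   // where the full-resolution original lives on the NAS
        byte[] thumbnail;
        byte[] viewerImage;
        Map<String, String> tags;  // extracted TIFF tags plus human-entered ones
    }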

As you might have noted, much of the difficult coding and algorithm design is already accomplished in libraries I just have to plug in. Other pieces are in progress, but they’re somebody else’s problem. My problem is largely creating and maintaining the state of a huge data set. From what I wrote above, this sounds like a perfect problem for a modeling-based solution, doesn’t it? And it is, of course, since I specified it to be. Too bad real life never pans out so neatly.

First I have to make a few decisions:

  • What data store should I use as a test bed, given I may have to use any in the finished product, or even multiple different ones?
  • What language should I write this in?
  • What environment should I develop in?

Fortunately or unfortunately, most of us have most of these decisions already made for us, since companies have standard data stores they use depending on the required type, and standard languages and environments. In this case, we’ll say the company’s standard key/value data store is Cassandra and its preferred language is Java. Environments can sometimes be a bit more at the discretion of the developer, but in this case we can safely go along with the company’s preferred environment – Eclipse.

As it happens, Eclipse has a powerful modeling environment – EMF, the Eclipse Modeling Framework. EMF (and extensions such as the Graphical Modeling Framework and the Extended Editing Framework) gives me many of the capabilities found natively in a language more suited to modeling than Java, such as Smalltalk, by providing a parallel object hierarchy, rooted in EObject, that takes care of the reflectivity and morphology necessary for adept modeling techniques.

So to begin I’m going to create a simple EMF model (or Ecore model, properly speaking) that contains the basic data I need to capture for each image. Along with the textual data in the TIFF header I’ll need the two low-resolution (thumbnail and viewer-sized) images, a reference to the location of the original, and, since I don’t work in a company with myriads of images by total accident, I also figure I’ll need to derive a vector representation of the image so that the recognition and matching algorithms can do topography-over-topology image matching. Since I have some code I wrote years ago to generate EPS vector versions of TIFF bitmaps, it will be easy enough to convert that code to output SVG rather than EPS. I’m also going to create fields for editable tags that will be added by human beings.
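
For readers who haven’t seen EMF, here is a minimal sketch of what defining such a model programmatically looks like (in practice I would draw it in the Ecore editor rather than write it by hand). The class and attribute names are hypothetical; only the EMF API calls are real.

    import org.eclipse.emf.ecore.EAttribute;
    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EPackage;
    import org.eclipse.emf.ecore.EcoreFactory;
    import org.eclipse.emf.ecore.EcorePackage;

    public class ImageModelBuilder {

        public static EPackage buildImagePackage() {
            EcoreFactory factory = EcoreFactory.eINSTANCE;

            // The package that will eventually be used to generate the Java code.
            EPackage pkg = factory.createEPackage();
            pkg.setName("imagestore");
            pkg.setNsPrefix("img");
            pkg.setNsURI("http://example.com/imagestore"); // hypothetical namespace

            // One class per archived image, with the data described above.
            EClass image = factory.createEClass();
            image.setName("ArchivedImage");

            image.getEStructuralFeatures().add(stringAttr(factory, "originalLocation"));
            image.getEStructuralFeatures().add(stringAttr(factory, "svgOutline"));
            image.getEStructuralFeatures().add(stringAttr(factory, "humanTags"));

            pkg.getEClassifiers().add(image);
            return pkg;
        }

        private static EAttribute stringAttr(EcoreFactory factory, String name) {
            EAttribute attr = factory.createEAttribute();
            attr.setName(name);
            attr.setEType(EcorePackage.Literals.ESTRING);
            return attr;
        }
    }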

Once I have this model created I can try it out by generating code for the model in Java, a simple GUI editing framework for the model, test cases, and a CDO model that will be useful for persistence and maintaining state changes later on. By plugging in the various libraries and writing a bit of glue code, I can actually start bringing in real image data and, for now, storing it in a local CDO repository managed inside Eclipse. Given that I don’t know all the possible TIFF tags in the images, I’m going to make that part of the model dynamic: the model fields are determined reflectively from the data, and the code to store, persist and edit those fields is created dynamically whenever a new type of TIFF is encountered. I can then generate the code (via an EMF library called Texo) to persist the CDO model to Cassandra (or any other JPA-supported data store).
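
The “dynamic” part relies on the fact that EMF can extend, instantiate and populate a model purely reflectively, without any generated code for it. A minimal sketch of the idea, assuming the EClass above and a map of tag names to values pulled from a freshly-encountered TIFF (both hypothetical):

    import java.util.Map;

    import org.eclipse.emf.ecore.EAttribute;
    import org.eclipse.emf.ecore.EClass;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EStructuralFeature;
    import org.eclipse.emf.ecore.EcoreFactory;
    import org.eclipse.emf.ecore.EcorePackage;
    import org.eclipse.emf.ecore.util.EcoreUtil;

    public class DynamicTiffRecord {

        // Grow the EClass to cover any TIFF tags never seen before, then create
        // and fill an instance reflectively - no generated code involved.
        public static EObject recordFor(EClass imageClass, Map<String, String> tiffTags) {
            for (String tagName : tiffTags.keySet()) {
                if (imageClass.getEStructuralFeature(tagName) == null) {
                    EAttribute attr = EcoreFactory.eINSTANCE.createEAttribute();
                    attr.setName(tagName);
                    attr.setEType(EcorePackage.Literals.ESTRING);
                    imageClass.getEStructuralFeatures().add(attr);
                }
            }

            // Dynamic instance of the (possibly just extended) class.
            EObject record = EcoreUtil.create(imageClass);
            for (Map.Entry<String, String> tag : tiffTags.entrySet()) {
                EStructuralFeature feature = imageClass.getEStructuralFeature(tag.getKey());
                record.eSet(feature, tag.getValue());
            }
            return record;
        }
    }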

A fairly complex model (including all the elements required for a reverse-engineered topology of each image) might take a week or so to create and test; at the end of that week I have the most primitive version of software that meets the initial requirements. Since the editor is created in Eclipse RCP (rich client platform), I can use RCP remoting, also known as RAP, to make the editing facilities available via a web server to browser clients. Later, when I optimize the finished program, I can replace RAP with a web client based on something like Codenvy, which doesn’t put as much strain on the server for each concurrent client, although it does have certain limitations as far as the GUI representation goes.
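
For those unfamiliar with RAP, the reason the same editor can be served to a browser is that RAP re-implements the SWT/workbench API over HTTP, so the UI code stays the same and only an entry point needs to be declared. A minimal sketch of such an entry point follows; the widget contents are hypothetical and in the real application would open the generated EMF editor rather than a label.

    import org.eclipse.rap.rwt.application.AbstractEntryPoint;
    import org.eclipse.swt.SWT;
    import org.eclipse.swt.layout.GridLayout;
    import org.eclipse.swt.widgets.Composite;
    import org.eclipse.swt.widgets.Label;

    // Entry point served to browser clients by the RAP server.
    public class ImageArchiveEntryPoint extends AbstractEntryPoint {

        @Override
        protected void createContents(Composite parent) {
            parent.setLayout(new GridLayout(1, false));
            Label heading = new Label(parent, SWT.NONE);
            heading.setText("Image archive browser");
        }
    }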

The real value, though, comes when, given a demonstration of what it can do, the product owner starts to see more clearly what it needs to do. The more detailed (and changed) requirements can be quickly implemented as a model change, and the code for the model, persistence, and editing regenerated and automatically tested. As requirements on the UI move beyond simple text fields, more advanced controls applied via EEF and GMF are also created only once and regenerated whenever the base model changes. I can create arbitrary views of the data via simple EMF transforms, and the map of the object structure is maintained throughout the transforms, allowing transformed views of data to update the originals. Finally, I can generate JPA code to persist each version of the model not only in CDO, but in Cassandra, Mongo, Hadoop or an RDBMS without extra coding. Since the model comprises both data and behavioural state changes, including specifying responsibility for the latter, the two most basic aspects of software design are captured in the model and can be quickly inspected, judged, corrected and optimized. I can even use EMF Compare and Diff/Merge to do full comparisons of my topological/topographical models, potentially saving the poor sods who have to write the matching algorithms a lot of work, and their sanity.
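
The claim that transformed views can update the originals rests on EMF’s built-in change notification: every generated (or dynamic) object broadcasts its state changes, so keeping a derived view or a persistence layer in sync is a matter of attaching an adapter. A minimal sketch using EContentAdapter, with the println obviously standing in for real synchronization logic:

    import org.eclipse.emf.common.notify.Notification;
    import org.eclipse.emf.ecore.EObject;
    import org.eclipse.emf.ecore.EStructuralFeature;
    import org.eclipse.emf.ecore.util.EContentAdapter;

    // Listens to every change anywhere in the containment tree of the model it is
    // attached to; a derived view (or the CDO/JPA layer) would react here.
    public class ModelChangeLogger extends EContentAdapter {

        @Override
        public void notifyChanged(Notification notification) {
            super.notifyChanged(notification); // keeps the adapter attached to new children

            Object feature = notification.getFeature();
            if (notification.getEventType() == Notification.SET
                    && feature instanceof EStructuralFeature) {
                System.out.println("Changed " + ((EStructuralFeature) feature).getName()
                        + " from " + notification.getOldValue()
                        + " to " + notification.getNewValue());
            }
        }

        public static void watch(EObject root) {
            root.eAdapters().add(new ModelChangeLogger());
        }
    }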

In this particular example the rather primitive, initial version of the software comprises about 1.4 million lines of Java code. Perhaps if I hand-coded it that might drop by a quarter, but not much more than that. Any new view of the data requires approximately 300k lines of code, and trying to ensure that a given representation can be successfully modified and persisted is next to impossible if those representations are hand-coded, whereas the change graph that is an inherent part of the generated code does that for me reliably and consistently.

The only difficulty is that much of the system, until it’s finished and working (and the dynamic parts even then), exists only in the imaginative projection I have of it. This not only requires that one have the imaginative ability in the first place, but using it and maintaining the projection in one’s imagination until the model is sufficiently complete to generate a working system can be exhausting. However, given the choice between that and hand-writing millions of lines of (mostly rote) code, often multiple times, I’ll take the former. This also helps answer the question of why Eclipse uses so much memory compared to a text editor like, say, the appropriately named “Sublime Text”. I say appropriately named due to Hegel’s definition of the Sublime as “the night where all cows are black”, in which nothing whatsoever can be distinguished or understood.

Of course, converting bitmaps to vector files is not code that can be generated from a model, at least not at this point, since we don’t even have a theory regarding all the possible correlations between topography and topology, and thus the ability to do it at all is going to be a tactical, detailed implementation full of specific decisions and compromises. Nor can I generate the image matching and searching code (although text searches can certainly be generated). These are areas where hand coding is absolutely necessary, and of the three abilities I mentioned earlier, someone whose strong point is #1 is likely going to produce the best code, all else being equal.
