Why The “Coolest” New Ideas Don’t Always Succeed in Technology


Every year in IT something comes along that is ‘super-cool’, i.e. at least two steps beyond what’s generally considered “bleeding edge”. “Bleeding edge” is called that for good reason, and what makes the “super-cool” accessible only to the deeply initiated is that it has generally been developed by some mad genius who feels no need either to make the code comprehensible or to provide any reasonable documentation. For most of us in IT, for whom the field is a living and even a personal interest, but not the entire focus of our work, social and family life, getting involved with the “super-cool” means not just bleeding but, at minimum, a slashed artery or two.

Of course, some of the most mainstream languages, frameworks and concepts used in IT started out that way: Java, when it was still known as Oak; Haskell, when functional programming still made most developers think “C”; Eclipse, when it was still known as “IBM Workbench”, had not yet been open sourced, and the idea of incrementally compiling code in memory gave most developers the cold sweats; MVC, when it was first rescued from ignominy as part of the Smalltalk ecosystem and used in web frameworks such as Struts (MVC was still giving .NET developers the cold sweats ten years after that); and unit testing frameworks such as the now ubiquitous xUnit family, the “x” being whatever language or framework you happen to want to unit test: JUnit, JSUnit, NUnit and so on (come to think of it, that all started in Smalltalk too, with SUnit).

A large number of such “super-cool” ideas never really make it beyond that stage, however. And of course there are those that make it some way past that stage and form a niche that sticks around but never quite becomes mainstream. The latter include things like Ruby/Rails and its successors, super-cool in the early-mid 2000s, niche in the later 2000s, not really considered all that important today. Haskell itself could be considered a ‘mainstream niche’ if you like, much like LISP or PROLOG or Smalltalk itself before it.

So what differentiates Oak/Java, which is now reputed to have more code written in it than all other programming languages combined, from Ruby/Rails, or even Haskell/LISP/Smalltalk? Is Java really that good?

The answer to the last question is, of course, as any developer familiar with other languages will tell you, a resounding no. I’m not saying it’s terrible, but I wouldn’t really classify it as better than decent at most tasks. For any specific task, there is probably a more efficient language, but the sheer market presence, in terms of available frameworks, libraries, and most importantly skilled developers, makes choosing something else a difficult and usually unpopular decision. But of course that assumes that it somehow differentiated itself sufficiently in the first place to build that market dominance.

Answering the question of why Java has been so successful requires first answering the questions posed by the lack of similar success of other languages and frameworks. Why doesn’t a given “super-cool” new idea gain sufficient traction to either become a stable niche, or to go further and become mainstream?

Immediately, the bleeding edge nature of most new ideas puts off those who don’t spend all of their non-work social and family time discussing the newest tech invention, figuring out how to use it and, as importantly, what to use it for, if anything. That nature includes things like a lack of decent tools and arcane syntax, whether in the language itself, in the APIs of frameworks or libraries, or in the configuration files that so many languages and frameworks require.

Earlier in the history of computing, above average resource requirements, above average cost if the idea was proprietary, unfamiliar syntax, or lack of availability on the most common machines were often the biggest barriers to acceptance. Some combination of these was the reason for the niche status of Smalltalk, HyperCard (had HyperCard been available on PCs, it’s doubtful that HTML would ever have taken off), Objective-C, LISP, and other innovative ideas.

Today, new ideas have the advantage of portable low-level compilers that allow them to be made available pretty much everywhere, a developer base accustomed to a wider variation in language syntax, hardware sufficiently cheap and powerful to make most ideas usable on machines that people actually have, and a recognition by companies that they have to build a market before they can make money off a new idea.

So the old barriers that plagued things like HyperCard or Objective-C are no longer much of an issue. The question, then, becomes primarily what the new idea makes significantly easier, more efficient, or in some other way noticeably better than using a mainstream technology. As a case in point we can look at Ruby/Rails, which garnered sufficient interest to become a significant niche, but a niche that, from all appearances, looks to be slowly losing adherents year by year.

The strength of Ruby/Rails was simple: if you wanted to create a basic, CRUD-based web application without writing a lot of rote code to handle simple things like page navigation, the Rails generator essentially created all of that scaffolding code and configuration for you. That was a big advantage at the time over manually writing Struts configuration files, or those of whatever other Java framework took your fancy, and it made the creation of decently dynamic web applications much cheaper in terms of developer effort. So what prevented it from taking over the mainstream?

  1. Ruby was a new language and, while similar in many respects to Smalltalk (where the Rails concept originated as Seaside), it had some issues. In trying to create a “Smalltalk without Smalltalk”, i.e. without the need for a Smalltalk VM to do anything at all, Ruby is not properly virtualized. Plenty of Ruby gems (prepackaged libraries) require a call to a specific C library that may or may not be present on a given platform, making cross-platform development more difficult than it should be.

  2. The lack of a Ruby “environment” (something similar to a Smalltalk environment) meant that the tools were relatively primitive and difficult to develop further. Most of this could be worked around easily enough as long as the generated application did most of what was needed, but it was precisely when trying to extend a Rails app with more complex business logic that the relative lack of libraries, and the lack of proper virtualization in those that did exist, made Ruby apps suddenly become dramatically more expensive.

  3. The increasingly popular JavaScript UI frameworks like jQuery and Angular were initially more difficult to integrate into Ruby applications due to the prefabricated nature of the apps themselves, leading to further increases in cost if the app required such frameworks for optimum user experience.

Going back to Java, of course Java had some similar issues, and plenty more besides, so why did it take off to such a massive degree?

A huge part of the answer was simply timing. Java came along precisely when the limitations of CGI for creating web applications were becoming apparent, while the increased control and accessibility of web applications was pushing demand for a more capable server platform. Although Java has never had more than limited success as a client programming language, the introduction of web frameworks such as JSP, Servlets and Struts made it the most attractive platform for server side web application development. The addition of the J2EE technologies, many of which were only necessary in very specific niches (EJBs for distributed transactions, for instance), gave Java the capability of going well beyond where 99% of applications needed it to go at the time; but since it supported those niches, while ASP and other competing technologies didn’t, a company that needed niche functionality in one application was likely to choose Java as the platform for all the others as well. Although plenty was done using ASP and later Microsoft technologies, and plenty more using PHP and related technologies, neither supported the broad spectrum of requirements at the largest companies, or did so only much later, by which time Java was too entrenched for .NET or PHP to make major inroads beyond specific use cases that didn’t need significant integration with the already large base of Java server applications in use.

And of course Java has not stood still itself: where other languages and frameworks showed its limitations to be just that, the Java ecosystem has had the resilience to create something as good, or at least workable, so that a significant advantage from a new idea becomes at most a minor one. The tendency for new ideas to be incorporated into the Java ecosystem (for instance Jython, Python compiled to Java bytecode so that it can take advantage of the wealth of Java server features) also tends to keep those ideas in niche areas while simultaneously strengthening Java as the mainstream go-to language.

To shift the discussion a little, I’m going to look at another once “super-cool” new idea, node.js, and what has happened to it on its way towards becoming a mainstream toolkit.

Node.js was at its peak in terms of super-cool status three or four years ago. At that time it had most of the hype and handicaps of the beyond-bleeding-edge ideas that I discussed at the beginning. Some handicaps can be overcome, but node.js has one fundamental handicap that is both intrinsic and difficult to overcome: the language you write it in is JavaScript.

JavaScript has really nothing going for it except the accident that virtually every web browser can interpret it, and only it. As a language it is arcane, difficult to read, difficult to structure and debug, and in its main implementations both limited and not entirely portable. The generic version, ECMAScript, aside from sounding like a skin disease, suffers from the deadliest affliction a generic version of a language can have: you can’t actually do anything with it without the semi-proprietary extensions in the various JavaScript implementations. It’s been said that, as a language, JavaScript incorporates all of the mistakes ever made in programming language design, and then adds a few new ones of its own. To this day, even though it has been around almost as long as Java, it suffers from a lack of decent tools, including editors, debuggers, refactoring tools and just about anything else that might increase productivity over a text editor.

However, since nobody has the marketing power to get all the browser vendors to support a better scripting language within the browser, which is where code that improves the user experience generally needs to run, it is the only option. There are various syntactic sugars available, such as CoffeeScript, that make the arcane syntax and structure somewhat more palatable, but generally at the cost of making framework and library usage more difficult, or of limiting what can be coded in the language. Frameworks like JSF 2.x can interact with JavaScript within a deployed page, which at least allows the developer to write the housekeeping code in a simple, efficient way and leave the more tedious JavaScript for only those areas of the application that actually require it. But none of these is a real solution.

So what does node.js offer? At first look, seemingly not much. JavaScript is a serially interpreted language, so threaded, asynchronous code is not really a possibility, as is the case with many scripting languages. Node.js in effect turns JavaScript into a fully asynchronous language (although at the process level, not the thread level), which is almost a necessity for writing server side software in JavaScript. The question is why anyone would want to do that, given that there are so many more developed and accepted alternatives, most of which are better than JavaScript, with or without node.js, in any case.
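Before answering that question, it may help to see what “fully asynchronous” looks like in practice. The sketch below is a minimal, hypothetical example of the callback style node.js enforces; the file path is purely illustrative.

    // Minimal illustration of node.js's non-blocking, callback-driven style:
    // the file read is handed off to the event loop, the callback runs only
    // when the data is ready, and execution continues past the call at once.
    var fs = require('fs');

    fs.readFile('/etc/hosts', 'utf8', function (err, data) {
      if (err) {
        console.error('read failed:', err.message);
        return;
      }
      console.log('file contents arrived, length:', data.length);
    });

    console.log('this line prints before the file contents arrive');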

The answer has to do with developer skill sets rather than any intrinsic advantage of the language. Web application developers generally get split into “front end” and “back end” developers. The former are often retrofitted graphic designers who have learned just enough coding to write barely acceptable JavaScript that makes their designs interactive. The latter are generally developers who work in more structured languages such as Java or C#, and want as little to do with JavaScript as possible. The real use of node.js, then, is to get the server side developers to learn JavaScript by having them write a thin facade in node.js over Java or other back end services, so that they can take over writing the JavaScript that runs in the browser and apply a focus on coding that most designers don’t have time for.
Node.js, with extensions such as Express.js and Kraken.js, allows you to write a thin, stateless facade to back end services without too much worry. The reason I specify ‘stateless’ is simply this: in a language whose objects and variables are untyped, and therefore unchecked by the interpreter/compiler, there is an inherent danger in using stateful objects that persist data, since the chances of corrupting the entire datastore are greatly magnified. Syntactic sugars like CoffeeScript do nothing to alleviate this problem. In large part, this is simply the facade pattern popular in Java EE applications, but rather than implementing the facade as stateless session EJBs, it is implemented as node.js code that calls web or REST services to actually do the work involved. A layer of checking is thereby lost, but you gain developers with experience in writing JavaScript. Whether that’s a reasonable trade-off has to be decided by the requirements of any specific project.
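As a rough sketch of what such a facade might look like, here is a minimal Express example (Kraken.js layers conventions and security middleware on top of Express, but the shape of the code is the same). The route, the field being validated, and the back end address are all assumptions invented for the illustration, not any particular project’s API.

    // A thin, stateless node.js facade using Express: accept a JSON request,
    // perform simple validation only, and forward it unchanged to a back end
    // REST service that owns the real type checking, business logic and
    // persistence.
    var express = require('express');
    var bodyParser = require('body-parser');
    var http = require('http');

    var app = express();
    app.use(bodyParser.json());

    app.post('/api/orders', function (req, res) {
      // Validation only -- no data manipulation in the JavaScript layer.
      if (!req.body || typeof req.body.customerId !== 'string') {
        return res.status(400).json({ error: 'customerId is required' });
      }

      var payload = JSON.stringify(req.body);
      var backendReq = http.request({
        host: 'localhost',   // assumed location of the back end service
        port: 8080,
        path: '/orders',
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Content-Length': Buffer.byteLength(payload)
        }
      }, function (backendRes) {
        // Stream the back end's response straight through to the caller.
        res.status(backendRes.statusCode);
        backendRes.pipe(res);
      });

      backendReq.on('error', function () {
        res.status(502).json({ error: 'back end service unavailable' });
      });

      backendReq.write(payload);
      backendReq.end();
    });

    app.listen(3000);

The point of the shape is that the JavaScript layer validates and forwards but never reshapes the data; the type checked back end remains the only thing that touches the datastore.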

Since node.js, even with the Express and Kraken extensions, is fairly lightweight, it is reasonably quick as a facade. And because the data in most cases is not manipulated within the JavaScript, the lack of type safety is not a huge concern: the data passes through a type safe layer before being persisted, so invalid data will be caught rather than stored, which prevents wiping out valid records or corrupting the entire datastore. There is, of course, always the question of how to enforce a policy of not manipulating data (other than simple validation) within the JavaScript, but decent code reviews and a clear mandate that is understood by the developers can generally mitigate that risk.

The more interesting question, though, given the constant increase in the amount of dynamism expected in the UI, is how to continue providing that without manipulating data. A potential solution to this problem, one that simultaneously removes the need to write any significant amount of JavaScript, came to my attention recently; after some more experiments with it, I will write a post on how successfully it manages the problem.

 
