The Client-Server Fiasco

The dominant business computing trend of the past 5-10 years has been the switch from centralized computing (a mainframe or minicomputer with many terminals attached) to the client-server model (a server storing data, connected to a network, with PCs or workstations replacing the terminals of old).  This was supposed to save money and result in increased user productivity.  That's according to the companies and misguided industry analysts who pushed the idea.

The reality has been a different story.  Client-server computing has been the biggest mistake the technology industry has made in at least the past 15 years -- possibly its biggest mistake ever.  Instead of making systems easier to manage and making workers more productive, all it's done is create chaos for system administrators, slow users down, and cause data-sharing and bandwidth problems that are completely unnecessary and that were unheard of 10 years ago.

Creep-and-Beep Traffic on the Network

The main problem is, of course, network overhead.  And overhead is something networks have, in spades.  I first became aware of the problem with networks when I wrote some documentation for a network-monitoring tool several years ago.

In order for a data packet to get from one network node to another, it has to be wrapped in several layered protocols.  A complete set of protocols that lets two nodes communicate is called a protocol suite.  A protocol suite generally consists of four to seven protocols that are layered, one on top of another.

Why is this a problem?  Well, it's a problem because each protocol layer adds a whole bunch of header data that does nothing except tell the network where the real data (the data being sent from one node to the other) is supposed to go.  For a payload as small as a single keystroke, the vast majority of the bits being sent over the network are nothing more than overhead.
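
To put rough numbers on it, here's a minimal back-of-the-envelope sketch in Python.  The header sizes are an assumption on my part -- the usual minimums for a TCP/IP-over-Ethernet stack, which is one common protocol suite, not necessarily the one your network runs -- and the exact figures vary with the protocols and options in use.

    # Rough overhead calculation for a one-byte payload wrapped in a
    # typical TCP/IP-over-Ethernet protocol stack.  Header sizes below are
    # common minimums; options and other suites will give different numbers.
    layers = {
        "Ethernet header + trailer": 18,  # 14-byte header plus 4-byte checksum
        "IP header": 20,
        "TCP header": 20,
    }

    payload_bytes = 1                     # a single keystroke
    overhead_bytes = sum(layers.values())
    total_bytes = payload_bytes + overhead_bytes

    print("Payload: ", payload_bytes, "byte")
    print("Overhead:", overhead_bytes, "bytes")
    print("Useful data is %.1f%% of the bits on the wire"
          % (100.0 * payload_bytes / total_bytes))

With those assumed header sizes, less than two percent of what goes over the wire is the character you actually typed.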

To make this abundantly clear, suppose you're on a network node (say, a Unix workstation), and you want to log into another workstation on the network using a terminal-emulation window.  You start the window, enter a telnet command, and log into the remote system.  A system prompt appears.

Now, every time you type a character -- say, a letter "a" -- that letter "a" has to get wrapped in several layers of lower-level stuff so that it'll get to the node you're sending it to.  Then, that node echoes the "a" back to you -- again, wrapped in several layers of stuff -- and it gets displayed in your terminal-emulation window.  In the good old days, with a terminal hooked to a mainframe or minicomputer over a serial line, the terminal would send one byte to the host, and the host would send one byte back to the terminal.  Doing the same thing over a network, you've got dozens of times as many bytes being sent.  This is the basic reason why networks are slow.
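
Here's the same arithmetic applied to the round trip, as a sketch under the same assumptions (minimum TCP/IP and Ethernet headers, one character per packet, and no acknowledgment packets counted -- acknowledgments would only make the network case look worse):

    # Compare a one-character echo over a serial line with the same echo
    # over a network.  Assumes minimum TCP/IP-over-Ethernet header sizes and
    # ignores acknowledgments, which would add still more network traffic.
    per_packet_overhead = 18 + 20 + 20                   # Ethernet + IP + TCP, in bytes

    serial_round_trip = 1 + 1                            # "a" out, "a" echoed back
    network_round_trip = 2 * (1 + per_packet_overhead)   # same two bytes, each in its own packet

    print("Serial line:", serial_round_trip, "bytes")
    print("Network:    ", network_round_trip, "bytes")
    print("Ratio:      ", network_round_trip // serial_round_trip, "to 1")

Two bytes over the serial line versus well over a hundred on the network, for the same keystroke and echo.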

A former co-worker of mine once pointed out that in the good old days, 90% of a computer's CPU cycles were devoted to doing calculations, and only 10% to displaying the answer.  I've argued (Faster Chips, Slower Software, March 17) that this ratio has been reversed in recent years, with the advance of graphical user interfaces.  The same comparison could be made between the old centralized computing model and the client-server model -- although it's actually even worse, because it used to be that 100% of the data sent over a serial line to a terminal consisted of useful stuff, with 0% overhead.  Now, it's a small portion of useful stuff, with a huge portion of overhead.

No Disk = No Brain

In the early days of client-server computing, someone got the idea that you could save money by building workstations that got all of their data over the network, with no local disk.  This was a terrible idea.  I've sat at a diskless workstation, and I can tell you, it's like watching paint dry.  Why take a computer with a fast chip and a high price tag, and hamstring it by making it unable to do any work until it gets some data from the server?

The other problem with the client-server model -- at least in its current incarnations -- is lack of scalability.  With centralized computing, when the system gets slow, you just plug in a faster chip, and everyone's happy.  With client-server, you have to upgrade the clients, upgrade the server, and add more network hardware -- and you may still end up with performance problems unless you know exactly what's slowing the network down.

Why did client-server take off?  I think it was largely an outgrowth of the PC "revolution."  People -- end users -- like having a PC on their desk, even if it's more expensive, in hardware and support, than a terminal.  I'm not going to go into whether a graphical user interface (GUI) is inherently better than an old-fashioned command-based user interface (I'd argue it isn't, but that's a battle that's long since been fought and lost).  But be that as it may, there's no technical reason why centralized computing, with a good GUI, couldn't have provided all the convenience users seem to find in a client-server/GUI environment, with none of the support headaches.  Why didn't it happen?  Because Unix vendors couldn't agree on operating-system and X Window standards, while Ken Olsen of DEC clung stubbornly to the belief that PCs were toys and that if he ignored them, they'd go away.  VAX/VMS was a superior computing environment, but it was too expensive and didn't stay current with what users wanted.

The late '80s and early '90s witnessed the decline of proprietary minicomputers and operating systems (the one I lament the most, as you can tell, was VAX/VMS -- VMS is the only operating system I've ever used that I actually liked), in favor of "open" systems.  There was a tremendous hullabaloo in the trade press about how "open" systems were going to bring the cost of a minicomputer down to bargain-basement levels, while bringing graphical user interfaces to the desktop and allowing users to buy shrink-wrapped software, as they could with PCs, and run it on anyone's minicomputer or workstation.

Unfortunately, there's no such thing as an open system, and there never was.  "Open" was just a euphemism for Unix, and Unix vendors never agreed on a standard -- everyone's flavor of Unix had to have its own "enhancements" that rendered it incompatible with everyone else's.  As a result, shrink-wrapped software never happened, or at least not to the extent the open-systems advocates predicted.

While all this was going on, Bill Gates & Co. got their noses under the tent, offering products that, though technically inferior, included -- let's face it -- a lot of stuff that end users wanted.  What's more, DOS/Windows, although a proprietary operating system, reached a critical mass at which hundreds (or thousands) of companies were writing applications for it.  So the PC platform became, in fact if not in name, the real "open" system.  And that was all she wrote.

What Goes Around, Comes Around

Those of us who liked VAX/VMS systems and other powerful centralized computers feel vindicated by the rise of the "thin-client" model, which its supporters like to tout as the Next Big Thing -- it's simply a return to the idea that the most efficient computing model is a big, scalable computer, with terminals hanging off it.  A PC with a browser is, in essence, a terminal -- only less efficient, because it's still a network node.  All of the computing is done on the big box in the middle, with the PC, "network computer," or what-have-you on the end being used as a glorified dumb terminal.

The reason this is happening (or may happen, at any rate) is simply that the industry has begun to realize its mistake -- that client-server is an inherently inefficient computing model.  It took them a while, but they're beginning to wake up!

Copyright © 1998 John J. Kafalas


