Where Have All the Mainframes and Minicomputers Gone?
Remember minicomputers and mainframes? They are now called "mid-range systems" and "enterprise servers." Their sales are very much alive, albeit growing at a much slower rate than sales of personal computers (PCs).
The new names are just indicative of what's happened over the last five years to these traditional compute platforms. "They have morphed into giant servers," says James Heaton, vice president of AMR Research (Chelsea, MI). "The trend seems to be that information technology (IT) is going to the server model instead of the traditional batch model."
What's A Server?
Lately, "network computing" has replaced "client/server" computing, says Jerry Sheridan, principal analyst for Dataquest, a unit of Gartner Group Inc. (San Jose, CA). With that, the distinction among mainframes, minis, and workstations goes away. "Now computers are either clients or servers," according to Paul Barker, director of Technical Marketing for J.D. Edwards & Co. (Denver, CO). The mainframe itself is now just one more element in the network as opposed to the main part of the network.
Generally, a "server" is any computer that's primarily responsible for managing applications or data. According to a recent AMR Research report:
"Application servers constitute an undefined product category—or at least one that has been defined in enough ways that the expected functionality isn't clear.
"For years, products like Transaction Processing (TP) monitors have been called application servers because they manage transactions across applications. The management functions historically provided by TP monitors include transactional support, multithreading, connection pooling, and load balancing. Many of the newer application servers provide the same functionality, but with consideration to new computing trends such as using the Web and objects by adding data caching and session and state management."
"An application server is where the program is actually running. That may or may not be the same as the data server," points out David Fowler, vice president of Automotive Business Development for Baan Co. (Troy, MI). A data server moves data in and out of the database for whatever application is calling for it.
Still the Gracious Host
The definitive mainframe was the IBM System/3x0 line: the 360, 370, and 390 architectures and their clones. Today, Big Iron is undergoing transformation. For example, two years ago, IBM discontinued water-cooled computer architectures in favor of CMOS technology. CMOS is cheaper to manufacture and maintain than conventional bipolar technology, and it can provide the same or better performance.
Similarly, IBM has mixed in some brand new and complete code suites to make its enterprise servers a critical component in the World Wide Web and e-commerce applications. Explains AMR's Heaton, "E-commerce stuff has gotten to the point where I encourage you not to think of an IBM S/390 as a mainframe if it's running all new code, as opposed to legacy code. It's a mainframe only in the sense that it runs the traditional operating system, now somewhat modified and renamed OS/390. The S/390 is one of the few, proven, high-capacity, high-availability web servers and server devices for client/server environments."
The web is not the only place where reliability and, therefore, computer availability are critical. There are also enterprise resource planning (ERP), financial management, and data warehousing, to name a few mission-critical applications.
This is where mainframes truly shine: Nothing can match them in reliability. For instance, IBM touts 99.999% uptime, which amounts to roughly five minutes of downtime per year. "That's partly the result of extremely mature software," explains Heaton. The latest version of the OS/390 kernel, for example, has not had any significant changes in 15 years. Microsoft Corporation's Windows NT 5, now called Windows 2000, is expected to weigh in at over 30 million lines of code, versus the 16 million lines in Windows NT 4. In the Windows universe, says Heaton, half of the millions of lines of operating system code are brand new, and none of them are more than five or six years old.
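The "five nines" figure can be checked with simple arithmetic: 99.999% availability allows a downtime fraction of 10^-5, which over a 525,600-minute year works out to about 5.26 minutes. A quick sketch:

```python
# Downtime implied by an availability figure such as "five nines" (99.999%).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime per year for a given availability fraction."""
    return (1.0 - availability) * MINUTES_PER_YEAR

print(round(downtime_minutes_per_year(0.99999), 2))  # five nines: about 5.26 minutes
print(round(downtime_minutes_per_year(0.9999), 2))   # four nines: about 52.56 minutes
```

Dropping just one nine, to 99.99%, multiplies the allowed downtime tenfold, which is why the extra nine is such a strong selling point for mainframe vendors.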
The Battle of the Mid-Ranges
The mid-range computing market is where all the action will be for the next few years. Increasingly faster 32-bit processors from Intel, coupled with applications that run on Windows NT, are moving traditional PCs up the ladder in terms of price/performance and suitability to moderately compute-intensive tasks. This alone has forced traditional RISC and Unix vendors to respond, increasing the performance and reducing the prices of their traditional mid-range computers.
Toward the end of this year there will be two more developments. In hardware, Intel is expected to release its 64-bit processor line (code name "Merced"). In software, Microsoft is adding capabilities to its Windows 2000. The resulting "Wintel" platforms will just put more pressure on RISC and Unix vendors to stay ahead of the technology curve.
Yet despite all of these mid-range developments, several reasons exist for vendors to maintain different types of computer platforms: compatibility with existing software, availability, and scalability. Proprietary in-house applications typically stay on the platform they were written for because software rewrites and new hardware are too costly. But for boutique applications, like advanced planning and scheduling (APS), something that may be used by a department, that's where the NT-type environments are taking over. "They are a low-cost entry and they can be deployed to groups of users," says Baan's Fowler.
Second, a traditional data center, running a 7x24 operation, must have 99.999% uptime. Mainframes can do that; several traditional mid-range computers provide almost that level of uptime. Wintel platforms do not—yet.
Last, "scalability is everything," says Antonio Muttoni, ERP marketing manager in the AS/400 Division of IBM Corp. (Somers, NY). Scalability becomes more and more of an issue as companies grow, applications grow, and users are added to the IT environment.
But some traditional platforms' days are numbered. Consider VAX minicomputers, which Compaq Computer Corp. is still selling. In fact, at a recent conference that Dataquest's Sheridan attended, "people spoke reverently of their VAXes." Because of that, Compaq remains committed to VAX customers for the sale of VAX systems and ongoing support, including increasing system memory and implementing Year 2000 compliance in the OpenVMS operating system.
VAX sales, though, are valid only "while supplies last." The last VAX chip was built about two years ago. Supplies of VAX 4108 and 7800 systems, also introduced two years ago, are expected to run out by the end of this year; MicroVAX desktop computers are expected to run out in the year 2000. No wonder Compaq is steering VAX users toward other mid-range platforms, ideally Alphas.
The Auto Industry is a Hybrid
Ironically, the automotive industry is the one industry where the mainframe has not only maintained its existence, it has probably expanded its capabilities with the help of new technologies and by surrounding itself with smaller devices to handle I/O, file transfers, and the like. "The automotive IT world is a hybrid consisting of mainframes and clusters of mid-range computers for compute-intensive, mission-critical applications where the business or the plant requires a very high level of reliability and stability," explains Jack Mileski, director of Automotive Industry Solutions for Compaq Computer Corp. (Littleton, MA).
This flies in the face of Microsoft's marketing engine, which implies that everybody has switched from Unix to Windows NT. The reality is that many automakers do not believe that NT-based boxes can support those automotive applications that do require more compute power and more sophisticated computing than what is available on the "industry-standard computer." For example, on the business side, APS and ERP both deal with very large amounts of matrix data that require tremendous number-crunching power for data analysis.
On the technical side, crash testing and analysis is also a compute-intensive application. Granted, simulating an entire vehicle on an Intel Pentium II-based system is possible. But the simulation will run painfully slowly. Electronic design automation in particular is still very "Unix server-centric," says Bill Gerould, Worldwide Auto Industry manager for Sun Microsystems Computer Corp. (Palo Alto, CA).
Data warehousing is another application well-suited to running on either downsized mainframes or mainframes surrounded by Unix servers and clients, continues Gerould. The architectural strategy is to have the data warehouse running in, say, IBM DB2 on the mainframe, but the data mining (data queries and retrievals) running on Unix servers.
The reality is that many automotive engineering and business applications were internally developed. "The automakers can't get off that legacy code quickly even if they wanted to," says Heaton.
So, traditional compute platforms will be with us, though probably in increasingly Microsoft-compatible forms. In effect, continues Heaton, "computerization will move from a best-of-breed environment to a plug-and-play environment." But in this case, it is enterprise servers, application servers, data servers, web servers, clients and the like that are the components to be plugged in and then played.
Big Blue Iron Saves Mazda Money
Guess what new computer Mazda North American Operations (Irvine, CA) chose 2½ years ago to replace its 3-year-old IBM S/390 Model 600J mainframe? An IBM S/390 Model 9672 R83 Enterprise Server.
This new mainframe—yes, that's what Mazda's systems manager Edward DeLano calls it—is used primarily for Mazda's bread-and-butter applications, except finance. These applications include communications to Mazda's dealers and distribution centers, and support for vehicle and parts distribution, import, warranty processing, and marketing. Most of the applications are "green screen" applications written in COBOL.
Mazda also has a file and data server farm consisting of 40 Windows NT-based servers that run everything in the Microsoft Office 97 suite plus a few homegrown applications. Just about everybody in Mazda—about 1,000 employees and contractors within Mazda and 2,400 dealers—has a desktop PC running at 133 MHz or greater linked to this server farm. As necessary, the PCs emulate a dumb terminal, using 3270-type protocol, to interact with the mainframe applications.
Mazda's mainframe upgrade was simply a hardware change; there were no changes in software. An alternative approach could have been to implement a cluster of smaller computers. But manageability, says DeLano, favored the mainframe approach. "There are a lot of tools and people who know how to manage that environment. A cluster of devices is fine up to a point, but after a certain level of work, you might be better off in a larger box."
The larger, newer box Mazda got features CMOS processors, 141 MIPS, 2 GB RAM, and a 500 GB disk farm with RAID 5 technology. That's 34% more processing power and four times the system memory of the previous mainframe.
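Working backward from those ratios gives a rough picture of the retired 600J; this is a back-of-envelope check derived only from the figures above, not specifications from Mazda or IBM:

```python
# Back-of-envelope: what the stated upgrade ratios imply about the old S/390 600J.
new_mips = 141.0   # new 9672 R83: 141 MIPS
new_ram_gb = 2.0   # new 9672 R83: 2 GB RAM

old_mips = new_mips / 1.34      # "34% more processing power" than the old box
old_ram_gb = new_ram_gb / 4.0   # "four times the system memory" of the old box

print(round(old_mips), old_ram_gb)  # roughly 105 MIPS and 0.5 GB
```

In other words, the old machine would have delivered on the order of 105 MIPS with about 512 MB of memory, which makes the scale of the upgrade easy to see.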
Equally important, Mazda saves $25,000 a month in electrical bills, plus additional savings in maintenance and floor space. "Those savings come out to more than the cost of putting in the new system," says DeLano.