Linux Supercluster Supercomputer Evolocity 4500
Total of 3 enclosures. Each of the 62 blades has 2 CPUs, for a total of 124 CPUs.
Winning offer is only the deposit. Contact seller to make your OFFER before you place your deposit offer; otherwise the total sale price is $250,000 PLUS your deposit. ALL OFFERS CONSIDERED!
Hard drives, delivery and setup are extra, optional, and not included in this sale.
High-performance cluster system, featuring 124 AMD Athlon(TM) processors.
Linux NetworX brings its powerful cluster technology to those demanding
high performance and high availability systems. With the use of cluster
computer technology, a method of linking multiple computers through
high-speed networks to form a single and more powerful system, Linux
NetworX provides solutions for companies with high-computing needs,
including research, industry, government, ISPs, ASPs, and other
technological fields. Through innovative hardware, complete cluster
management software and solid service and support, Linux NetworX
provides end-to-end clustering solutions.
AMD, Linux NetworX Deliver Linux Supercluster to Boeing; Boeing
Implements Linux NetworX Cluster System Featuring 96 AMD Athlon Processors
SUNNYVALE, Calif.--(BUSINESS WIRE)--March 14, 2001--AMD announced today
that The Boeing Company has implemented an AMD Athlon processor-based
supercluster developed by Linux NetworX. The high performance cluster
system, featuring 96 AMD Athlon(TM) processors, is running computational
fluid dynamics applications in support of the Boeing Delta IV Evolved
Expendable Launch Vehicle program at the company's Space &
Communications division in Huntington Beach, Calif. Boeing Delta IV
engineers tested several other processor platforms at Linux NetworX
facilities before purchasing the AMD Athlon processor-based cluster. The
Delta IV is the newest class of rockets developed by Boeing that will
enter service in 2002 and will have the capability of lifting satellite
payloads of up to 29,000 pounds into geosynchronous transfer orbit.
"The Linux NetworX cluster system and the performance of the AMD Athlon
processor provide an excellent solution that satisfies our
requirements," said Daniel Hart, Director of Systems Engineering and
Integration, Delta IV Launch System Program for Boeing.
Learning to Love Linux
Hungry for computing power, life science companies are turning toward Linux clusters as the preferred high performance solution
July 11, 2002 | There's little doubt why life science companies love Linux clusters: The price is right.
For about $100,000 you can cobble together a cluster of Intel-powered
PCs that generates roughly the same computing power as a brand-name
super-server — for a fraction of the price. Moreover, the Linux operating
system has proven to be stable, reliable, and scalable.
Stir into this mix of attractive price and performance the recent
trickle of improved commercial Linux management tools, and the stage is
set for widespread implementation of Linux clusters by the life science industry.
But as even Linux cluster advocates admit, this frugal approach to high
performance computing (HPC) has its drawbacks. For starters, there is no
single, authoritative commercial support network for Linux' open-source
operating system. Instead, companies must develop and maintain Linux
expertise on staff. Linux also lacks several HPC functions, such as
parallel file processing, performance analysis tools, and job
scheduling. Debate continues about the robustness of Linux' security.
Nevertheless, these drawbacks aren't slowing the adoption of Linux
clusters. According to Silico Research Ltd., 85 percent of large
pharmaceutical companies and 65 percent of life science organizations
use clustered and distributed computing platforms, the majority of them
implemented with Linux. Genomics pioneer Incyte Genomics Inc. and drug
discovery firm Tularik Inc. are just two examples of life science
companies that have made big bets on Linux, and each cites dramatic IT cost savings.
Market-watcher International Data Corp. forecasts the cluster market
will grow 35 percent annually to $4.27 billion by 2005 and that Linux
will become the dominant choice in the cluster environment, growing from
$226 million in 2001 to $1.4 billion by 2005.
The Story of 'Beowulf'
Linux cluster computing got its start in 1994, when NASA researchers
created the first Linux cluster and nicknamed it "Beowulf" after the
warrior hero in the epic poem. The Beowulf project was an exercise in IT
scavenging worthy of any resource-constrained lab.
NASA engineers scrounged up 16 Intel 486-generation personal computers
that had been discarded. They connected them with channel-bonded 10Mbps
Ethernet and used Linux as a distributed operating system. This cluster
of previously dumped PCs functioned as a parallel computer engine. Since
that time, the term Beowulf has come to describe the class of clusters
that use similar architecture.
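The pattern a Beowulf cluster exploits can be sketched in a few lines: one numerical job is split across many commodity machines linked by Ethernet (classically with PVM or MPI), each computes its share, and the partial results are gathered back. The sketch below is illustrative only, with thread workers standing in for the 16 networked nodes:

```python
# Illustrative sketch of the Beowulf scatter/compute/gather pattern.
# A real Beowulf uses separate machines communicating over Ethernet
# (PVM or MPI); here thread workers stand in for the nodes.
from concurrent.futures import ThreadPoolExecutor

def partial_dot(chunk):
    """Each 'node' computes a partial sum over its slice of the data."""
    xs, ys = chunk
    return sum(x * y for x, y in zip(xs, ys))

def cluster_dot(xs, ys, nodes=16):
    """Scatter the vectors across nodes, compute partials, then reduce."""
    step = (len(xs) + nodes - 1) // nodes
    chunks = [(xs[i:i + step], ys[i:i + step])
              for i in range(0, len(xs), step)]
    with ThreadPoolExecutor(max_workers=nodes) as pool:
        partials = pool.map(partial_dot, chunks)
    return sum(partials)

print(cluster_dot([float(i) for i in range(1000)], [2.0] * 1000))
# same answer as the single-machine dot product: 999000.0
```

The appeal is that the partitioning logic is simple; the hard part, then as now, is the interconnect bandwidth between nodes.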
The NASA engineers were searching for a cheaper way to solve
computational problems while mapping the eco-regions of the country.
They dreamed of a machine that could achieve 1 gigaflop — 1 billion
floating-point operations per second. At the time, commercial
high-performance computers at that performance level carried a price tag
of $1 million — far too steep for the research group's budget. The
newly created Linux cluster, which delivered 70 million floating-point
operations per second, cost about $40,000, or one-tenth that of a
comparable commercial machine in 1994.
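The economics can be reduced to a common metric, floating-point operations per second per dollar. The back-of-the-envelope calculation below uses only the prices and performance figures quoted above:

```python
# Back-of-the-envelope check of the article's 1994 figures.
# All numbers come from the text above; this is arithmetic, not new data.

beowulf_flops = 70e6         # 70 million FLOPS
beowulf_cost = 40_000        # ~$40,000

commercial_flops = 1e9       # the 1-gigaflop commercial machine
commercial_cost = 1_000_000  # ~$1 million

beowulf_ratio = beowulf_flops / beowulf_cost          # FLOPS per dollar
commercial_ratio = commercial_flops / commercial_cost

print(f"Beowulf:    {beowulf_ratio:,.0f} FLOPS per dollar")
print(f"Commercial: {commercial_ratio:,.0f} FLOPS per dollar")
print(f"Advantage:  {beowulf_ratio / commercial_ratio:.2f}x")
```

Even at its modest 70 MFLOPS, the scavenged cluster delivered more computation per dollar than the million-dollar machine, and the PCs themselves were castoffs.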
Early Linux cluster users quickly discovered that clustering boosted
processing speed, increased transaction speeds, and improved
reliability. But this cost-efficient speed bonanza came with a concern
that continues to shadow the Linux debate. Because Linux is open source
software, with no single entity controlling its growth or patrolling its
security needs, some observers worry that it is less secure. Others say
such security concerns are overblown.
"I don't think that Linux is any more secure or any less secure than
Windows or Unix," says Bill Claybrook, research director of the Aberdeen
Group, an IT market analysis firm. "We hear a lot about Windows' lack
of security, and I think there are problems with it, but Windows is a
big piece of code and there are a lot of people pounding on it. Linux is
smaller and far fewer people are using it. We'll see how good Linux
security is when it is more heavily used."
Advocates further argue that most Linux clusters are isolated deep
within corporate firewalls, not visible on the public Internet, and
therefore less vulnerable to hacking.
Big Bang, Few Bucks
The network-of-nodes approach (each PC is a node) is a great fit for
life science enterprises, in which drug discovery and human genome
research are generating enormous amounts of data and emphasizing the
need for cost-efficient computational approaches.
Clearly, though, it's the bigger bang for the buck that's driving the
popularity of Linux clustering. Aberdeen reports that Linux clustering
generally delivers a 5-to-1 performance-to-price advantage over HPC
solutions from traditional suppliers. Claybrook says savings can be
greater depending on the specific use. "I have heard of situations where
the ratio goes to 40x, but that is very uncommon," he says.
Tularik, a San Francisco-based company focused on developing small
molecule drugs that regulate gene expression, chose Linux because of its low cost and scalability.
Linux Makes the List
To get a feeling for just how effectively Linux is infiltrating the
clustering industry, visit the Top500 Web site — clusters.top500.org —
which tracks data on HPC clusters. Longtime iron box leaders top the list, but Linux clusters are steadily climbing the rankings.
"You can buy a high-end computer such as an SGI or a Sun, but they're
very expensive," says Bruce Ling, director of bioinformatics for
Tularik. "For a fraction of the money you would spend on such a server,
you can buy a lot of CPUs using Linux."
Tularik has several Linux clusters, including a 150-processor Evolocity
cluster from Linux NetworX Inc. The 75-node cluster features 150 1GHz
Pentium III processors (two per node), 300GB of memory, and Intel 10/100
Ethernet connections. It's being used to data-mine genomic information
for drug development. "The Linux cluster is well-proven and can do its
job," says Ling, adding that scalability and cost are also compelling advantages.
Though Tularik's IT department collaborates with bioinformatics
researchers on technical issues, such as predicting potential processing
requirements, two bioinformatics staffers actually manage the cluster.
"That's the downside," Ling says. "You really need to understand the
guts of it, as there's no real consumer support for it. For Linux, you
need a specialized understanding of the OS."
The upside is that Linux is often a familiar environment for
bioinformatics researchers and laboratory scientists in general. Most
are familiar with open-source software from college computer studies and
its widespread use in the cost-conscious academic environment. Ling
studied biochemistry as well as computer science before receiving his
doctorate in molecular biology. His bioinformatics team features three
scientists and five programmers.
Scalability and Performance
Incyte, a Palo Alto, Calif.-based genomics information company, says it
cut computing costs by 95 percent when it moved to Linux clusters three years ago.
Although the savings wouldn't be as dramatic today because of the
continual price drops in proprietary architecture machines, Stu Jackson,
Incyte's director of bioinformatics, says the attractive price — along
with performance benchmarks and scalability aspects — spurred Linux
cluster development at the 11-year-old company.
"A Sun E-10000 costs over a million dollars. A Linux 128-CPU cluster is
going to run you $100,000, and you don't have the maintenance and
license costs you would with the other option," Jackson says. "If you
couple that with some really flexible, effective job distribution
software, you can kick the tar out of the bigger machine for a lot less money."
At Incyte, the Linux clusters process data from human genome research
and feed the results into the company's database products containing
information on gene structure, sequence, and function. This data is used
by pharmaceutical and biotech companies for drug development and research.
Roughly half of the Incyte data center's 4,500 processors — which
include units from Intel, Compaq, Sun Microsystems, and SGI — are used
in Linux clusters that handle the "heavy lifting" computations, says Jackson.
"There are lots of things you want to do in a data center that aren't
suitable for Linux clustering, so you're always going to need some of
those large machines around for apps that simply need that kind of
hardware to run," Jackson says.
Indeed, Linux clusters aren't a cure for all HPC needs, says Aberdeen's
Claybrook. "Applications that require low latency and very high
bandwidth are difficult to do with Linux-based computer clusters," he
says. "But about 80 percent of the HPC applications can be done."
The Management Migraine
Incyte, like Tularik, has found that developing and maintaining Linux cluster management tools is a challenge.
A big drawback to Incyte's proprietary management application was its
inability to easily move applications from one computing resource to
another. Jackson says that made it nearly impossible to collect unused
processing power and reallocate it to handle additional applications or
one-time, ad hoc projects. If a computational job ran behind schedule
and required more processing power, the CPU had to be manually
reconfigured — a time-consuming effort.
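The reallocation problem described above is exactly what cluster job-distribution software such as LSF addresses: instead of binding work to particular CPUs, jobs go into a shared queue and whichever node is idle pulls the next one, so spare capacity is absorbed without manual reconfiguration. A minimal sketch of that pull-based pattern follows (hypothetical code with threads standing in for cluster nodes; this is not how LSF itself is implemented):

```python
# Minimal sketch of pull-based job distribution: jobs enter a shared
# queue, and each idle 'node' (a thread here) pulls the next job until
# the queue is empty, balancing load automatically.
import queue
import threading

def node_worker(node_id, jobs, results):
    """Each 'node' drains jobs from the shared queue until it is empty."""
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return
        results.append((node_id, job, job * job))  # stand-in computation
        jobs.task_done()

def run_cluster(job_inputs, nodes=4):
    """Distribute job_inputs across 'nodes' workers; return all results."""
    jobs = queue.Queue()
    for j in job_inputs:
        jobs.put(j)
    results = []
    threads = [threading.Thread(target=node_worker, args=(n, jobs, results))
               for n in range(nodes)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

if __name__ == "__main__":
    done = run_cluster(range(10))
    print(len(done), "jobs completed")  # all 10 jobs finish, load-balanced
```

The key property is that a slow or late job simply keeps one node busy longer while the others keep draining the queue, which is what the manually reconfigured setup could not do.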
In many early Linux cluster implementations, system administrators often
wrote scripts for handling such menial tasks as adding a user,
configuring an application, or cross-mounting a new network file system
partition. These added administration costs cut into the initial savings
provided by Linux clustering.
Incyte tackled the problem by searching for commercial Linux cluster
management software. After several months of reviewing products, Jackson
chose Platform Computing Inc.'s LSF management platform.
"There are always difficulties with tools," he says. "Just about anyone
who does anything real is going to eventually find one or two things you
can't make internally, so you have to purchase system applications to
support internal customers."
Jackson considered building another internal tool, but concluded it
would be more efficient to find an outside solution. "Any time you buy a
commercial product, you get stuff you don't want," he says. "On the
other hand, you get something that's supportable and has a bigger user
community, so it's got ongoing development."
Jackson says the investment has paid off. The 1,000-CPU LSF cluster
performs job distribution functions more efficiently and has produced a
50 percent increase in computer productivity.
Altogether, Incyte has spent well over $2 million on its Linux
infrastructure. The programming project for its internally developed job
distribution system consumed about two employee-years; the project to
test and obtain LSF took only about 10 months.
New Wave of Tools
Tularik also had the in-house Linux expertise to build its management
tools, but was eager to avoid the effort if possible. The company chose
Linux NetworX' ICE Box, which provides serial switching, remote power
control, and system monitoring capabilities.
ICE Box is helping Tularik focus on finding new genes rather than on
server operation and maintenance. "The appliance provides vital features
... so I never need to go down to the server room," says Gene Cutler, a
Tularik bioinformatics scientist.
The time saved by finding an off-the-shelf tool has shortened the time
to market for products, say officials at both Tularik and Incyte,
eliminating their need to build proprietary tools.
According to Aberdeen, only a few Linux cluster tools were available
three years ago. Today, Linux cluster suppliers are developing both
open-source and proprietary cluster management products. Some commercial
suppliers are building from scratch; others, such as Red Hat Inc., are
using various pieces of open-source software to shorten development time.
Other vendors are taking proprietary Unix cluster technology and
modifying it to run on Linux — among them are SteelEye Technology Inc.,
Hewlett-Packard Co., and Veritas Software Corp. Platform Computing
recently announced its Platform Clusterware for Linux, the first
hardware-independent support solution for cluster management.
There's no denying that today's Linux clusters are rivaling the
throughput capabilities of legacy mainframes and becoming a central
player in HPC evolution.
U. S. Army Research Laboratory Aberdeen Proving Ground, MD.
USE OF HIGH PERFORMANCE COMPUTING TO CONDUCT FINE SCALE NUMERICAL SIMULATIONS OF ATMOSPHERIC FLOW IN COMPLEX TERRAIN
This paper highlights the results from a series of high resolution (1.0
km grid spacing) numerical simulations using the National Taiwan
University (NTU) / Purdue model for a variety of flow situations such as
hydraulic jump, lee waves, juxtaposition of supercritical and
subcritical flows, etc. To complete these computations in a reasonable
amount of time, the NTU/Purdue model simulations were run on a 1024-node
Linux Networx Evolocity II cluster at the Army Research Laboratory
(ARL) Major Shared Resource Center (MSRC). The parallelized NTU/Purdue
model’s scalability characteristics were evaluated for a fixed grid
size, with the number of processors ranging from 4 to 128; the model
scales very well up to at least 128 processors on the ARL MSRC’s Linux cluster.
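Strong scaling of the kind evaluated here, a fixed grid size run on 4 to 128 processors, is commonly summarized with Amdahl's law: the serial fraction of a code caps its achievable speedup. A small generic illustration (these are not measured numbers from the NTU/Purdue model runs):

```python
# Amdahl's law: with serial fraction s, the speedup on p processors is
# 1 / (s + (1 - s) / p). Illustrative only; not measured data from the
# NTU/Purdue model.

def amdahl_speedup(p, serial_fraction):
    """Ideal strong-scaling speedup on p processors."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / p)

# Even a code that is 99% parallelizable falls well short of linear
# speedup as the processor count grows:
for p in (4, 16, 64, 128):
    print(f"{p:4d} processors -> speedup {amdahl_speedup(p, 0.01):6.1f}x")
```

This is why reporting "scales very well up to 128 processors" for a fixed grid is a meaningful result: the model's serial and communication overheads stay small relative to its compute work over that range.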
Numerous observational and modeling studies have revealed a wide variety
of atmospheric flows around, through and above terrain obstacles. Most
such studies, however, have considered fairly simple terrain such as an
isolated summit or an infinite perpendicular barrier exposed to uniform
or relatively simple atmospheric conditions. Real terrain and real
atmospheric conditions are considerably more complex than those used in
the above mentioned studies and the resulting atmospheric flow is even
more complicated and diverse.
The US Military uses this very system... us_military_evolocity_4500.pdf
* No Reasonable Offer will be refused! Make an offer, today!
* Family emergency = we are very motivated to sell!
* Inventory is sold as is and is priced accordingly.
* Non-refundable deposits are required to take items off of the market.
* No refunds or returns.
* You can come to Las Vegas to test anything that you desire.
* Shipping and Handling is not included; however, shipping options are available.
* Deposit offer is due upon winning via PayPal - non-refundable.
* "Cleared" payment in full before you load items.
* Items are located in Las Vegas Nevada, at a commercial warehouse in zip code 89146.
* Packing and shipping is buyer's responsibility and cost.
* Buyer will have 5 days for removal once invoice is paid in full.
* Delivery service is available for an extra fee.
* Equipment Setup service is available for an extra fee.
* Most items have been tested by government representatives to power up to the system BIOS and may not be otherwise complete.
* Items are from various US Government Agencies.
* In accordance with DOE CIO Guidance CS-11 and Federal Regulations, the assets have been wiped or otherwise sterilized.
* Picture accuracy is deemed reliable but not guaranteed.
* Hard Drives and memory have been either removed or wiped.
* Caddies, batteries, and cables/cords may also be missing.
* Mutilation is not required as a condition of sale.
* Cables and Software not included.