Oser Communications Group

Super Computer Show Daily Nov 19 2013


CLUSTERED SYSTEMS ANNOUNCES EXABLADE-BASED "SUPER NODE" RUNNING INFISCALE SOFTWARE

The current generation of air-cooled blades is subject to severe power constraints at the blade and rack levels. More of everything – processors, network cabling, blades, racks and space – is required. ExaBlade systems are freed of these constraints. A single blade can cool more than 2kW, and a rack, 200kW. Conditioned rooms are not required; ExaBlade systems can be installed virtually anywhere. Any combination of mechanically compatible off-the-shelf servers, GPUs or storage media can be accommodated and cooled. Infiscale's Software Defined Scalable Infrastructure (SDSI) manages Super Node systems spanning from a single chassis to hundreds.

The smallest ExaBlade unit is a chassis. Each has 20 slots cooled by cold plates. The front 16 slots house compute or storage blades, while the four orthogonal rear blades network them together. Six ExaBlade chassis can be mounted in an 800mm-wide 48U rack, which distributes power and cooling to the chassis. Immediately available are Intel S2600JF system boards and PCI Express switches that provide blade-to-blade communication and Gigabit Ethernet management links. GPU and storage blades are planned.

Infiscale's SDSI knits the whole thing together. Four modules make up the software stack: Super Node Manager (SNM); PERCEUS, an OS and provisioning tool; Abstractual, intelligent system management; and GravityFS, a distributed, parallel file system. The GravityPark Open Parallel Toolkit (a next-generation compiler) is also available.

"After Clustered and Infiscale cooperated on writing some proposals, we realized that our individual products have great synergy," said Phil Hughes, CEO of Clustered Systems.

"Our goal is to bring petascale computing into the mainstream for enterprise, and Clustered's ExaBlade system is an ideal platform for that," said Arthur Stevens, CEO of Infiscale.

About Clustered Systems Company, Inc.
Clustered Systems is a privately owned company specializing in innovations for system cooling and switching. It is the developer of a revolutionary cold plate cooling system for 1U and blade servers, which was recognized as the most energy-efficient cooling system available in a series of tests performed by Lawrence Berkeley Labs under the aegis of the Silicon Valley Leadership Group and the California Energy Commission. Clustered installed the first ExaBlade-based system at SLAC National Accelerator Laboratory earlier this year. www.clusteredsystems.com

About Infiscale, Inc.
Infiscale has been in operation since 2005, delivering software defined scalable infrastructure technology at the industry forefront while developing next-generation software solutions for high-performance, high-throughput and cloud computing environments. Utilizing open source software of the company's own design, as well as software from others that Infiscale supports, Infiscale's solutions have deployed numerous Top500-listed supercomputers, demanding content delivery networks, web portals, proxy services and fully integrated software defined network/compute/storage scalable infrastructure. Infiscale's latest software stack features A.I. subsystems for node, cluster and data center workload automation and learned-behavior system administration assistance. Learn more at www.infiscale.com.
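The per-blade and per-rack cooling figures quoted above are mutually consistent, which a quick back-of-the-envelope check makes visible. The short C program below is purely illustrative (not vendor software); it assumes the roughly 2kW per-blade figure applies to all 16 compute slots in each of the rack's six chassis.

    /* Back-of-the-envelope check of the ExaBlade density figures,
       using only numbers quoted in the article above. */
    #include <stdio.h>

    int main(void) {
        const int compute_slots_per_chassis = 16;  /* front slots per chassis */
        const int chassis_per_rack = 6;            /* 800mm-wide 48U rack */
        const double kw_per_blade = 2.0;           /* cold-plate capacity per blade */

        int blades_per_rack = compute_slots_per_chassis * chassis_per_rack;
        double rack_kw = blades_per_rack * kw_per_blade;

        printf("compute blades per rack: %d\n", blades_per_rack); /* 96 */
        printf("rack cooling load: %.0f kW\n", rack_kw);          /* 192 kW */
        return 0;
    }

Ninety-six blades at roughly 2kW each comes to 192kW, in line with the 200kW per-rack cooling capacity claimed above.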
During SC13, visit Clustered Systems Company, Inc. at booth 742. For more information, visit www.clusteredsystems.com, call 408-327-8100 or email phil@clusteredsystems.com.

ABERDEEN OFFERS STIRLING W51 WORKSTATION
By Niso Levitas, Research and Development Manager, Aberdeen LLC

Traditionally, workstations do not have large storage arrays. You can add a couple of SSDs in an array, but usually workstations utilize external storage via Fibre Channel, SAS, eSATA or PCI-E. These days, Thunderbolt is becoming popular, especially with the Mac crowd. Another option has been to work with your good old network-connected storage.

What if you could have 24 x 2.5" drives in your dual-processor workstation, directly attached to a battery-backed, hardware-based controller with no backplane expanders creating bottlenecks? What if you could fill it up with SSDs for speed, 10K or 15K mechanical drives for capacity and performance, or mix both worlds? Meet the Aberdeen Stirling W51 workstation.

With the Stirling W51, you can add two powerful cards for GPU processing and an additional video card for dual displays. There are also a couple more PCI-E expansion slots on top of that, along with a lot of 5.25" external bays. In these bays, you can install your old-fashioned DLT tape drive and a Blu-ray burner, and still have extra drive cage space. The power supply of the workstation is easily able to carry all of those 15K drives, GPUs, dual processors and up to a terabyte of memory.

There is more in this dream of a workstation, including dual NIC ports and front USB 3.0 access as standard equipment. This is a true dual-processor workstation that will serve as a workhorse for many years, with 24 bays of fast storage. I did not make this all up. The Aberdeen Stirling W51 is real, and it can be custom configured on our website: 24 x 2.5" drive bays, dual Xeon processors, up to a terabyte of memory, all covered by our industry-leading five-year warranty, which includes hard drives.

Visit Aberdeen during SC13 at booth 1738. For more information, visit www.aberdeeninc.com, call 800-500-9526 or email salesinfo@aberdeeninc.com.

NUMACONNECT MAKES BIG DATA COMPUTING AFFORDABLE
By Trond Smestad, CEO, Numascale

IBM reports that 90 percent of the data in the world today has been created in the last two years alone. Datasets in the 10 to 20 terabyte range are increasingly common. New and advanced algorithms for memory-intensive applications in oil and gas (e.g. seismic data processing), finance (real-time trading), social media (databases) and science (simulation and data analysis), to name but a few, are hard or impossible to run efficiently on commodity clusters.

This mainstreaming of big data is an important transformational moment in computation, because traditional cluster computing, which is based on distributed memory, struggles when forced to run applications whose memory requirements exceed the capacity of a single node. Traditional clusters cannot adequately handle this crush of data, and more expensive shared-memory approaches are required.

"Any application requiring a large memory footprint can benefit from a shared memory computing environment," says William W. Thigpen, Chief, Engineering Branch, NASA Advanced Supercomputing (NAS) Division. "We first became interested in shared memory to simplify the programming paradigm. So much of what you must do to run on a traditional system is pack up the messages and the data and account for what happens if those messages don't get there successfully and things like that – there is a lot of error processing that occurs."

Numascale's Solution
Numascale has developed a technology, NumaConnect, which turns a collection of standard servers with separate memories and I/O into a unified system that delivers the functionality of high-end enterprise servers and mainframes at a fraction of the cost. NumaConnect links commodity servers together to form a single unified system in which all processors can coherently access and share all memory and I/O. The combined system runs a single instance of a standard operating system such as Linux.

The result is an affordable shared memory computing option for data-intensive applications. NumaConnect-based systems running with entire datasets in memory are "orders of magnitude faster than clusters or systems based on any form of existing mass-storage devices and will enable data analysis and decision support applications to be applied in new and innovative ways," says Einar Rustad, Numascale CTO.

Early adopters are already demonstrating performance gains and cost savings. A good example is Statoil, a global energy company based in Norway. Processing seismic data requires massive amounts of floating point operations and is normally performed on clusters. Broadly speaking, this kind of processing is done by programs developed for a message-passing paradigm (MPI). But not all algorithms are suited to message passing: the amount of code required is huge, and the development process and debugging task are complex. Numascale offered the perfect solution with NumaConnect, which provides shared memory and cache coherency mechanisms.

NumaConnect delivers all the advantages of expensive shared memory computing – streamlined application development, the ability to compute on large datasets, the ability to run more rigorous algorithms, enhanced scalability and more – at a cluster price point.

Visit Numascale during SC13 at booth 2505. For more information, go to www.numascale.com, call 832-470-8200 or email ts@numascale.com.
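To make the contrast Thigpen describes concrete, the short C sketch below shows a global sum over an in-memory dataset written for a shared-memory system of the kind NumaConnect presents, where every processor sees one coherent address space under a single Linux instance. This is illustrative code only, not Numascale software; it assumes an OpenMP-capable C compiler (build with cc -fopenmp).

    /* Illustrative only: a parallel reduction on a shared-memory system.
       On a distributed-memory cluster, the same operation needs MPI_Init,
       explicit data distribution, an MPI_Reduce call and per-message
       error handling: the overhead Thigpen describes above. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        const long n = 10000000L;                /* stand-in for a large dataset */
        double *data = malloc(n * sizeof *data); /* one shared address space */
        if (!data) return 1;
        for (long i = 0; i < n; i++) data[i] = 1.0;

        double sum = 0.0;
        /* One directive replaces message packing, sends, receives and
           failure handling: every thread reads the same memory directly. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 0; i < n; i++)
            sum += data[i];

        printf("sum = %.0f\n", sum);
        free(data);
        return 0;
    }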
