Oser Communications Group

Super Computer Show Daily Nov 20 2013


Aberdeen (Cont'd. from p. 1)

...solution. Time-proven ZFS-based storage ensures data integrity. With main system memory serving as the primary cache, there are two additional, separate cache mechanisms for reads and writes, and you can dedicate a couple of SSD drives in an array to this secondary cache. On data capacity, this power can drive racks full of drives with no problem, and you can enable data deduplication to save space if you would like to. Deploying two of these servers in a high-availability configuration makes the setup more resilient, and you can add second and third sets of storage for onsite and offsite data replication.

To serve all of this capacity in NAS or iSCSI form, Aberdeen AberNAS servers provide more than enough PCI-E slots for extra networking capacity in addition to two onboard 10 Gigabit Ethernet ports. You can start small with the Aberdeen AberNAS ZXP2 series for terabytes of storage, and scale up to petabytes with the Aberdeen Petarack that we have refined over the years. We back the quality with our industry-leading five-year warranty, which includes the hard drives.

Visit Aberdeen during SC13 at booth 1738. For more information, visit www.aberdeeninc.com, call 800-500-9526 or email salesinfo@aberdeeninc.com.

Clustered Systems (Cont'd. from p. 1)

...cooling technology. Third, unlike other cooling system vendors, we give our customers the choice of purchasing either a complete turnkey system or the components to build their own.

SCSD: Who will supply the software for your turnkey systems?

PH: Infiscale provides a complete, fully integrated software stack for deploying commodity hardware as software-defined scalable infrastructure. (Visit www.infiscale.com for more information.) Our highly flexible cluster design with integrated storage, when deployed with Infiscale software, provides interesting synergies for a range of industries.
Our turnkey systems enable both high-density mass-scale deployments and a low barrier to entry for computational research and data analysis.

SCSD: What were the most significant events affecting your company in the past year?

PH: This year we delivered our first ExaBlade system to SLAC National Accelerator Laboratory. This 256-node Xeon-based system is located in unconditioned space just off a loading dock.

SCSD: What did you install at SLAC?

PH: It is a rack with four chassis, each with 16 compute blades. Theoretical peak compute power is 47 teraflops. Two-phase cooling removes 50 kilowatts of heat for a net PUE (power usage effectiveness, including power conversion) of 1.07. Although that seems much in line with, say, free air cooling, when the hidden overhead of server fans (7 percent) and power conversion (10 percent) is included, a claimed PUE of 1.10 becomes 1.31.

SCSD: Compare the position of your products and their technology against the current market.

PH: There are three technical categories: our two-phase "Touch Cooling™," water closets and dunkers. Touch Cooling uses cold plates hard-soldered into the chassis. When a blade is inserted, the cold plate is gently guided into contact with heat risers sitting atop hot components. The circulating refrigerant boils, taking the heat away. The WC people pipe water to individual blocks placed atop the hottest components and cool the rest with blown air. All use quick connects to connect their enclosures to in-rack distribution manifolds. They still require some air and will have leak problems. Dunkers place their server boards in baths of dielectric fluid that they then cool with water coils or by circulating the fluid through heat exchangers. While they can be quite efficient, servicing can be an issue. See our "Survey of Cooling Systems" white paper for more details.

SCSD: How can our readers find out more about your company?
PH: At www.clusteredsystems.com they will find solution, technology and product data plus downloadable white papers.

During SC13, visit Clustered Systems Company, Inc. at booth 742. For more information, visit www.clusteredsystems.com, call 408-327-8100 or email phil@clusteredsystems.com.

Kitware (Cont'd. from p. 1)

Kitware has tailored ParaViewWeb to meet a range of diverse customer needs, from data publishing to financial data analysis. In one such case, a customer wanted to enhance the experience of subscribers viewing its digital publications by providing interactive 3D visualizations to augment the standard images, diagrams and tables in a scientific article. Kitware leveraged its ParaViewWeb platform for web visualization and its Midas platform for storing scientific data collections to develop a custom plugin that enables readers to interact with and visualize the data associated with an article.

For other customers, Kitware has deployed a tailored web front-end that enables research teams at different locations to easily collaborate, simultaneously interact with and annotate data, and quickly publish results. By leveraging the client-server framework of ParaView as a foundation, researchers gain access to robust data analysis and advanced visualization techniques running on a remote computational cluster, through the lightweight usability and mobile accessibility of a web interface.

For details on how Kitware's web visualization products and tested, high-quality software process can be leveraged to meet customer needs, representatives will be at booth 4207 this week giving demonstrations and discussing the technology. Information on Kitware is also available online at www.kitware.com.

Visit Kitware at booth 4207 during SC13. For more information, visit www.kitware.com, call 518-371-3971 or email kitware@kitware.com.
Numascale (Cont'd. from p. 1)

Asked which applications he sees as most in need of NUMA architecture, Einar Rustad, Numascale's CTO, says, "More and more application areas seem to look for large memories, many cores or both. We have been working from the start toward traditional HPC with simulations, especially with oil exploitation. Life sciences also face the large-data-set challenge, and with our technology they see solutions for making their algorithms more parallel with reasonable effort. Big data is on everyone's lips and is also obviously a target for us."

What about fluid and structural dynamics? "CFD has proven to run well, and with structural dynamics I think we can make a difference and cut computation cost dramatically. A success for us is when we can provide a solution for the industry. Short runtimes for simulations contribute enormously to an engineer's efficiency," says Rustad.

Another aspect of interest is software that emulates large memory. "Yes, this technology may have a place," says Rustad. "Its biggest advantage and disadvantage is that it is software. It puts fewer limitations on the type of hardware to be used, but introduces a complex emulation layer that has to be maintained across different software and hardware architectures," he continues. "Numascale's technology plugs in at a very low level and is invisible to the software. We run a standard Linux; the only extensions needed are for booting, and to improve hardware and OS features that do not scale to the size we can provide."

It has been suggested that maintenance and operation must be dramatically simpler when fat nodes with Numascale reduce the number of OS instances by a really large factor. Rustad agrees: "Yes, there you go, this is a comment we often get from cluster owners."

NumaConnect has been an extremely well-received technology. Numascale's largest system to date is at the University of Oslo, with 1728 cores and 4.6 terabytes of memory.
This system runs a single-image OS. Larger systems are in preparation, and the hardware that Numascale offers sets virtually no limits on the number of cores or the memory size, instead leaving the limitations up to the software.

Visit Numascale during SC13 at booth 2505. For more information, go to www.numascale.com, call 832-470-8200 or email ts@numascale.com.

"TEST OF TIME" AWARD RECOGNIZES TRANSFORMATIVE IMPACT ON SUPERCOMPUTING

The annual SC conference has a long tradition of excellence, from ground-breaking new research to industry-shaping product announcements. Because of the quality of the conference and the caliber and depth of attendees, several major professional society awards are presented at the conference each year, including the ACM Gordon Bell Prize, the IEEE-CS Seymour Cray Computer Engineering and Sidney Fernbach Memorial Awards, and the ACM/IEEE-CS Ken Kennedy Award.

Published continuously for the past 25 years, the SC conference technical program has been the launching point for many of the technical innovations that have radically reshaped the supercomputing community. In recognition of this rich legacy of impact, and in celebration of SC's 25th year, the conference has created the "Test of Time" award, which will be presented for the first time at SC13.

The Test of Time award recognizes a paper from a past conference that has deeply influenced the HPC discipline. It is a mark of historical impact and requires clear evidence that the paper has changed HPC trends. The award will be presented annually to a single paper selected from the conference proceedings of 10-25 years ago. The inaugural Test of Time Award will be presented to William Pugh of the University of Maryland for "The Omega Test: a fast and practical integer programming algorithm for dependence analysis," published in the proceedings of Supercomputing '91.
The selection process involved nine exceptionally renowned researchers who nominated 13 papers for the period 1988 to 2002, covering the first 15 years of the SC conference series. The award committee, chaired by Franck Cappello, Argonne National Laboratory, and Leonid Oliker, Lawrence Berkeley National Laboratory, selected the winner after a rigorous presentation and discussion of the merits of each contender.

"The Omega test provided a very elegant solution to an extremely difficult problem in compiler technology," commented Daniel Reed, Vice President for Research and Economic Development and Computational Science and Bioinformatics Chair at the University of Iowa. "The paper had a huge impact at the time, and its results are still shaping today's compilers."

As part of the award, Pugh will give a presentation during the awards session on the paper, its history, the research difficulties that had to be overcome to provide the result and the impact that the paper has had, both in HPC and beyond.
