Oser Communications Group

Super Computer Show Daily Nov 19 2013


Silicon Mechanics (Cont'd. from p. 4)

Department, which is furthering research on climate change, using the high-performance cluster to model high-elevation plant community ecology in the Andes Mountains.

In the Center for Digital Theology, researchers are using the HPC cluster to help process large sets of digital images of pre-modern, hand-written and unpublished manuscripts to support existing research in the field of paleography.

Carbon nanotubes are being investigated for everything from nano-wires to artificial muscle tissues to textiles. They tend to cluster together, and the Chemistry Department is using the HPC cluster to develop a list of possible solvents that can be used to separate them.

Researchers from the John Cook School of Business Department of Operations and Information Technology Management (ITM) are using the cluster to help develop algorithms for large-scale optimization, with implications for transportation, logistics and a variety of other fields.

The Political Science Department and the SLU School of Law are using the cluster in research that is evaluating institutions, behavior and outcomes in American State Supreme Courts.

Finally, the HPC cluster is also being used in the social sciences at SLU. One fascinating social science project, led by the Sociology & Criminal Justice Department, is research on a new kind of methodology for synthesizing information on the spatial aspects of social, economic, environmental and ecological phenomena. The city of Saint Louis itself is the first testing ground for the method, which will be replicated in several other cities.

Information, applications and competition instructions for the 3rd Annual Research Cluster Grant Competition will be available in November 2013 at www.researchclustergrant.com.

During SC13, visit Silicon Mechanics at booth 3126. For more information, visit www.siliconmechanics.com, call 425-424-0000 or email info@siliconmechanics.com.

SGI (Cont'd. from p. 4)

Ames Research Center has deployed multiple generations of ICE with no system downtime, and in the process saved millions of hours in user productivity.

While speed and scale are considered the bread and butter of high performance technical computing, efficiency (i.e., power and cooling) is the Holy Grail. Within the next five years, innovations such as the ICE X cooling system will enable greater speed and ultimately exascale capability. As a result, supercomputers will be able to solve bigger problems, even finding answers to questions we haven't yet contemplated.

Learn more about SGI and ICE X at www.sgi.com/icex. Visit SGI at booth 2709 during SC13. For more information, visit www.sgi.com, call 800-800-7441 or email laura_clark@sgi.com.

Cognimem (Cont'd. from p. 1)

Neumann fetch, decode, execute with separate CPU and memory; and the parallel learning machine ("AI"), where the machine is taught, not coded. For decades, these parallel learning architectures were relegated to being implemented in FPGAs for niche applications, military and special-purpose usages, or in software on traditional hardware.

Running simulations of "embarrassingly parallel" machine learning algorithms can be done, but these parallel algorithms also reduce themselves directly to simple hardware, memory-based learning solutions.
These direct hardware implementations are dramatically lower in power, faster, lower in cost and also scale, going straight to the expert to solve the problem, versus code (expensive and difficult). Conversely, taking the legacy serial architecture down a parallel path is plagued with compiler, communication, synchronization, cache-coherency, and shared and/or dedicated memory complexities that are very difficult and that reduce in efficiency as you parallelize. Not so for the hardware learning machines.

The CogniMem CM1K (with 1,024 parallel processing elements; details are available at the Cognimem website) has such a learning memory architecture, where each processing element has 256 one-byte connections to the input, performing the mathematical function Σᵢ |peᵢ − inputᵢ|, the sum of the absolute differences between the input vector and the learned vector associated with the processing element (pe). This equation is common to multiple non-linear classifiers and is performed in parallel with no execution of internal code. An automatically adjusted thresholding function is included to perform the training/fuzzy matching.

The interface bus allows the memory cells to interact with one another during the execution of learning or recognition. This bus is also the key to the scalability of a system composed of N CM1K chips. Thus, the von Neumann CPU/memory bottleneck is removed, communication is straightforward, and the processing function scales with N chips at a constant latency.
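To make the arithmetic concrete, here is a brief Python sketch of the distance-and-threshold behavior described above. It is an illustration only, not CogniMem's API: the class and function names are hypothetical, and the loop merely mimics in software what the chip's elements do simultaneously at constant latency.

import numpy as np

def sad(learned, input_vec):
    # Sum of absolute differences (L1 distance) between two 256-byte vectors.
    return int(np.abs(learned.astype(np.int32) - input_vec.astype(np.int32)).sum())

class ProcessingElement:
    # Hypothetical stand-in for one of the chip's 1,024 elements.
    def __init__(self, prototype, threshold):
        self.prototype = prototype    # learned 256-byte vector
        self.threshold = threshold    # automatically adjusted influence field

    def distance(self, input_vec):
        return sad(self.prototype, input_vec)

def recognize(elements, input_vec):
    # Each element compares the same input against its stored prototype and
    # "fires" only if the distance falls within its threshold.
    hits = []
    for idx, pe in enumerate(elements):
        d = pe.distance(input_vec)
        if d <= pe.threshold:
            hits.append((d, idx))
    return min(hits, default=None)    # closest firing element, or None

# Example: 1,024 random prototypes and one 256-byte input vector.
rng = np.random.default_rng(0)
elements = [ProcessingElement(rng.integers(0, 256, 256, dtype=np.uint8), 4000)
            for _ in range(1024)]
result = recognize(elements, rng.integers(0, 256, 256, dtype=np.uint8))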
Enter a new beginning. It is time for hardware learning machines to help tackle the new applications that scale without the difficulties outlined above. There is a new solution for exascale and embedded applications alike: to tackle the emerging applications of vision, advanced data mining and "AI."

It is time to rethink how we are solving problems!

During SC13, visit Cognimem at booth 3609. For more information, visit www.cognimem.com, call 916-358-9483 or email info@cognimem.com.

GENEVIEVE BELL TO KEYNOTE SC13

Australian anthropologist and researcher Genevieve Bell is set to bring a new perspective to the international supercomputing community as she delivers the keynote talk at SC13.

"Supercomputing as a discipline is uniquely valuable in our society," observes William Gropp, the Thomas M. Siebel Chair in Computer Science at the University of Illinois, co-creator of MPI, and the general chair of SC13. "From more absorbent diapers to better medicines and technologies for a sustainable future, the benefits of HPC are felt everywhere, every day. As a global leader in the effort to understand how technologies support and transform society, Genevieve's talk will help our community better understand how we can relate to society more effectively, extending the reach of HPC even further than it goes today."

Bell was a researcher at Stanford until she joined Intel Corporation in 1998 as a cultural anthropologist studying how different cultures around the globe used technology. She was named an Intel Fellow in November 2008 for her work in the Digital Home Group, and today directs the Interaction and Experience Research group.

Dr. Bell will officially open the conference with the keynote address on Tuesday morning, during which she will show that we have been dealing with big data for millennia, and that approaching big data problems with the right frame of reference is the key to addressing many of the problems we face today.

Big data is the catch-all term for datasets that are so large, so complex or arriving so fast that our ability to manage and process them using conventional technologies is challenged. In her talk, Bell will explore the lifecycle of data to better understand its needs and potential.

In 2010, Bell was named one of the top 25 women in technology to watch by AlwaysOn and one of the 100 Most Creative People in Business by Fast Company. Bell is a Thinker in Residence for South Australia, and in 2012 she was inducted into the Women in Technology International Hall of Fame. Her book, "Divining a Digital Future: Mess and Mythology in Ubiquitous Computing," written with Paul Dourish, explores the social and cultural aspects of computing.

THE BIG (DATA) BANG THEORY

By Jill King, Vice President, Adaptive Computing

Three powerful market phenomena are colliding in the future of the Information Age: cloud computing, high-performance computing and Big Data. The bits and bytes companies accumulate today require significant investments in all three areas in order to leverage the data for game-changing results.

Unfortunately, some organizations think that "cloud computing," "Big Data" or "HPC" is something you can just buy with deep pockets. In order to derive deep economic value and insight from any of these three technology assets, organizations need data processing models to extract the results necessary to make data-driven decisions, all the while utilizing all available resources within their datacenters, including virtual machines, bare metal, private and hybrid cloud, Big Data and HPC environments.

When we talk about Big Data, however, what are we talking about? Ask a hundred pundits and you'll get a hundred different definitions. The simplest answer might be that Big Data is any data that is too overwhelming to mine for insight with simple methods. If you can think of a straightforward and practical way to extract value from the data, then it's not Big Data. On the other hand, if using the data requires thoughtful weighing of tradeoffs and expenses, discussions with stakeholders, the creation of custom tools, trial and error, or the resetting of expectations, then you've met a Big Data test.

Today's enterprise needs to rely on collected data and simulation results to stay competitive in the marketplace. No longer can CEOs make business decisions based on hunches and what they can physically extract from industry research. Companies are turning to their CIOs to help make data-driven decisions and give their business a competitive advantage.

In order to thrive in today's data-driven business environment, most people within an organization can benefit from the results of Big Data analysis. IT can collect and store the data, but often lacks the ability to extract results from it. When IT professionals try to run and manage these data-intensive workflows, the process is manual and time-consuming, oftentimes with multiple steps and applications with complex dependencies. Humans end up becoming the limitation, causing logjams and delaying results.

As larger and more complex data sets emerge, it becomes increasingly difficult to process them using traditional database management tools or data processing applications. Industry and technology providers need to work together to create solutions that enable organizations to process data better and faster without breaking the budget.

During SC13, visit Adaptive Computing at booth 3113.
For more information, visit www.adaptivecomputing.com, call 801-717-3700 or email solutions@adaptivecomputing.com.
