Oser Communications Group

Super Computer Show Daily Nov 18 2013

Issue link: http://osercommunicationsgroup.uberflip.com/i/247471


ONE STOP SYSTEMS DELIVERS HIGH-DENSITY 8 MILLION IOPS FLASH STORAGE ARRAY

One Stop Systems, Inc. (OSS) unveils its Fusion-Powered Flash Storage Array (FSA) product line for customers demanding extreme storage performance in a small footprint. The FSA is an ideal platform for high-speed data recording and processing, fast data response times, high availability and flexibility. The latest FSA offers enterprise, financial and intelligence, surveillance and reconnaissance (ISR) applications the fastest, most flexible and most powerful turnkey storage solution to date.

Fusion-io ioScale flash coupled with four 128Gbps OSS PCIe 3.0 server links in the FSA provides the extreme performance demanded by today's applications. Uniting these innovations creates a 100TB network-attached flash array that can reach 40GB/s and more than 8 million IOPS.

The FSA fits in most datacenters with its compact size and light weight. At a height of 3U and 24" deep, the 19" rack-mount FSA packs up to 32 Fusion ioScale modules into four individually removable sleds. The sleds and enclosure are made of lightweight, rugged alloys with redundant power and filtered air cooling optimized to the installation environment. The local IPMI module manages the enclosure parameters while also allowing the power user to set features through SNMP or the built-in user interface based on the overall policy of the installation. The small footprint, removable sleds and light weight allow one-person installation in data centers, airborne ISR platforms, mobile shelters and portable transit cases. The 3U x 18" x 3.4" sleds fit easily into the enclosure to protect your investment and your data in highly secure environments.

The FSA supports OSS PCIe direct-attached storage as well as Fibre Channel SAN or InfiniBand NAS storage options via the Fusion-io ION Data Accelerator software. In direct-attached mode, an internal switch matrix allows from one to four servers to have direct access to the Fusion ioScale memory in multiple configurations. The sleds act in concert or separately to fit the changing needs of any storage application while supporting any RAID level available to the servers. In network-attached mode, the ION Data Accelerator software provides a Fibre Channel or InfiniBand path to more servers, virtual machines and concurrent users than the direct-attached mode. Up to 100TB of shared ioMemory becomes available with industry-leading performance, minimal latency and comprehensive visibility.

The FSA achieves end-to-end high availability at every level of the system. At the ioMemory level, Fusion-io Adaptive Flashback software increases flash reliability and endurance by rebuilding data at the individual NAND banks. At the module level, the Fusion ioScale flash memory offers the reliability proven in the world's largest datacenters. At the chassis level, the OSS switch matrix, removable sleds and IPMI module allow for environmental monitoring, physical rerouting of storage traffic and hot-swap of the ioScale memory platform. At the array level, the Fusion-io ION Data Accelerator software provides replication clustering and SNMP real-time performance and physical array monitoring.

During SC13, visit One Stop Systems at booth 1137. For more information, visit www.onestopsystems.com, call 760-745-9883 or email rruple@onestopsystems.com.
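The 8-million-IOPS and 40GB/s figures above are aggregate, array-level numbers. As a rough illustration of how a host might measure random-read performance against one direct-attached flash device, the sketch below uses only standard POSIX O_DIRECT reads; the device path /dev/fioa and the single-threaded loop are assumptions for illustration, not an OSS or Fusion-io tool.

    /* Minimal single-threaded random-read IOPS sketch for a direct-attached
     * flash block device. Hypothetical device path; run with read permission
     * on the device (typically root). */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    int main(void)
    {
        const char *dev = "/dev/fioa";   /* hypothetical ioScale block device */
        const size_t block = 4096;       /* 4 KiB aligned reads, as O_DIRECT requires */
        const long iters = 100000;

        int fd = open(dev, O_RDONLY | O_DIRECT);
        if (fd < 0) { perror("open"); return 1; }

        void *buf;
        if (posix_memalign(&buf, 4096, block)) { close(fd); return 1; }

        off_t span = lseek(fd, 0, SEEK_END);        /* device size in bytes */
        long long blocks = span / (off_t)block;
        if (blocks > RAND_MAX) blocks = RAND_MAX;   /* rand() samples a prefix of the device */
        srand(42);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (long i = 0; i < iters; i++) {
            off_t off = (off_t)(rand() % blocks) * (off_t)block;
            if (pread(fd, buf, block, off) != (ssize_t)block) { perror("pread"); break; }
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%.0f IOPS (single-threaded, 4 KiB random reads)\n", iters / secs);

        free(buf);
        close(fd);
        return 0;
    }

A single synchronous thread will not approach the array's aggregate figure; the published numbers assume many queued requests across multiple sleds and servers.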
CIENA DEMONSTRATES PROTOTYPE AT SC13

At Supercomputing, Ciena will demonstrate a prototype of an open, modular, multi-layer Software Defined Networking (SDN) controller and autonomic intelligence applications for use on carrier-grade wide area networks (WANs). The SDN controller will connect to the industry's first live, fully functional international research testbed that unites all of the key packet, optical and software building blocks required to demonstrate and prove the benefits of software-defined, multi-layer service provider WANs.

The testbed was created in collaboration with Ciena's research and education (R&E) partners CANARIE, Internet2, StarLight and ESnet. It spans more than 2,500 km and connects Ciena labs in Ottawa, Canada and Hanover, Md. with the R&E community via StarLight in Chicago. An important component of Ciena's OPn architecture, SDN supports open, application-driven and analytics-enhanced control of wide area networks, laying the groundwork for more efficient capacity utilization and new advanced research applications.

The testbed leverages OpenFlow across both the packet and transport layers, is supported by an open-architecture, carrier-scale SDN controller with intrinsic multi-layer operation, and incorporates real-time analytics software applications. The SDN controller incorporates a multi-layer path computation element and leverages OpenFlow v1.3 with transport extensions across packet, OTN and photonic layers for end-to-end flow/connection control of the following network elements: a prototype 4 Tb/s packet switch, Ciena's 6500 Packet Optical Platform supporting packet, OTN and photonic switching, and Ciena's 5410 Reconfigurable Switching System supporting OTN switching. It also exposes a northbound RESTful API that supports Ciena-developed autonomic operations intelligence applications, including a multi-layer optimizer and a dynamic pricing engine.

The multi-layer optimizer application will show how operators can combine a global view of the current network state, an analytics-enabled prediction of future network state based on historical data, and a global view of all current service demands to calculate how to reallocate network capacity and regroom existing services to minimize capital expenditures, latency, blocking probability and other metrics.

Based on a historical and current global view of all the network resources and service demands, the analytics-based dynamic pricing engine application will show how operators can use pricing to simultaneously maximize revenue and minimize idle resources. It does this by presenting a higher price when network resource supply is projected to be scarce and/or new demands are expected to be high, and a lower price when the opposite is projected. The customer then selects whichever price and parameter combination provides the most value. Over time, the engine learns the price points that incent the optimal aggregate behavior.

Collectively, these demonstrations will show the value of creating and maintaining a fully open, multi-layer, SDN-powered WAN in today's operator networks, in both the network and the back office.

During SC13, visit Ciena at booth 1924. For more information, go to www.ciena.com, call 800-207-3714 or +44 20 7012 5555 or email pr@ciena.com.
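To make the dynamic pricing rule concrete, the toy sketch below quotes a higher price when projected capacity is scarce and/or projected demand is high, and a lower price otherwise. The function, weights and inputs are hypothetical illustrations of the principle described above, not Ciena's pricing engine or API.

    /* Toy sketch of demand- and scarcity-sensitive pricing. */
    #include <stdio.h>

    /* utilization, demand: projected values in [0, 1] from an analytics layer
     * (hypothetical inputs). Returns a quote between 0.5x and 1.5x base price. */
    static double quote_price(double base_price, double utilization, double demand)
    {
        double scarcity = 0.6 * utilization + 0.4 * demand;   /* blended pressure */
        return base_price * (0.5 + scarcity);
    }

    int main(void)
    {
        printf("off-peak quote:  %.2f\n", quote_price(100.0, 0.20, 0.10));  /* ~66 */
        printf("congested quote: %.2f\n", quote_price(100.0, 0.90, 0.85));  /* ~138 */
        return 0;
    }

In the demonstration, the customer then selects whichever quoted price and parameter combination offers the most value, and over time the engine tunes such weights toward the price points that steer aggregate behavior.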
NUMASCALE PROVIDES PLUG-AND-PLAY SMP, SHARED MEMORY AT A CLUSTER PRICE
By Trond Smestad, CEO, Numascale

Innovative developers can now access the power of shared memory systems at the price point and ease of use of a cluster by utilizing Numascale's NumaConnect, a simple add-on card for commodity servers. The hardware is now deployed in systems with more than 1,700 cores, and the memory addressing capability is virtually unlimited.

The big differentiator of NumaConnect compared to other high-speed interconnect technologies is its shared memory and cache coherency. These features allow programs to access any memory location and any memory-mapped I/O device in a multiprocessor system with a high degree of efficiency. They provide scalable systems with a unified programming model that stays the same from the small multi-core machines used in laptops and desktops to the largest imaginable single-system-image machines that may contain thousands of processors. The architecture is commonly classified as ccNUMA (or NUMA), but the interconnect can alternatively be used as a low-latency clustering interconnect.

Numascale systems are deployed by simply installing a card with a PCI form factor into a standard server. This approach makes it possible to take advantage of the price break offered by mass-produced servers with volume applications outside the segment covered by NumaConnect. Servers from IBM, Supermicro and Dell provide excellent building blocks for large memory systems in combination with NumaConnect cards.

The design is implemented in a chip, the NumaChip, with an external cache in DRAM, the NumaCache. The NumaChip can address up to 4,095 nodes in a single-image system, and each node can have multiple processor cores. AMD processors can address 256 terabytes of data, and this limits the total memory space of the systems.

A directory-based cache coherence protocol handles scaling to significant numbers of nodes sharing data, avoiding the coherency traffic that would otherwise overload the interconnect between nodes and seriously reduce real data throughput. A basic ring topology with distributed switching allows a number of different interconnect configurations that are more scalable than those provided by most other interconnect switch fabrics. Ring topology also eliminates the need for a centralized switch and includes inherent redundancy for multidimensional topologies. The topologies used are two- and three-dimensional torus topologies, which have the advantage of built-in redundancy, as opposed to systems based on centralized switches, where the switch represents a single point of failure.

Distributed switching reduces the cost of the system because there is no extra switch hardware to pay for. It also reduces the amount of rack space required to hold the system, as well as the power consumption and heat dissipation of the switch hardware and the associated energy loss in the power supply.

Shared memory and a single OS simplify parallelization tasks. Running a single-image standard OS is an advantage for reliability, operations and system management. The hardware integrates seamlessly with the processor cache system and takes advantage of standard optimization techniques. NumaConnect provides an affordable solution by delivering all the advantages of expensive shared memory computing at a cluster price point.

Visit Numascale during SC13 at booth 2505. For more information, go to www.numascale.com, call 832-470-8200 or email ts@numascale.com.
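As a minimal illustration of the unified programming model described above, the sketch below uses plain OpenMP threads to initialize and sum a single shared array; on a single-system-image machine every thread simply addresses the one array, whether the machine is a laptop or a large ccNUMA system. This is a generic shared-memory example under that assumption, not Numascale-specific code.

    /* Shared-memory parallel sum with OpenMP: the same code runs unchanged on
     * a laptop or on a large single-system-image ccNUMA machine. */
    #include <omp.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const size_t n = 1 << 24;                /* ~16M doubles in one shared array */
        double *a = malloc(n * sizeof *a);
        if (!a) return 1;

        /* First-touch initialization in parallel so pages land near the threads
         * that will use them, the standard NUMA-friendly optimization. */
        #pragma omp parallel for schedule(static)
        for (size_t i = 0; i < n; i++)
            a[i] = 1.0;

        double sum = 0.0;
        #pragma omp parallel for reduction(+:sum) schedule(static)
        for (size_t i = 0; i < n; i++)
            sum += a[i];

        printf("threads=%d sum=%.0f\n", omp_get_max_threads(), sum);
        free(a);
        return 0;
    }

Compiled with, for example, gcc -O2 -fopenmp, the program scales by setting OMP_NUM_THREADS, with no message-passing rewrite needed as the core count of the single image grows.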
