December 8, 2015 -
You may have seen recent news items regarding the Human Brain Project (HBP), a ten-year European neuroscience research initiative. Interactive computer simulation of brain models is central to its success. Cray was recently awarded a contract for the third and final phase of an R&D program (known in the European Union as a Pre-Commercial Procurement or PCP) to deliver a pilot system on which interactive simulation and analysis techniques will be developed and tested. The Cray work is being undertaken by the newly launched Cray EMEA Research Lab. This article discusses the ideas being developed and tested, ideas that we expect to be useful to many Cray users.
Step one is to manage the computer system in the same way other large pieces of experimental equipment are managed. Users book time slots on the experiment rather than submitting work to a queue. This can be achieved using a scheduler with an advance reservation system. An alternative that we have been working on for some time is to suspend the running jobs, either to memory or to fast swap space on a Cray® DataWarp™ filesystem. A number of Cray sites already use these techniques to run repetitive production cycles at the same time each day. By pushing these ideas a little further we can open up the possibility of using Cray supercomputer systems interactively.
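At its simplest, suspending a running job to free nodes for an interactive slot is a matter of stopping its processes and resuming them when the slot closes. The toy controller below illustrates the idea with POSIX signals; a real implementation would live in the scheduler and could also migrate memory to DataWarp swap space, so treat this as a sketch of the concept rather than Cray's mechanism.

```python
import os
import signal
import subprocess
import time

# Stand-in for a long-running batch job (any process would do).
job = subprocess.Popen(["python3", "-c", "import time\nwhile True: time.sleep(1)"])

# The interactive window opens: suspend the batch job to free its resources.
os.kill(job.pid, signal.SIGSTOP)
print("batch job suspended; interactive session can use the nodes")
time.sleep(2)  # stand-in for the booked interactive time slot

# The window closes: resume the batch job exactly where it left off.
os.kill(job.pid, signal.SIGCONT)
print("batch job resumed")
time.sleep(1)
job.terminate()
```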
Memory capacity is a limiting factor in brain simulations. Researchers currently envision data sets that will require tens of petabytes of main memory, which already exceeds the capacity of the largest supercomputers on the planet. This requirement will then increase to hundreds of petabytes for a full brain-scale simulation! In short, it will be impossible to store all this data in memory at one time. One solution is for users to interactively select the most interesting data to analyse and visualise, and a small subset of results to store. The simulation codes use so-called “recording devices” to store a selection of results and pass them on for further analysis. The simulation is started with default recording settings. If the initial analysis reveals interesting behaviour, then the experiment can be extended or repeated with detailed recording enabled for a subset of data objects.
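To make the recording-device pattern concrete, here is a minimal sketch written against the PyNEST interface of the NEST simulator used in the HBP; the model names, network size and connection rules are illustrative, not taken from the HBP pilot itself.

```python
import nest  # PyNEST interface to the NEST simulator

# Build a small illustrative network (real HBP models are vastly larger).
neurons = nest.Create("iaf_neuron", 100)
nest.Connect(neurons, neurons, {"rule": "fixed_indegree", "indegree": 10})

# A "recording device": by default, record spikes from a small subset only.
detector = nest.Create("spike_detector")
nest.Connect(neurons[:10], detector)

# First pass with default recording settings.
nest.Simulate(1000.0)

# If the analysis reveals interesting behaviour, extend the experiment
# with detailed recording enabled for a larger set of data objects.
full_detector = nest.Create("spike_detector")
nest.Connect(neurons, full_detector)
nest.Simulate(1000.0)
```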
Step two is to provide developers with the ability to couple simulation, analysis and visualization applications into a single workflow. We are all accustomed to HPC flows in which simulation jobs write their results, either periodically or at the end of the job, and post-processing jobs analyse the data generated. In a coupled workflow the simulation and analytics applications run simultaneously. An analysis job might run on dedicated resources or it might run on the same nodes as the simulation, feeding its results to a visualisation system. Both techniques require a fast method for transferring data between applications and efficient methods of synchronization. The pilot system will have a pool of GPU nodes dedicated to this task.
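As a toy illustration of such coupling, the sketch below runs a "simulation" and an "analysis" as concurrent processes exchanging timesteps through a pipe instead of the filesystem. It stands in for the MPI- or network-based transport a production workflow would use, and every name in it is invented for the example.

```python
import multiprocessing as mp
import numpy as np

def simulate(conn, steps=5):
    """Toy simulation: ships each timestep's field to the analysis process."""
    for step in range(steps):
        field = np.random.rand(1000)  # stand-in for real simulation state
        conn.send((step, field))
    conn.send(None)  # sentinel: simulation finished
    conn.close()

def analyse(conn):
    """Toy in-transit analysis: reduces each timestep as it arrives."""
    while True:
        msg = conn.recv()
        if msg is None:
            break
        step, field = msg
        print(f"step {step}: mean = {field.mean():.4f}")

if __name__ == "__main__":
    recv_end, send_end = mp.Pipe(duplex=False)
    sim = mp.Process(target=simulate, args=(send_end,))
    sim.start()
    analyse(recv_end)  # runs concurrently with the simulation
    sim.join()
```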
Many HPC workflows communicate through the filesystem, especially when large amounts of data need to be transferred between distinct applications in the workflow. This process can be accelerated by providing systems with a tier of nearby, shared, bandwidth-optimized storage. This storage is used for intermediate data, with only the final results — the output of the experiment — being written out to enterprise storage. Cray has developed an innovative, flash-based storage technology, DataWarp, which supports two distinct types of use: private and shared. The private use case provides local high-bandwidth communication between simulation and analytics applications using memory or storage on every node. In the shared use case a pool of flash-based storage servers provides a high-bandwidth filesystem through which large simulation and visualisation jobs running on other nodes can communicate. Both approaches are being evaluated as part of the HBP pilot.
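For a flavour of the shared use case, here is a minimal sketch of how an application might stage intermediate data, assuming the DataWarp convention of requesting an allocation with a #DW batch directive and locating it via the DW_JOB_STRIPED environment variable; the paths, capacity and file names are illustrative.

```python
import os

# Assumes the batch script requested a shared DataWarp allocation, e.g.:
#   #DW jobdw type=scratch access_mode=striped capacity=1TiB
# DataWarp then publishes the mount point in an environment variable.
dw_root = os.environ.get("DW_JOB_STRIPED")
if dw_root is None:
    raise RuntimeError("no DataWarp allocation visible to this job")

# Intermediate data goes to the fast, bandwidth-optimized tier...
snapshot = os.path.join(dw_root, "snapshot_0001.dat")
with open(snapshot, "wb") as f:
    f.write(b"\x00" * 4096)  # stand-in for simulation output

# ...and only the final results are written out to enterprise storage.
final = "/lus/scratch/results/experiment_0001.dat"  # illustrative path
print(f"staging {snapshot} -> {final} at end of experiment")
```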
The third and final element of the HBP work is the ability to steer simulations as they run. This is more a way of thinking about how to perform simulation than a specific technology. The experiment is set up by constructing the brain network wiring in memory, a time-consuming process that results in more than a petabyte of data even with today’s relatively small models. This network is retained in memory while the scientist runs a sequence of virtual experiments or “what if” studies in quick succession, with the results of one run steering the next. Steering data can be fed back to the simulation through socket connections or through the filesystem.
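The feedback path can be as simple as a socket on which the running simulation listens for updated parameters between virtual experiments. Below is a minimal sketch of that loop; the port, parameter format and function names are all invented for illustration.

```python
import json
import socket

HOST, PORT = "localhost", 5555  # illustrative steering endpoint

def run_experiment(params):
    """Stand-in for one virtual experiment on the in-memory brain network."""
    print("running with", params)

def steering_loop():
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind((HOST, PORT))
    server.listen(1)
    params = {"stimulus": 1.0}  # default settings for the first run
    while params is not None:
        run_experiment(params)
        conn, _ = server.accept()      # block until the scientist steers
        data = conn.recv(4096)
        conn.close()
        params = json.loads(data) if data else None  # empty message stops
    server.close()

if __name__ == "__main__":
    steering_loop()
```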
In addition to novel software, the HBP pilot system will preview a number of interesting memory and processor technologies. Check back at the Cray blog for updates on this and other advances at Cray.
If you are interested in learning more about brain simulation work, read the recent paper in the journal Cell by Henry Markram et al.: “Reconstruction and Simulation of Neocortical Microcircuitry.”
December 8, 2015 -
In October 2015 Cray announced the creation of the Cray Europe, Middle East and Africa Research Lab (CERL), based at Cray's EMEA headquarters in Bristol, England. Our investment in Europe is not new (the Cray®-1 and every machine since found a European home), but an explicit focus on research is a big and bold move for our company. The lab positions Cray to work with new and existing customers on projects such as special research and development initiatives, the co-design of customer-specific technology solutions and joint research projects with a wide array of organizations. The research lab acts as Cray's main interface to European development programs, such as Horizon 2020 and the FET Flagship Initiatives.
The Cray EMEA Research Lab is currently engaging with a number of supercomputing centers on important research projects. Cray is working with the Jülich Supercomputing Centre in Germany to deliver a pilot supercomputing system for the Human Brain Project. Cray is also collaborating on various initiatives with other organizations such as the Edinburgh Parallel Computing Centre at the University of Edinburgh in the United Kingdom, the Alan Turing Institute in London and the Swiss National Supercomputing Centre in Lugano, Switzerland.
Cray’s new lab will be 100 percent focused on such collaborative R&D and co-design efforts, providing a single point of contact and a pooling of expertise.
Perhaps the strongest motivation for CERL lies in the difficulty of designing future hardware and software solutions. Current technology trends point to severe disruption in both future hardware and software, though the exact form of those changes is unknown. For example, the standard HPC programming model is widely considered insufficient for exascale system programming, though there is no consensus on its replacement. CERL researchers are investigating new technologies for high-level programming of supercomputers such as domain-specific languages (DSLs), Python and Chapel. We are also investigating new abstractions for data layout optimization and auto-parallelism, efficient usage of new memory technologies (such as high-bandwidth and non-volatile memories), software infrastructures for interactive computing and programming models for high-performance data analytics. These projects are far from mature products. The intent is that these collaborative projects help guide and shape Cray’s actual R&D programs so that future products tackle the deep technical challenges our customers anticipate.
The European funding landscape has never been more vibrant, and with CERL we believe that Cray is well positioned to enter joint funding proposals in high-profile programs such as Horizon 2020. That will certainly open new collaborations and partnerships for Cray.
December 8, 2015 -
Global supercomputer leader Cray Inc. has been awarded a contract to provide a Cray® XC40™ supercomputer to the Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) at the University of Warsaw in Poland.
The six-cabinet Cray XC40 system will be located in ICM's OCEAN research data center, and will enable interdisciplinary teams of scientists to address the most computationally complex challenges in areas such as life sciences, physics, cosmology, chemistry, environmental science, engineering, and the humanities. ICM is Poland's leading research center for computational and data-driven sciences, and is one of the premier centers for large-scale high performance computing (HPC) simulations and big data analytics in Central and Eastern Europe.
"The new Cray XC40 substantially increases our HPC capabilities," said Professor Marek Niezgódka, managing director of ICM. "Contemporary science is undergoing a paradigm shift towards data-intensive scientific discovery. The unprecedented availability of large quantities of data, new algorithms and methods of analysis has created new opportunities for us, as well as new challenges. By selecting the Cray XC40 as our next generation compute platform, we're investing in our ability to conduct cutting edge research for years to come." |
December 8, 2015 -
Global supercomputer leader Cray Inc. has been awarded a contract to provide the Alfred Wegener Institute, Helmholtz Centre for Polar and Marine Research (AWI) with a Cray® CS400™ cluster supercomputer. Headquartered in Bremerhaven, Germany, AWI is one of the country's premier research institutes within the Helmholtz Association, and is an internationally respected center of expertise for polar and marine research.
The contract with AWI is the Company's first for a Cray® CS™ cluster supercomputer featuring the new Intel® Omni-Path Architecture. The system will also include next-generation Intel® Xeon® processors, the follow-on to the Intel® Xeon® processor E5-2600 v3 product family, formerly code-named "Haswell."
Named after the German polar explorer who developed the theory of continental drift, the Alfred Wegener Institute explores nearly all aspects of the earth system -- from the atmosphere to the ocean floor. The Institute's work combines field research in extreme conditions, cutting-edge laboratory equipment, and high performance supercomputing systems to better understand the polar and marine environments, their ecosystems, and their interaction with and influence on the earth's climate system. AWI will use its new Cray CS400 cluster supercomputer to run advanced research applications related to climate and environmental studies, including global circulation models, regional atmospheric models, and other computing-intensive numerical simulations.
"The new Cray HPC System will become the most innovative part of the AWI computing infrastructure and will further strengthen the position of our science and research activities," said Prof. Dr. Wolfgang Hiller, head of the AWI computing centre. "It will enable researchers at our institute to model the earth system and its components more efficiently with higher resolution and accuracy. It is also a new landmark in the long cooperation of the AWI computing centre with Cray, reaching back to the days when a Cray T3E system was one of our major work horses. That system led to major scientific results in modelling and understanding the Antarctic circumpolar current and the exchange of water masses between the world oceans in the Antarctic. Hence, we are looking forward to new exiting scientific research results which will be produced with the new system." |
December 8, 2015 -
AUSTIN, Tex., Nov. 17 — GCS member centre High Performance Computing Center of the University of Stuttgart (HLRS) announced that it has received top honours in the 2015 HPCwire Readers’ and Editors’ Choice Awards, which were presented at the 2015 International Conference for High Performance Computing, Networking, Storage and Analysis (SC15) in Austin, Texas. HLRS has been recognized as HPCwire Readers’ Choice in the category “Best Use of HPC in Automotive” for a project executed by HLRS in cooperation with data storage supplier DataDirect Networks (DDN) and the Automotive Simulation Center Stuttgart (ASCS).
The 2015 HPCwire readers’ and editors’ choice recognized a challenging simulation project run on the HLRS high-performance computing (HPC) system Hornet, a Cray XC40 system. In cooperation with its partner DDN and supported by the ASCS, HLRS ran more than 1,000 car crash simulations within 24 hours.
December 8, 2015 -
At the 2015 Supercomputing Conference in Austin, Texas, global supercomputer leader Cray Inc. announced the Company won six awards from the readers and editors of HPCwire, as part of the publication’s 2015 Readers’ and Editors’ Choice Awards. This marks the 12th consecutive year Cray has been selected for multiple HPCwire awards.
This year’s awards include:
- Readers’ Choice: Best HPC Server Product or Technology (Cray XC40 supercomputer)
- Readers’ Choice: Top 5 Vendors to Watch
- Readers’ Choice: Best Use of HPC Application in Life Sciences (A team from the Joint Genome Institute at Lawrence Berkeley National Lab and researchers from UC Berkeley used 15,000 cores on the Cray XC30 “Edison” supercomputer to accelerate the complete assembly of the human genome, bringing the time down to 8.4 minutes)
- Editors’ Choice: Best HPC Server Product or Technology (Cray XC40 supercomputer)
- Editors’ Choice: Best HPC Software Product or Technology (OpenACC parallel programming model for heterogeneous CPU/GPU systems)
- Editors’ Choice: Best Use of HPC Application in the Oil and Gas Industry (Intel, Cray and Altair collaborated to benchmark a subsea riser simulation solution used to perform advanced subsea CFD, and demonstrated a substantial speedup on a Cray XC40 supercomputer, even at 4,000+ cores, achieving a 20x L/D ratio increase on a single Cray XC cabinet)
“This marks the second year in a row that Cray has received six prestigious HPCwire awards,” said Peter Ungaro, president and CEO of Cray. “We are excited to see that our unique solutions with our partners are receiving awards, and we are proud that our flagship Cray XC40 supercomputer received both Readers’ and Editors’ Choice awards for ‘Best HPC Server Product or Technology.’ I also want to acknowledge the amazing and talented Cray employees who…
December 8, 2015 -
Global supercomputer leader Cray Inc. announced the Company plans to join the OpenHPC Project led by The Linux Foundation. Cray's participation in OpenHPC will focus on making technology contributions that will help to standardize software stack components, leverage open-source technologies, and simplify the maintenance and operation of the software stack for end users.
The OpenHPC Project is designed to create a unified community of key stakeholders across the HPC industry focused on developing a new open-source framework for high performance computing (HPC) environments. The group's goal is to create a stable environment for testing and validation, as well as provide a robust and diverse open-source software stack that minimizes user overhead and associated costs.
"Cray is committed to providing our customers with the highest performing supercomputing systems, while simultaneously embracing the collaborative creation of industry standards that streamline our customers' workflow," said Steve Scott, Cray's senior vice president and chief technology officer. "We believe OpenHPC will deliver efficiency benefits to supercomputing center administrators and programmers, as well as the researchers and scientists that use our systems every day. The open-source community continues to play an important role for Cray, and as part of this effort we plan to open-source components of our industry-leading software environment to OpenHPC."
September 8, 2015 -
In June Cray announced the establishment of its Europe, Middle East and Africa (EMEA) headquarters at the Company's new office in Bristol, United Kingdom. Cray's new EMEA headquarters will serve as a regional base for its EMEA sales, service, training and operations, and as an important development site for worldwide R&D initiatives. The new headquarters will also provide the company with a centralized location for business engagements with new and existing customers. - For further information
September 8, 2015 -
International Computing for the Atmospheric Sciences Symposium (iCAS2015)
Annecy, France; September 13−17, 2015
Philip Brown will present "Cray Earth Sciences Update" on Thursday, September 17 at 14:00
Link to the event
5th European Lustre Administrator and Developer Workshop (LAD2015)
Paris, France; September 22-23, 2015
Presentations by Cray Lustre Experts Chris Horn and Justin Miller
Link to the event
8th European Altair Technology Conference (EATC)
Paris, France; September 29 - October 1, 2015
Link to event
International HPC User Forum
Paris, France; October 12-13, 2015
Presentations by Cray experts
Link to event
September 8, 2015 -
In November 2014 Cray was awarded a contract to provide King Abdullah University of Science and Technology (KAUST) in Saudi Arabia with multiple Cray systems that span the Company's line of compute, storage and analytics products. The Cray XC40 system at KAUST, named "Shaheen II," is 25 times more powerful than its previous system and was recognized at the TOP500 announcement at ISC’15 as the fastest system in the Middle East. The only new entry in the Top 10 on the latest list, at No. 7, Shaheen II achieved 5.536 petaflop/s on the Linpack benchmark, making it the highest-ranked Middle East system in the 22-year history of the list and the first to crack the Top 10. - Source
Current survey and seismic simulation techniques involve incredibly large datasets and complex algorithms that face limits when run on commodity clusters. The oil and gas industry must find ways to use more data more effectively and make better decisions from its analyses. This has resulted in a huge influx of information entering simulations, modelling and other supercomputing tasks.
At Cray, we recognized the importance of data-intensive analysis years ago, and we have been developing solutions that are not built around traditional operating models but instead use innovative techniques to maximize operational efficiency.
Specifically, our HPC systems go beyond adding raw power to operations and instead focus on moving data between supercomputing nodes efficiently. Traditional computational and I/O techniques can be replaced by methods focused on improved interconnect and storage capabilities – alongside traditional computing functionality – to help oil and gas companies stay ahead of the competition. We’ve also designed systems that incorporate GPUs and coprocessors, alternatives to traditional multicore CPUs, which can be leveraged to run today’s most demanding seismic processing workflows.
Recently Cray was awarded a significant contract to provide Petroleum Geo-Services (PGS), a global seismic services company, with a Cray® XC40™ supercomputer and a Cray® Sonexion® 2000 storage system. The five-petaflop Cray XC40 supercomputer and Sonexion storage system provide PGS with the advanced computational capabilities necessary to run highly complex seismic processing and imaging applications. These applications include imaging algorithms for the PGS Triton survey, the most advanced seismic imaging survey ever conducted in the deep waters of the Gulf of Mexico.
Guillaume Cambois, executive vice president of imaging & engineering at PGS, said: “With access to the greater compute efficiency and reliability of the Cray system, we can extract the full potential of our complex GeoStreamer imaging technologies, such as SWIM and CWI.”
For further information
Learn more about Cray solutions for the energy industry
Cray has just been awarded a $6 million contract to provide the Danish Meteorological Institute (DMI) with a Cray® XC™ supercomputer and a Cray® Sonexion® 2000 storage system. DMI has chosen to install the new Cray supercomputer and Sonexion storage system at the Icelandic Meteorological Office (IMO) datacenter in Reykjavik, Iceland for year-round power and cooling efficiency.
In its final configuration, the new Cray XC supercomputer will be ten times more powerful than DMI's previous system, and will provide the Institute with the supercomputing resources needed to produce high quality numerical weather predictions within specified time intervals and with a high level of reliability. - For further information
DMI is the latest addition to a growing list of the world's leading meteorological centers that have chosen to run their complex, data-intensive climate and weather models on Cray supercomputers.
Nicole Hemsoth of The Platform recently covered the reasons behind Cray’s strong market share in weather and climate: “Weather forecasting centers need large, finely-tuned systems that are optimized around the limited, but complex codes these centers use for their production forecasts. […] they need architecture, system, software, and application-centric expertise—a set of demands that few supercomputing vendors have been able to fill to capture massive market share in the way Cray has managed to do over the last ten years” - Source
One example of how applications scale better on Cray systems is the optimization work on the Met Office Unified Model (UM) on the Cray supercomputer “ARCHER,” which yielded speedups of up to 16%. The researchers concluded that the investment in analysis and optimization resulted in performance gains that correspond to the saving of tens of millions of core hours on current climate projects. - For further information
At Cray we understand the demands that weather and climate place on compute, analysis and storage. Cray’s Tiered Adaptive Storage (TAS) is a flexible archival solution for numerical weather prediction (NWP) that can help address the challenges NWP organizations are facing as high-resolution forecasts and larger ensembles drive super-linear growth in data archives.
If you are concerned about the increasing growth of your data archives for weather and climate and want to talk to Cray experts, contact us by email: Veronique Selly
Improving Gas Turbine Performance with HiPSTAR
Gas turbines (GTs) are the backbone of propulsion- and power-generation systems. Given the very large installed base worldwide, any GT efficiency increase has significant potential to reduce fuel burn and environmental impact. For example, General Electric’s installed GT base alone burns $150 billion per year in oil and gas. In power generation, every percentage point increase in combined cycle efficiency would reduce GE’s turbine fuel costs by $1.5 billion per year and cut CO2 emissions per megawatt by 1.5 percent.
HiPSTAR (High-Performance Solver for Turbulence and Aeroacoustics Research) is a highly accurate, structured, multiblock compressible fluid dynamics code in curvilinear/cylindrical coordinates, written in Fortran.
The University of Southampton and GE performed scaling tests of HiPSTAR on the Cray supercomputer ARCHER; a single-block test problem with a total of 1.3 × 10⁹ collocation points has already shown good scaling up to 36,864 cores. Read more
ANSYS® Fluent® Helps Reduce Vehicle Wind Noise
The DDES turbulence model was used to simulate side-window wall-pressure fluctuation noise for the Alfa Romeo Giulietta, and a very accurate transient simulation was performed using ANSYS Fluent CFD software running on a Cray® XC40™ supercomputer.
Solving this demanding problem in a meaningful time frame requires a large number of compute cores. ANSYS Fluent is designed to take advantage of multiple cores using its MPI-based parallel implementation. Efficiently executing in parallel across a large number of cores depends on moving large amounts of data between the cores. The Cray XC40 system is designed to maximize the performance of interprocessor communication, allowing the cores to spend their time computing. The Aries interconnect with its low latency, high bandwidth and adaptive routing, allows effective scaling for demanding simulations regardless of other jobs running on the system. Read more
Trends in Crash Simulations
If you are interested in crash simulations, go to “Top Trends in Crash Simulation Practices,” featuring Paul Du Bois, worldwide consultant and industry expert. Read more
December 1st, 2014 -
FMI Researchers Run the Vlasiator Code on the Cray® XC30™ Supercomputer and Get Outstanding Results
Vlasiator is a groundbreaking space plasma simulation code developed at the Finnish Meteorological Institute (FMI). Specifically, it simulates the dynamics of plasma (ionized gases) in near-Earth space using a finite volume method solver for the hybrid-Vlasov description. In this model, protons are described by a 6-D distribution function embedded in a self-consistent solution of electromagnetic fields in 3-D ordinary space. Using a Cray® XC30™ supercomputer featuring Intel Xeon E5 “Haswell” processors and hosted by the Finnish IT Center for Science Ltd. (CSC), researchers were able to run Vlasiator on an unprecedented 40,000 cores.
For More information
Cray Case Study: Hermit supercomputer used to develop the 3-D film “Maya the Bee” (Biene Maja)
Film production company Studio 100 wanted to create a full-length feature film version of the famous Maya the Bee character in 3-D. For M.A.R.K.13, the animation studio doing the technical work, the task required calculating each of the CGI-stereoscopic film’s 150,000 images twice — once for the perspective of the left eye and once for the right. Additionally, the film’s detail-rich setting, featuring lots of grass, dew drops, sunlight and transparent insect wings, increased the computational complexity. Completing the task using the standard PC computing resources available was impossible within the company’s timeframe and budget.
For More information
SICOS BW helped the film group obtain computing time on HLRS’s Cray® XE6™ “Hermit” supercomputer.
The system is normally used by researchers at the University of Stuttgart in Germany and throughout Europe, as well as by industrial companies for research and development efforts.
For more information
December 1st, 2014 -
At the end of August Cray announced the new CS-Storm cluster, a dense, accelerated cluster supercomputer that offers 250 teraflops in one rack. In the latest TOP500 list the CS-Storm features as a new entry at number 10, with a 3.57 petaflop/s system installed at a U.S. government site. The CS-Storm also proved how powerful and yet energy-efficient it is by placing at number 4 on the Green500 list, where 3 of the 10 greenest systems are Cray systems, both clusters and supercomputers. Since then Cray has announced new products for computing, storing and analyzing:
December 1st, 2014 -
Cray® XC40™: Scaling Across the Supercomputer Performance Spectrum
The Cray® XC40™ supercomputer is a massively parallel processing (MPP) system that leverages the combined advantages of the next-generation Aries interconnect and Dragonfly network topology, Intel® Xeon® processors, integrated storage solutions and major enhancements to the Cray OS and programming environment. As part of the XC series, the XC40 is upgradable to 100 PF. An air-cooled option, the XC40-AC system, provides slightly smaller and less dense supercomputing cabinets with no requirement for liquid coolants or extra blower cabinets. The XC40 system is targeted at scientists, researchers, engineers, analysts and students across the technology, science, industry and academic fields.
Cray® CS400-AC™ Cluster Supercomputer
The Cray® CS400-AC™ system is our air-cooled cluster supercomputer. Designed to offer the widest possible choice of configurations, it is a modular, highly scalable platform based on the latest x86 processing, coprocessing and accelerator technologies from Intel and NVIDIA. Industry standards-based server nodes and components have been optimized for HPC workloads and paired with a comprehensive HPC software stack, creating a unified system that excels at capacity- and data-intensive workloads. At the system level, the CS400-AC cluster is built using blades or rackmount servers. The Cray® GreenBlade™ platform consists of server blades aggregated into chassis. The platform is designed to provide mix-and-match building blocks for easy, flexible configuration at the node, chassis and whole-system level.
Cray® CS400-LC™ Cluster Supercomputer
The Cray® CS400™ cluster supercomputer series is industry standards-based, highly customizable and designed to handle the broadest range of medium- to large-scale simulation and data analysis workloads. The CS400 cluster system is capable of scaling to over 11,000 compute nodes and 11 petaflops. Designed for significant energy savings, the CS400-LC™ system features liquid-cooling technology that uses heat exchangers instead of chillers to cool system components. Along with lowering operational costs, it also offers the latest x86 processor technologies from Intel. Industry-standard server nodes and components have been optimized for HPC and paired with a comprehensive HPC software stack.
December 1st, 2014 -
Cray® Urika-XA™ Pre-integrated, Open Platform for High Performance Analytics
Cray’s Urika-XA™ extreme analytics platform is engineered for superior performance and cost efficiency on mission-critical analytics use cases. Pre-integrated with the industry-leading Hadoop® and Spark™ frameworks, yet versatile enough to serve analytics workloads of the future, the Urika-XA platform provides a turnkey analytics environment that enables organizations to extract valuable business insights. Optimized for compute-heavy, memory-centric analytics, the Urika-XA platform incorporates innovative use of memory-storage hierarchies, including SSDs for Hadoop and Spark acceleration, and fast interconnect, thus delivering excellent performance on emerging latency-sensitive applications.
Cray Adds Cloudera Enterprise to Cray Urika-XA System
For Further information
December 1st, 2014 -
Cray® Sonexion® 2000 Scale-out Lustre® Storage System
The Cray® Sonexion® 2000 scale-out Lustre® storage system helps you simplify, scale and protect your data for HPC and supercomputing, rapidly delivering precision performance at scale with the fewest components. The system stores over 2 petabytes in a single rack and scales from 7 GB/s to 45 GB/s in a single rack — and to over 1.7 TB/s in a single Lustre file system. The system now includes a new type of software-based data protection called GridRAID, which makes rebuilds up to 3.5 times faster than conventional hardware RAID.
For more information
Do you need to speak to a Cray representative? Send an email to info@cray.com
December 1st, 2014 -
At the 2014 Supercomputing Conference in New Orleans, Louisiana, global supercomputer leader Cray Inc. (NASDAQ: CRAY) announced the Company won six awards from the readers and editors of HPCwire, as part of the publication's 2014 Readers' and Editors' Choice Awards. This marks the 11th consecutive year Cray has been selected for multiple HPCwire awards.
For more Information
December 1st, 2014 -
Green500: Piz Daint takes the number 9 position in the Green500 list
December 1st, 2014 -
Global supercomputer leader Cray Inc. (NASDAQ: CRAY) today announced the Company has been awarded a contract to provide the Met Office in the United Kingdom with multiple Cray® XC™ supercomputers and Cray® Sonexion® storage systems. Consisting of three phases spanning multiple years, the $128 million contract expands Cray's significant presence in the global weather and climate community, and is the largest supercomputer contract ever for Cray outside of the United States.
For more information
September 5, 2014 -
CRESTA (Collaborative Research into Exascale Systemware, Tools & Applications) is a collaborative research effort funded by the European Union exploring how to meet the exaflop challenge.
A key enabler of CRESTA’s work is access to the many large Cray supercomputers installed at the CRESTA partner sites in Europe. Below are some recent examples of customer projects that benefited from the CRESTA partnership:
IFS: Preparing ECMWF’s Integrated Forecasting System for Exascale
ECMWF uses the Integrated Forecasting System (IFS) model to provide medium-range weather forecasts to its 34 European member states. Today’s simulations use a global grid with a 16 km resolution, but ECMWF expects to reduce this to a 2.5 km global weather forecast model by 2030 using an exascale-sized system. To achieve this, IFS needs to run efficiently on a thousand times more cores. The CRESTA improvements have already enabled IFS to use over 200,000 CPU cores on Titan. This is the largest number ever used for an operational weather forecasting code and represents the first use of the pre-exascale 5 km resolution model that will be needed for medium-range forecasts in 2023. This breakthrough came from using new programming models to eliminate a performance bottleneck. For the first time, the Cray Compiler Environment (CCE) was used to nest Fortran coarrays within OpenMP, absorbing communication time into existing calculations.
For more information: http://www.cray.com/Assets/PDF/products/xc/XC30-ECMWF-IFS-0614.pdf
HemeLB - HPC is brain surgery: Cray® XC30™ supercomputer shows future clinical benefits as development tools partner helps application reach petascale.
The HemeLB research group at University College London (UCL) develops software to model intracranial blood flow, investigating blood flow patterns and the formation of aneurysms. The team uses ARCHER, the U.K.’s flagship Cray XC30 system with 3,008 nodes and a total of 72,192 cores, to accelerate its science. A sophisticated communication approach based on MPI allows HemeLB to run on large supercomputers, so the project fits perfectly into CRESTA, which stimulates the development of technologies leading to the next generation of supercomputers at the exascale. The HemeLB application initially crashed when using 50,000 processor cores. The Allinea DDT parallel debugger handled all 50,000 application processes simultaneously, helping HemeLB scale to 50,000 ARCHER cores. Afterwards, Allinea Software’s profiling tool, Allinea MAP, led to an adjustment that avoided an I/O bottleneck, enabling the application to scale successfully and improving performance in those cases by over 25 percent.
For more information: http://www.cray.com/Assets/PDF/products/xc/Case-Study-Allinea-Brain-Surgery.pdf
Preparing for Energy Efficiency at Exascale
CRESTA members Cray and the Technische Universität Dresden presented a paper at CUG14 titled “User-level Power Monitoring and Application Performance on Cray XC30 Supercomputers,” which was runner-up for best paper at the conference. The study is a good example of how to begin a meaningful co-design process for energy-efficient exascale supercomputers and applications.
New power measurement and control features introduced in the Cray XC supercomputer range for both system administrators and users can be utilized to monitor energy consumption, both for complete jobs and also for application phases. This information can then be used to investigate energy efficient application placement options on Cray XC30 architectures, including mixed use of both CPU and GPU on accelerated nodes and interleaving processes from multiple applications on the same node.
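As a rough illustration of the user-level side, the sketch below samples the per-node energy counter that Cray XC systems expose through sysfs and attributes the difference to an application phase. The /sys/cray/pm_counters path follows the interface described in the paper; the surrounding script and the phase itself are invented for the example.

```python
import time

PM_ENERGY = "/sys/cray/pm_counters/energy"  # cumulative node energy, in joules

def read_energy_joules():
    """Read the node's energy counter; the file format is assumed '<value> J'."""
    with open(PM_ENERGY) as f:
        return int(f.read().split()[0])

def measure_phase(label, fn):
    """Report wall time and node energy consumed by one application phase."""
    e0, t0 = read_energy_joules(), time.time()
    result = fn()
    e1, t1 = read_energy_joules(), time.time()
    print(f"{label}: {t1 - t0:.2f} s, {e1 - e0} J on this node")
    return result

if __name__ == "__main__":
    # Stand-in for a compute phase of a real application.
    measure_phase("compute", lambda: sum(i * i for i in range(10**7)))
```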
Conference Proceedings are available at the CUG site but if you would like to know more about the subject please send us an email to crayinfo@cray.com.
September 5, 2014 -
The Cray CS-Storm cluster is an accelerator-optimized system that consists of multiple high-density multi-GPU server nodes, designed for massively parallel computing workloads.
Each CS-Storm cluster rack holds up to 22 2U rackmount CS-Storm server nodes. Each server integrates eight accelerators and two Intel® Xeon® processors, delivering 246 GPU teraflops of compute performance in one 48U rack. The system is available with a comprehensive HPC software stack including tools that are customizable to work with most open-source and commercial compilers, schedulers and libraries.
The CS-Storm system provides performance advantages for users who collect and process massive amounts of data from diverse sources such as satellite images, surveillance cameras, financial markets and seismic processing. It is well suited for HPC workloads in the defense, oil and gas, media and entertainment, life sciences and business intelligence sectors. Typical applications include cybersecurity, geospatial intelligence, pattern recognition, seismic processing, rendering and machine learning.
For more information, see http://www.cray.com/Products/Computing/CS/Optimized_Solutions/CS-