In this video from ISC 2018, John Bent and Jay Lofstead describe how the IO500 benchmark measures storage performance in HPC environments. The second IO500 list was revealed at ISC 2018 in Frankfurt, Germany.
The IO500 benchmark suite is designed to be easy to run, and the community has multiple active support channels to help with any questions. Please submit; we look forward to seeing many of you at ISC 2018! Submissions of all sizes are welcome: the site offers customizable sorting, so it is possible to submit from a small system and still achieve a very good per-client score, for example. Additionally, the list is about much more than raw rank; every submission helps the community by contributing to a wider published corpus of data. More details below.
Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017 and published its first list at SC17. The need for such an initiative has long been recognized within High Performance Computing; however, defining appropriate benchmarks proved challenging. Despite this, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking.
The goals of the benchmark suite are multi-fold:
* Maximizing simplicity in running the benchmark suite
* Encouraging complexity in tuning for performance
* Allowing submitters to highlight their “hero run” performance numbers
* Forcing submitters to simultaneously report performance for challenging IO patterns
Specifically, the benchmark suite includes hero runs of both IOR and mdtest, configured however the submitter chooses so as to maximize performance and establish an upper bound. It also includes IOR and mdtest runs with tightly prescribed parameters intended to establish a lower bound. Finally, it includes a namespace search, a highly sought-after capability in HPC storage systems that has historically not been well measured. Submitters are encouraged to share their tuning insights for publication.
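To make the structure of the suite concrete, here is a minimal, hypothetical sketch (not the official IO500 harness) of how the two hero runs might be launched with MPI. The binary paths, process count, working directory, and transfer/block sizes are placeholder assumptions to be tuned for a given system; the real suite additionally runs the prescribed-parameter cases, the namespace search, and the score calculation.

```python
import shlex
import subprocess

# Placeholder site settings -- adjust for your system (assumptions, not IO500 defaults).
MPIRUN = "mpirun"
NPROCS = 16
IOR_BIN = "/path/to/ior"        # assumed install location of IOR
MDTEST_BIN = "/path/to/mdtest"  # assumed install location of mdtest
WORKDIR = "/mnt/scratch/io500-demo"

def run(cmd):
    """Print and execute one benchmark command, raising on failure."""
    print("Running:", " ".join(shlex.quote(c) for c in cmd))
    subprocess.run(cmd, check=True)

# "Hero" IOR run: large sequential transfers, one file per process,
# tuned freely by the submitter (sizes below are placeholders).
ior_hero = [
    MPIRUN, "-np", str(NPROCS), IOR_BIN,
    "-w", "-r",              # write phase, then read phase
    "-F",                    # file per process
    "-t", "2m", "-b", "1g",  # transfer and block size (site-specific tuning)
    "-o", f"{WORKDIR}/ior_hero/data",
]

# "Hero" mdtest run: metadata create/stat/delete on many small files.
mdtest_hero = [
    MPIRUN, "-np", str(NPROCS), MDTEST_BIN,
    "-n", "10000",           # items per process (placeholder)
    "-F",                    # operate on files only
    "-d", f"{WORKDIR}/mdtest_hero",
]

if __name__ == "__main__":
    run(ior_hero)
    run(mdtest_hero)
```

In the actual list, the bandwidth and metadata results from runs like these are combined with the prescribed-parameter and namespace-search results into the single IO500 score used for ranking.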
Learn more: http://io500.org
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
"In this video from ISC 2018, John Bent and Jay Lofstead describe how the IO500 benchmark measures storage performance in HPC environments. The second IO500 list was revealed at ISC 2018 in Frankfurt, Germany.
The IO500 benchmark suite is designed to be easy to run and the community has multiple active support channels to help with any questions. Please submit and we look forward to seeing many of you at ISC 2018! Please note that submissions of all size are welcome; the site has customizable sorting so it is possible to submit on a small system and still get a very good per-client score for example. Additionally, the list is about much more than just the raw rank; all submissions help the community by collecting and publishing a wider corpus of data. More details below.
Following the success of the Top500 in collecting and analyzing historical trends in supercomputer technology and evolution, the IO500 was created in 2017 and published its first list at SC17. The need for such an initiative has long been known within High Performance Computing; however, defining appropriate benchmarks had long been challenging. Despite this challenge, the community, after long and spirited discussion, finally reached consensus on a suite of benchmarks and a metric for resolving the scores into a single ranking.
The multi-fold goals of the benchmark suite are as follows:
* Maximizing simplicity in running the benchmark suite
* Encouraging complexity in tuning for performance.
* Allowing submitters to highlight their “hero run” performance numbers Forcing submitters to simultaneously report performance for challenging IO patterns.
Specifically, the benchmark suite includes a hero-run of both IOR and mdtest configured however possible to maximize performance and establish an upper-bound for performance. It also includes an IOR and mdtest run with highly prescribed parameters in an attempt to determine a lower-bound. Finally, it includes a namespace search as this has been determined to be a highly sought-after feature in HPC storage systems that has historically not been well-measured. Submitters are encouraged to share their tuning insights for publication."
Learn more: http://io500.org
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter