Main Page

From OVISWiki


OVIS is a modular system for HPC data collection, transport, storage, analysis, visualization, and response. The OVIS project seeks to enable more effective use of high-performance computing clusters via greater understanding of applications' use of resources, including the effects of competition for shared resources; discovery of abnormal system conditions; and intelligent response to conditions of interest.

=== Data Collection, Transport, and Storage ===

The Lightweight Distributed Metric Service (LDMS) is the OVIS data collection and transport system. LDMS provides capabilities for lightweight, run-time collection of high-fidelity data. Data can be accessed on node or transported off node, and LDMS supports a variety of storage options for the collected data. A rough configuration sketch follows.
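As an illustration only (a sketch, not taken verbatim from the OVIS documentation: the plugin name, identifiers, and intervals below are assumptions that vary by release), a sampler daemon collecting memory metrics once per second might be configured roughly as follows:

<pre>
# Hypothetical ldmsd sampler configuration (sketch).
# Load a sampler plugin, describe the metric set it publishes,
# and start periodic sampling (interval/offset in microseconds).
load name=meminfo
config name=meminfo producer=node001 instance=node001/meminfo component_id=1
start name=meminfo interval=1000000 offset=0
</pre>

The daemon holding this set would typically listen on a transport (for example sock or RDMA) so that on-node consumers or remote aggregators can pull the data; consult the release documentation for exact options.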

=== Analysis and Visualization ===

OVIS data can be used for understanding system state and resource utilization. The [http://github.com/ovis-hpc/ovis current release version] of OVIS enables in-transit calculation of functions of metrics at an aggregator before storing or forwarding data to additional consumers. A more flexible analysis and visualization pipeline is in development.
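As a rough example (again a sketch under assumed names: the host, port, schema, and CSV store below are placeholders, and the plugins that compute functions of metrics in transit have their own configuration documented with the release), an aggregator daemon can be configured to pull sets from sampler daemons and hand them to a storage plugin before data is forwarded to additional consumers:

<pre>
# Hypothetical ldmsd aggregator configuration (sketch).
# Connect to a sampler daemon and pull its metric sets...
prdcr_add name=node001 host=node001 xprt=sock port=10444 type=active interval=20000000
prdcr_start name=node001
# ...refresh the sets once per second...
updtr_add name=update_all interval=1000000 offset=100000
updtr_prdcr_add name=update_all regex=.*
updtr_start name=update_all
# ...and write the meminfo schema to CSV at the aggregator.
load name=store_csv
config name=store_csv path=/var/log/ldms/csv
strgp_add name=meminfo_csv plugin=store_csv container=meminfo schema=meminfo
strgp_prdcr_add name=meminfo_csv regex=.*
strgp_start name=meminfo_csv
</pre>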

OVIS has been used to investigate the evolution of network congestion in large-scale systems.

[[Image:BW_Cube_still.png|thumb|300px|Investigation of network congestion evolution on NCSA's Blue Waters Gemini Network (27,648 compute nodes)]]

Additional features in development include associating application phases and performance with system state data.


=== Log Message Analysis ===

OVIS analyses include the [[Baler_public|Baler]] tool for log message clustering.

=== Decision Support ===

The OVIS project includes research on intelligent responses to conditions of interest. This includes dynamic application (re-)mapping based on application needs and resource state, as well as invocation of resiliency responses upon discovery of potential pre-failure or abnormal conditions.

=== Collaborative Analysis Support ===

Shaun, a cluster supporting collaboration in HPC data analytics, is coming soon.