V4 Store Status

This website contains archival information. For updates, see https://github.com/ovis-hpc/ovis-wiki/wiki

amqp

Based on librabbitmq 0.8; no changes planned beyond tracking librabbitmq updates.

csv

Plans

  • Remove the hard-coded limit on the number of instances.
  • Extend flush options to manage latency and debugging issues.

Ideas under discussion

Timestamping of set arrival at the store. – under consideration
  • Possible inclusion of final agg time in the writeout
  • It will cost virtually nothing (but storage) to add an option to include a store-processing-date stamp.
  • This has been requested by (possibly among others) users concerned with data validation for collective (multinode) statistics. This may be better addressed by changes elsewhere in LDMS. E.g., the LDMS core might provide a service that collects the current RTC from all nodes in the aggregation over as short a time as possible and publishes it as a data set (the sampler API does not support this collection activity). A custom “store” could generate a warning if any host clock is off by more than the length of time the scan took (or a small multiple thereof). The usage model of this on Cray’s odd clock systems is unclear.
Duplicate set instance detection (the same set instance arriving by distinct aggregation paths is silently stored twice).

This can be handled by individual stores keeping a hash of the set instance names and the last timestamp (or the last N timestamps, in a network of aggregators with potentially out-of-order delivery) of data stored for that set instance. Any set whose timestamp is detected as already stored is a duplicate. As LDMS is also in the process of adding multiple set instance transport and collection, putting this logic in individual stores is redundant and error prone. The ldms core can/should ensure delivery of exactly one copy from a given origin and date to the stores; this requires a bit of state per store per set instance, and while we’re at it this state should include a void pointer for use of the store plugin. This would eliminate most lookups stores may be performing.
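A minimal sketch of the per-store bookkeeping described above, using hypothetical names (dup_entry, dup_check) and a plain linked list in place of a real hash; this is illustrative only, not existing store code:

/* Hypothetical duplicate-sample bookkeeping; not part of any existing store. */
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

struct dup_entry {
        char *inst_name;            /* set instance name */
        struct timeval last_ts;     /* timestamp of the last stored sample */
        struct dup_entry *next;
};

static struct dup_entry *dup_list;  /* per-store cache; a real store would hash */

/* Return 1 if (inst_name, ts) was already stored; otherwise record it and return 0. */
int dup_check(const char *inst_name, const struct timeval *ts)
{
        struct dup_entry *e;
        for (e = dup_list; e; e = e->next) {
                if (strcmp(e->inst_name, inst_name))
                        continue;
                if (e->last_ts.tv_sec == ts->tv_sec &&
                    e->last_ts.tv_usec == ts->tv_usec)
                        return 1;   /* same sample arrived via another path */
                e->last_ts = *ts;   /* new sample for a known instance */
                return 0;
        }
        e = calloc(1, sizeof(*e));
        if (!e)
                return 0;           /* on allocation failure, just store it */
        e->inst_name = strdup(inst_name);
        e->last_ts = *ts;
        e->next = dup_list;
        dup_list = e;
        return 0;
}

Keeping the last N timestamps instead of one would tolerate the out-of-order delivery noted above; doing this once in the core rather than in every plugin is the point of the void-pointer proposal below.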

Conflicting schema detection (set instances with a schema of the same name but different content, resulting in storage loss or silent error).
  • Schema conflict detection can be reduced to computing a metadata checksum at set creation and performing consistency checks in any storage plugin (such as csv) where a full-set consistency constraint exists (a sketch follows this item).
  • Storage policies/transforms which cherry-pick named metrics must search instances by metric name every time (until ldms starts managing a void pointer per policy instance per set instance) or must also enforce full-set consistency.
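A sketch of the check a full-set-consistency store could make at store time, where ldms_metadata_checksum() stands in for a core-provided checksum computed at set creation; it is an assumed, not an existing, API:

/* Hypothetical schema-consistency check at store time. */
#include <stdint.h>
#include <stdio.h>

uint64_t ldms_metadata_checksum(const void *set);  /* assumed core service, not existing API */

struct schema_state {
        char name[64];      /* schema name */
        uint64_t cksum;     /* checksum recorded when the CSV file was opened */
};

/* Return 1 if the incoming set matches the schema this file was opened for. */
int schema_consistent(struct schema_state *st, const void *set)
{
        uint64_t c = ldms_metadata_checksum(set);
        if (c != st->cksum) {
                fprintf(stderr, "store_csv: schema '%s' content changed "
                        "(checksum %llx != %llx); refusing to mix rows\n",
                        st->name, (unsigned long long)c,
                        (unsigned long long)st->cksum);
                return 0;
        }
        return 1;
}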
Handle new schemas, delete old schemas. Needed for dvs and perhaps papi as well. Handle N schemas.
  • See also conflicting schema and duplicate set detection. This all gets very easy if we stop looking at store plugins as singletons without per-set-instance state.
  • Check handling of start/stop/load/unload. Multiple-instance support?
File permissions and naming
  • Setting the file owner/permissions at create time has been added in 3.4.7 and 4.x.
  • A YYYYMMDD naming convention is also wanted instead of the epoch suffix.
  • CSV has code to handle user-defined templates for filenames at close/rename; it can be extended to file creation (see the sketch after this item).
  • Users want this ability at the start of the file, not just at close/rename.
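A sketch of what date-stamped naming at create time could look like; the path layout and helper name are illustrative, not the store_csv implementation:

/* Illustrative date-stamped file naming at create time (YYYYMMDD instead of epoch). */
#include <stdio.h>
#include <time.h>

/* Build e.g. "<dir>/<schema>.20180704" into out. */
void csv_dated_name(char *out, size_t len, const char *dir, const char *schema)
{
        char stamp[16];
        time_t now = time(NULL);
        struct tm tm;
        localtime_r(&now, &tm);
        strftime(stamp, sizeof(stamp), "%Y%m%d", &tm);
        snprintf(out, len, "%s/%s.%s", dir, schema, stamp);
}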
Rollover at sub-day intervals – option 1 is thought to be sufficient for now. A fixed file name would also align with production system usage, so it should be considered.
  • This could be done immediately just by using rollover option 1 with an interval less than 86400 seconds. This would drift unless we add some interval/offset semantics (but in seconds). This has been implemented as rolltype 5 (see the config sketch after this item).
  • Presumably the user wants something more cron-like (2 minutes past every third hour since midnight). This would entail supporting a config file with cron syntax and the names of schemas to roll on different schedules.
  • It might be better to just refactor the stores to work on fixed filenames and accept a command (with template) via ldmsctl to perform a log-close-and-rename-and-reopen. Actual cron or logrotate can then be used in the standard ways admins know.
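For reference, a sub-day roll using the existing interval-based rolltype would be configured along these lines (path and interval values are illustrative):

config name=store_csv path=/XXX/csv altheader=1 rolltype=1 rollover=14400

rolltype 5 provides the interval/offset alignment described above.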
LDMS-core-managed state pointers (void *) per client (transform policy/store policy)
  • Lack of these is making the store and transform APIs very difficult to finish.
  • When a set instance is assigned for use by a plugin instance (of which there may be more than one per plugin), a void * storage slot that the plugin is allowed to populate must also be associated with the (set-instance, plugin-instance) pair.
  • The plugin can hang anything it needs off that void * (udata of the right flavor); see the sketch after this list.
    • The most obvious being cached data needed to resolve the problems listed above: the generation numbers and checksums of the schema and instance last seen by the store instance for that set instance.
    • The next most obvious being a cache of the metadata generation number and the metric indices wanted for the transform or policy, for schemas which might vary in content under the same name.
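A minimal sketch of what such core-managed per-(set instance, plugin instance) state might carry; the type, its fields, and the store signature shown in the comment are assumptions, not existing LDMS API:

/* Hypothetical core-managed state for one (set instance, plugin instance) pair. */
#include <stdint.h>
#include <sys/time.h>

struct client_set_state {
        void *plugin_data;       /* slot the store/transform plugin may populate */
        uint64_t meta_gn;        /* metadata generation number last seen */
        uint64_t schema_cksum;   /* schema checksum last seen */
        struct timeval last_ts;  /* timestamp of the last sample delivered */
};

/* The core would hand this state back to the plugin on every store/transform
 * call, e.g. (assumed, not existing, signature):
 *   int store(struct ldmsd_store *s, void *set, struct client_set_state *st);
 * so the plugin can cache headers, metric indices, etc. in st->plugin_data
 * instead of hashing on the set instance name at every call. */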

Problems observed or suspected & tests to devise

Duplicate stored data — observed. Need to understand NOW (SEE MORE WITHIN).
  • On the L1 (I believe), with the fault injection testing. What happened internally to cause this?
  • UPDATE: cannot find this. I think this may have been because of the tail -f with the injections. Try to reproduce this.
  • Tests:
    1. Not clear that the fault injection will be a reliable test. Is there something we can check with connectivity that might account for this?
Ben's 0's issue: see Redmine 383.
Will run out of the ability to store sets due to the hardwired maximum number of schemas in the store structure – suspected (known). Needed NOW.
  • Even in the current store API, the “N schemas” limitation is easy to fix; we can cut and paste the hash table/idx fix from store_flatfile, which was fixed a while back (a stand-in sketch follows the tests below).
  • Applies at any level; see the sections above about these issues.
  • Note the relationship with the special keys array that also has to be kept.
  • Tests:
    1. Make many sets — is there a warning/failure at some point? (should be)
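As a stand-in for the store_flatfile-style fix, a sketch of replacing a hardwired schema array with a growable table keyed by schema name; the real fix would reuse the existing idx/hash code, and all names here are illustrative:

/* Illustrative growable schema table replacing a hardwired MAX_SCHEMA array. */
#include <stdlib.h>
#include <string.h>

struct schema_slot {
        char *schema;      /* schema name */
        void *handle;      /* per-schema store handle (files, header state, ...) */
};

struct schema_table {
        struct schema_slot *slots;
        size_t n, cap;
};

/* Find or create the slot for a schema; returns NULL only on allocation failure. */
struct schema_slot *schema_get(struct schema_table *t, const char *schema)
{
        size_t i;
        for (i = 0; i < t->n; i++)
                if (!strcmp(t->slots[i].schema, schema))
                        return &t->slots[i];
        if (t->n == t->cap) {
                size_t ncap = t->cap ? 2 * t->cap : 8;
                struct schema_slot *ns = realloc(t->slots, ncap * sizeof(*ns));
                if (!ns)
                        return NULL;
                t->slots = ns;
                t->cap = ncap;
        }
        t->slots[t->n].schema = strdup(schema);
        t->slots[t->n].handle = NULL;
        return &t->slots[t->n++];
}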
Will run into unnecessary bloat and lose the ability to store sets due to the inability to drop schemas out of the store structure – suspected (known) – needed SOON.
  • Applies at any level; see the sections above about these issues, and also the relationship with the special keys array that also has to be kept.
  • Tests:
    1. Make a set that will go obsolete: what notification does the store policy and plugin get? is there an explicit close, and if so does the store plugin get notification? what info is explicitly retained in the store?
How the store behaves under the set group behavior – don't know – needed SOON.

Tests:

  1. Can we create a group and see whether those arrive as individual calls to the store plugin, or as something else?
We believe that storing a subset of the metrics for a store is working (the metric array has just the ones specified); however, if the schemas are erroneously different there is a problem.
  • This functionality was supposed to be at the storage policy level, yes? We just need to be able to define more than one policy per schema, which should be possible since policy instances have their own names.
  • Tests:
    1. Try this. Note that this will be the same schema, but a different number of metrics
Is dynamically adding metrics to a set, or to the store for a set, supposed to be supported? If so, that is not supported in the store – suspected.
  • The inability to easily get the metrics to print the header at the time of the store open means we end up getting them and printing the header in the store call itself. This is then not re-checked afterwards, to avoid continuously incurring overhead.
  • Alternatively, if the store plugin did not have to keep state but rather it came with the set (see above) this would be easier, but would it bloat the sets, how would multiple storage instructions be handled, and what would all this entail for communications with the sets to get that info there dynamically? Even for static configurations, what info would make sense to supply re the store information as part of the set initialization, as opposed to at the aggregator, especially for multiple storage options?
  • Tests:
    1. Do the dynamic addition and see what info we get for that. Note that this will be the same schema, but different number of metrics.
Is there other dynamism that should be handled? E.g., metadata generation number changes?
  • Don’t know what this does in the innards of the policy.
  • We do know that there is no handling for reconfiguring in the store plugin. Note that if there is an unexpected death, there would not be a clean way for the plugin to know that there has been a change (the metadata generation number would be the same).
What happens to store_function_csv under the same circumstances? – don't know.

Tests:

  1. Try it

Rollover

Keep here notes about why we do our own rollover management.

  • rollover management – why couldn’t we use logrotate to handle our files? There is some recollection of problems with performance and file pointers when writing to the file while logrotate was handling the roll.
    • logrotate has several options:
      • make daemon respond to signals (tricky: config state, threads)
      • stop and restart daemon around file moves (lossy)
      • copytruncate (lossy)
      • do what we do now without logrotate: either
        • close file and move to spool, or
        • use file open date stamp in file name.
    • There is the problem of what to do with file ingestion to other storage systems
      • Tailing daemons will hold on to closed file handles unless they have a notification option to reopen
      • Output pipes/fifos don’t need rotating

Proposal

This has been done, including the backwards compatibility, except where helpful diagnostics are provided. See https://gitlab.opengridcomputing.com/ovis/ovis/merge_requests/1057; Ben and Ann will decide between them.

  • An optional options file specified as a config argument to store_csv. This holds, a priori, all the arguments for a schema (e.g., altheader, rollover information) that are currently specified on the config line.
  • For backward compatibility, we can keep those as options on the config line, with the new config file argument as an additional, conflicting option.
  • However, config_custom will be dropped as a store_csv option. This will only affect people who are using it now (who are they?) – JIM
  • This will allow:
    • Dropping the storek struct and the fixed-size special keys arrays, IFF we can iterate through the store_idx (and not have to know the names) – which we can, because we have the key in the csv_store_handle, which is the void*!
    • The above also then drops custom config out of the configuration options.
    • Dropping items out of csv_store_keys when close_store is called (so we no longer have to worry about keeping the state of the special keys around). Is this called? Cleanup of all allocated store structures is handled correctly in a normal daemon shutdown.
  • And it will require:
    • reading the config file and creating the csv_store_handle at the time of open_store (where the config is currently done)
    • determining the config file syntax so that it looks like the following (a hypothetical example appears after this list):
<plugin_name> <arg/value pairs> # for example: store_csv altheader=1 rollover=3 # e.g. all the defaults
<container> <schema> <arg/value pairs> # for any of the custom ones

where the config lines will look like the following (this only adds the optional, and possibly mutually exclusive, conffile argument):

config name=store_csv <altheader=1 etc> OR <opt_file=file.txt>
strgp_add name=csv_mem_policy plugin=store_csv container=loadavg_store schema=loadavg
  • we keep the store_idx
  • note that the header file still cannot be built until the store call
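Following the proposed syntax above, a hypothetical opt_file might contain (plugin defaults first, then per container/schema overrides; values are illustrative only):

store_csv altheader=1 rollover=3
loadavg_store loadavg altheader=0
meminfo_store meminfo rollover=5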

Also:

  • store_function_csv – the fixed array can also be removed (note it was only for the rollover, not for the custom config).
    • this would keep the same config it currently has

We are also interested in:

  • removing the optional print for user data. UPDATE: this still exists and is metadata, but the size and setting capabilities are limited. We will keep it as an optional print for now.
  • old config check functions (e.g., use of id_pos, set) from when we went from v2 to v3. UPDATE: Alternative: an allowed-keyword lint check in a library is included in 1057. OK.
  • a call next week to discuss putting a checksum for the schema into the set, to be checked at the store.
  • general topic regarding store revamp requirements/options/considerations:
    • How do we allow a single store instance to say that it wants to receive all available sets independent of schema? E.g., flatfile has no reason to care about schema.
    • Can we define a protocol whereby store instances or transform instances can hang a void* off each set instance it receives without any hashing?

SOS

SOS is in rapid development, and the corresponding store is tracking it.

flatfile

Production use of the flatfile store has led to a number of requested changes (below). These changes are sufficiently complicated that an alternately named store (store_var) is in development. The flatfile store will remain unchanged, so that existing production script use can continue per site until admins have time to switch.

  • Flush controls to manage latency.
  • Output only on change of metric.
    • Optionally with heartbeat metric output on specified long interval.
  • Output only of specific metrics.
  • Excluding output of specific metrics.
    • Including producername, job id, and component id, for single-job and single-node use cases.
  • Output of rate, delta, or integral delta values (see the sketch after this list).
  • Periodic output at frequency lower than arrival, optionally with selectable statistics on suppressed data.
    • Statistics: min, max, avg, miss-count, nonzero-count, min-nonzero, sum, time-weighted sum, dt
  • Metric name aliasing.
  • Time stamps rounded to the nearest second (when requested by the user, who is also using long intervals).
  • Check and log message (once) if a rail limit is observed.
  • Rename file after close/rollover following a template string.
  • Generation of splunk input schema.
  • Handling of array metrics.
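As one small illustration of the derived-value items in the list above, a sketch of delta and rate computation from consecutive samples; the names and the (absent) counter-reset handling are assumptions, not store_var's actual design:

/* Illustrative rate/delta computation between consecutive samples of a metric. */
struct prev_sample {
        double value;     /* previous raw value */
        double time;      /* previous sample time, in seconds */
        int valid;        /* 0 until the first sample has been seen */
};

/* Return 1 and fill *delta and *rate when a usable previous sample exists. */
int rate_delta(struct prev_sample *p, double value, double time,
               double *delta, double *rate)
{
        int ok = 0;
        if (p->valid && time > p->time) {
                *delta = value - p->value;   /* may be negative on a counter reset */
                *rate = *delta / (time - p->time);
                ok = 1;
        }
        p->value = value;
        p->time = time;
        p->valid = 1;
        return ok;
}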