V4Store Status

The primary LDMS store plugins support:

amqp

Based on librabbitmq 0.8; no changes planned. Tracking of librabbitmq updates is expected.

csv

Plans:

  • Remove the hard-coded limit on the number of instances.
  • Extend flush options to manage latency and aid debugging (see the sketch below).
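
As an illustration of the flush-option idea above, here is a minimal sketch, in C, of flushing the CSV stream only when a configurable interval has elapsed. The struct and function names are hypothetical; this is not the store_csv implementation, and a "flush interval" option of this kind is an assumption.

 #include <stdio.h>
 #include <time.h>

 /* Hypothetical per-instance flush state; a configurable flush interval is an
  * assumption here, not an existing store_csv option. */
 struct csv_flush_state {
     FILE *file;
     time_t flush_interval;  /* seconds between forced flushes; 0 = flush after every store call */
     time_t last_flush;
 };

 /* Called after each row is written: flushes only when the interval has
  * elapsed, trading output latency against the cost of frequent flushes. */
 static void csv_maybe_flush(struct csv_flush_state *s)
 {
     time_t now = time(NULL);
     if (s->flush_interval == 0 || now - s->last_flush >= s->flush_interval) {
         fflush(s->file);
         s->last_flush = now;
     }
 }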

Ideas under discussion:

  1. Timestamping of set arrival at the store. - under consideration
    • Possible inclusion of final agg time in the writeout
    • It will cost virtually nothing (but storage) to add an option to include a store-processing-date stamp.
    • This has been requested by (possibly among others) users concerned with data validation for collective (multinode) statistics. It may be better addressed by changes elsewhere in LDMS, e.g. the LDMS core might provide a service that collects the current RTC from all nodes in the aggregation over as short a time as possible and publishes it as a data set (the sampler API does not support this collection activity). A custom "store" could then generate a warning if any host clock is off by more than the length of time the scan took (or a small multiple thereof). The usage model of this on Cray's odd clock systems is unclear.
  2. Duplicate set instance detection (the same set instance arriving by distinct aggregation paths is silently stored twice).
    • This can be handled by individual stores keeping a hash of the set instance names and the last timestamp (or N timestamps, in a network of aggregators with potentially out-of-order delivery) of data stored for each set instance; any set with a timestamp already stored is a duplicate. However, as LDMS is also in the process of adding multiple set instance transport and collection, putting this logic in individual stores is redundant and error prone. The LDMS core can/should ensure delivery of exactly one copy from a given origin and date to the stores; this requires a bit of state per store per set instance, and while we're at it that state should include a void pointer for use by the store plugin (see the context sketch after this list). This would eliminate most lookups stores may be performing.
  3. Conflicting schema detection (set instances with schemas of the same name and different content, resulting in storage loss or silent errors).
    • Schema conflict detection can be reduced to computing a metadata checksum at set creation and performing consistency checks in any storage plugin (such as csv) where a full-set consistency constraint exists.
    • Storage policies/transforms which cherry-pick named metrics must search instances by metric name every time (until LDMS starts managing a void pointer per policy instance per set instance) or must also enforce full set consistency.
  4. Handle new schemas and delete old schemas. Needed for dvs, and perhaps more so for papi. Handle N schemas.
    • See also conflicting schema and duplicate set detection. This all gets very easy if we stop looking at store plugins as singletons without per-set-instance state.
    • Check handling of start/stop/load/unload. Multiple instance support?
  5. File permissions and naming
    • File owner/permissions set at create has been added to 3.4.7 and 4.x.
    • Also want a YYYYMMDD naming convention instead of epoch seconds.
    • CSV has code to handle user-defined templates for filenames at close/rename; it can be extended to file creation.
    • Users want this ability at the start of the file, not just at close/rename.
  6. Rollover at sub-day intervals. Rollover option 1 is probably sufficient for now; a fixed file name would also align with production system usage, so it should be considered.
    • This could be done immediately just by using rollover option 1 with an interval of less than 86400 seconds. That would drift unless we add interval/offset semantics (but in seconds); this has been implemented as rolltype 5 (see the interval/offset sketch after this list).
    • Presumably a user wants something more cron-like (e.g. 2 minutes past every third hour since midnight). This would entail supporting a config file with cron syntax and the names of the schemas to roll on different schedules.
    • It might be better to just refactor the stores to work on fixed filenames and accept a command (with template) via ldmsctl to perform a log-close-and-rename-and-reopen. Actual cron or logrotate can then be used in the standard ways admins know.
  7. LDMS core managed state pointers (void *) per client (transform policy/store policy)
    • Lack of these is making the store and transform APIs very difficult to finish.
    • When a set instance is assigned for use by a plugin instance (of which there may be more than one per plugin), the (set-instance, plugin-instance) pair must also have an associated void * storage slot that the plugin is allowed to populate.
    • The plugin can hang anything it needs off that void * (udata of the right flavor).
      • The most obvious being cached data needed to resolve the problems listed above: the generation numbers and checksums of the schema and instance last seen by the store instance for that set instance.
      • The next most obvious being a cache of the metadata generation number and the metric indices wanted by the transform or policy, for schemas that might vary in content under the same name (see the context sketch below).
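
To make items 2, 3, and 7 above concrete, here is a minimal sketch, in C, of what a core-managed per-(set-instance, store-policy) context might hold and how a store could use it to drop duplicates and detect conflicting schemas. All names (store_set_ctx, should_store) are hypothetical; this is not existing LDMS code, and a real version would need a small timestamp history to tolerate out-of-order delivery across aggregators.

 #include <stdint.h>

 /* Hypothetical per-(set-instance, store-policy) context; this is the state the
  * LDMS core would hand back to the plugin through a managed void * slot. */
 struct store_set_ctx {
     uint64_t meta_gen;      /* metadata generation last seen for this set instance */
     uint64_t schema_cksum;  /* metadata checksum computed at set creation */
     uint32_t last_sec;      /* timestamp of the last row actually stored */
     uint32_t last_usec;
     void *plugin_udata;     /* free for the store plugin: cached metric indices, etc. */
 };

 /* Returns 1 if the sample should be stored, 0 if it is a duplicate already
  * stored via another aggregation path, and -1 if the schema conflicts with
  * what was seen before (same name, different content). */
 static int should_store(struct store_set_ctx *ctx, uint64_t meta_gen,
                         uint64_t schema_cksum, uint32_t sec, uint32_t usec)
 {
     if (ctx->schema_cksum && ctx->schema_cksum != schema_cksum)
         return -1;
     if (ctx->last_sec && (sec < ctx->last_sec ||
         (sec == ctx->last_sec && usec <= ctx->last_usec)))
         return 0;
     ctx->schema_cksum = schema_cksum;
     ctx->meta_gen = meta_gen;
     ctx->last_sec = sec;
     ctx->last_usec = usec;
     return 1;
 }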
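
For the interval/offset rollover semantics mentioned in item 6, the alignment arithmetic is simple. The sketch below is illustrative only; it is not the rolltype 5 source.

 #include <time.h>

 /* Next rollover time aligned to interval seconds plus offset seconds, e.g.
  * interval=10800, offset=120 rolls at 2 minutes past every third hour of the
  * (UTC) day without drifting. */
 static time_t next_rollover(time_t now, time_t interval, time_t offset)
 {
     if (now < offset)
         return offset;
     return ((now - offset) / interval + 1) * interval + offset;
 }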

Problems observed or suspected + Tests to devise:

Mark which ones are required for current functionality vs. which are defensive or can be addressed in the refactor.

  • Duplicate stored data -- observed. Need to understand NOW (SEE MORE WITHIN).
    • At L1 (I believe), with the fault-injection testing; what happened internally to cause this?
    • UPDATE: cannot find this. I think this may have been because of the tail -f with the injections. Try to reproduce this.
    • Tests:
      1. Not clear that the fault injection will be a reliable test. Is there something we can check with connectivity that might account for this?
  • Will run out of the ability to store sets due to the hardwired maximum number of schemas in the store structure -- suspected (known). Need NOW.
    • Any level; see the sections above about these issues.
    • Note the relationship with the special keys array, which also has to be kept.
    • Tests:
      1. Make many sets -- is there a warning/failure at some point? (There should be; see the sketch after this list.)
  • Will run into unnecessary bloat and loss of the ability to store sets due to the inability to drop schemas out of the store structure -- suspected (known) -- need SOON.
    • Any level; see the section above about these issues, and also the relationship with the special keys array that has to be kept.
    • Tests:
      1. Make a set that will go obsolete: what notification do the store policy and plugin get? Is there an explicit close, and if so, does the store plugin get notified? What info is explicitly retained in the store?
  • Don't know how the store behaves with set groups -- unknown -- need SOON.
    • Tests:
      1. Can we create a group and see whether the members arrive as individual calls to the store plugin?
  • Is storing only a subset of a set's metrics supposed to be supported? If so, that may not be supported in the store -- suspected.
    • Tests:
      1. Try this. Note that this will be the same schema, but a different number of metrics
  • Is dynamically adding metrics to a set, or to the store for a set, supposed to be supported? If so, that is not supported in the store -- suspected.
    • The inability to easily get the metrics to print the header at the time the store is opened means we end up getting them and printing the header in the store call itself; that condition is then not re-checked, to avoid continuously incurring the overhead.
    • Alternatively, if the store plugin did not have to keep state but the state came with the set (see above), this would be easier; but would it bloat the sets, how would multiple storage instructions be handled, and what would all this entail for communicating with the sets to get that info there dynamically? Even for static configurations, what store information would make sense to supply as part of set initialization, as opposed to at the aggregator, especially with multiple storage options?
    • Tests:
      1. Do the dynamic addition and see what info we get for that. Note that this will be the same schema, but different number of metrics.
  • What happens to the csv store function under the same circumstances? -- don't know
    • Tests:
      1. Try it
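
As a sketch of the defensive behavior the "make many sets" test above is looking for, a store with a fixed-size schema table should warn and refuse rather than fail silently when the table fills. The structure and limit below are hypothetical stand-ins, not the actual store_csv structures.

 #include <stdio.h>
 #include <string.h>

 #define MAX_SCHEMA 32   /* stand-in for the hardwired limit in the store structure */

 struct store_handle {
     char schema_names[MAX_SCHEMA][64];
     int n_schema;
     int warned;
 };

 /* Find or add a schema slot; returns the slot index, or -1 when the table is
  * full, logging a warning exactly once instead of failing silently. */
 static int schema_slot(struct store_handle *h, const char *schema)
 {
     int i;
     for (i = 0; i < h->n_schema; i++)
         if (strcmp(h->schema_names[i], schema) == 0)
             return i;
     if (h->n_schema == MAX_SCHEMA) {
         if (!h->warned) {
             fprintf(stderr, "store: schema table full (%d); '%s' will not be stored\n",
                     MAX_SCHEMA, schema);
             h->warned = 1;
         }
         return -1;
     }
     snprintf(h->schema_names[h->n_schema], sizeof h->schema_names[0], "%s", schema);
     return h->n_schema++;
 }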

Things to deprecate

Consider if we want to deprecate soon before people start using them:

  • special keys/custom config (allows overriding altheader, etc.).
    • If we could support multiple store_csv plugin instances, we could drop this. What stops this now?
      • Lack of self pointers for store *instances* across the plugin api stops this now; currently we have only the pointer for the plugin. Same problem for samplers.
    • Or, if sets came with that info, we could drop this, but I have misgivings about that.
    • Special keys tie us to initial configuration information that is specified per schema, and to keeping that information.
      • It would be rather more useful if we can specify multiple store plugin instances (different policies) per schema.
  • print out user data
    • Yes, please make it go away.
  • rollover management -- why couldn't we use logrotate to handle our files? There is some recollection of performance and file-pointer problems when writing to the file while logrotate was handling the roll.
    • logrotate has several options:
      • make the daemon respond to signals (tricky: config state, threads; see the reopen sketch after this list)
      • stop and restart daemon around file moves (lossy)
      • copytruncate (lossy)
      • do what we do now without logrotate: either
        • close file and move to spool, or
        • use file open date stamp in file name.
    • There is the problem of what to do with file ingestion to other storage systems
      • Tailing daemons will hold on to closed file handles unless they have a notification option to reopen
      • Output pipes/fifos don't need rotating
  • old config checks from when we went from v2 -> v3.
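
A minimal sketch of the "respond to signals" option above: reopen the output file when SIGHUP arrives, so an external logrotate can move the old file and signal the daemon from its postrotate script. The names and the single-file assumption are illustrative; the real stores keep per-schema file state and would need locking around the swap.

 #include <signal.h>
 #include <stdio.h>
 #include <string.h>

 static volatile sig_atomic_t reopen_requested;
 static FILE *out;
 static const char *out_path = "/var/log/ldms/store.csv"; /* illustrative path */

 static void on_sighup(int sig)
 {
     (void)sig;
     reopen_requested = 1;  /* async-signal-safe: only set a flag */
 }

 /* Called from the store path (not from the signal handler) before each write. */
 static void maybe_reopen(void)
 {
     if (!reopen_requested)
         return;
     reopen_requested = 0;
     if (out)
         fclose(out);
     out = fopen(out_path, "a");
 }

 /* Installed once, e.g. during plugin configuration. */
 static void install_handler(void)
 {
     struct sigaction sa;
     memset(&sa, 0, sizeof sa);
     sa.sa_handler = on_sighup;
     sigaction(SIGHUP, &sa, NULL);
 }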


SOS

SOS is in rapid development, and the corresponding store is tracking it.

flatfile

Production use of the flatfile store has led to a number of requested changes (below). These changes are sufficiently complicated that a separately named store (store_var) is in development. The flatfile store will remain unchanged, so existing production script use can continue at each site until admins have time to switch.

  • Flush controls to manage latency.
  • Output only on change of metric.
    • Optionally with heartbeat metric output on specified long interval.
  • Output only of specific metrics.
  • Excluding output of specific metrics.
    • Including producername, job id, and component id, for single-job and single-node use cases.
  • Output of rate, delta, or integral delta values (see the sketch after this list).
  • Periodic output at frequency lower than arrival, optionally with selectable statistics on suppressed data.
    • Statistics: min, max, avg, miss-count, nonzero-count, min-nonzero, sum, time-weighted sum, dt
  • Metric name aliasing.
  • Timestamps rounded to the nearest second (when requested by a user who is also using long intervals).
  • Check and log message (once) if a rail limit is observed.
  • Rename file after close/rollover following a template string.
  • Generation of splunk input schema.
  • Handling of array metrics.
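
As an illustration of the rate/delta output in the list above, here is a minimal sketch, in C, of deriving a delta and a rate from consecutive samples of a single counter metric. The names are hypothetical and this is not store_var code; counter wrap/reset handling is omitted.

 #include <stdint.h>

 /* Hypothetical per-metric state for delta/rate output. */
 struct deriv_state {
     uint64_t prev_value;
     double prev_time;   /* seconds, from the sample's transaction timestamp */
     int have_prev;
 };

 /* Returns 1 and fills *delta and *rate once two ordered samples have been
  * seen; returns 0 (and just records the sample) otherwise. */
 static int derive(struct deriv_state *s, uint64_t value, double time_sec,
                   uint64_t *delta, double *rate)
 {
     if (!s->have_prev || time_sec <= s->prev_time) {
         s->prev_value = value;
         s->prev_time = time_sec;
         s->have_prev = 1;
         return 0;
     }
     *delta = value - s->prev_value;
     *rate = (double)*delta / (time_sec - s->prev_time);
     s->prev_value = value;
     s->prev_time = time_sec;
     return 1;
 }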