Omniverse Telemetry Implementation Details

Overview

On its own, the structured log core (omni.structuredlog.plugin) is simply a structured messaging system - it will only emit a structured message to a local log file. The log file(s) will later be consumed by the message transmission system. For each message that is found, it will be validated against a number of approved schemas. If the message is validated against one of the schemas and the user’s consent has been given, it will be sent to the collection servers. If the message could not be validated against any approved schema, it will simply be rejected and remain only in the local log file.

The structured log core runs a single event message processing thread for the process. This thread manages a queue of events that need to be written to the log file(s). Note that a crash handler is needed to ensure all messages are flushed in case of a crash (carb.crashreporter-breakpad handles this). The queue has a limited amount of storage space; by default it is sized at 2MiB. Its size can be changed at any time with IStructuredLogSettings::setEventQueueSize(). The size of the buffer should be tuned to the expected needs of the app; there is one queue per process, so it needs to be tuned for the needs of all components within that process. Since only one thread writes events to the log, it can become a bottleneck if messages are emitted too frequently. The appropriate queue size depends on the size of the average event being sent and the frequency of those events. For something like a profiler or high volume logging system, the queue should be very large (ie: 50+MB). For an app that only infrequently sends small events, it could be set to the minimum size of 512KB. The size of the queue is left entirely up to the discretion of the host app.

Structured log events are written directly to the event queue’s buffer. The minimum amount of information required will be written to the event buffer so that the sending thread may continue as quickly as possible. Once the event data is in the queue, it will be processed at some point in the near future.

Telemetry Transmitter

In Omniverse apps, the omni.telemetry.transmitter helper app is used to process log files and send telemetry events up to the data servers. Only one instance of this helper app will ever run at any given time. If a new transmitter app is launched by an Omniverse app while another instance is already running, the new instance will simply exit immediately.

After launching, the transmitter downloads a zip file that contains the approved JSON schemas. This allows the events collected to be updated without having to update the transmitter app.

The transmitter app requires that the Omniverse Launcher be running in order to authenticate and send event messages to the data servers. If the launcher is not running, the transmitter app will simply sit idle as long as at least one Omniverse app is running. If the user starts the Launcher app during this idle time, the transmitter will wake up and start processing events.

Currently there is a limit on the size of a single message that can be sent to the data servers. This limit is 10MB (ie: 10,000,000 bytes). Any single message larger than this will currently be dropped even if it validates against an approved schema. This behaviour may change in future versions to split the transmission of large messages over multiple transmission blocks. If multiple messages can be processed and are smaller than this limit, they will all be sent in one large block.

This message size limit should generally not be a concern for most use cases. An example of a use case where this might be problematic would be trying to send an image file as part of a message (ie: a base64 encoded binary blob). Support for this type of behaviour may come in a future revision of both the structured logging core and the message transmitter app.

Alternative Endpoints and Usages

The simplest way to integrate the Omniverse telemetry pipeline for a new purpose is to change the URL where the transmitter sends events, referred to as the endpoint. This can be easily changed by editing the setting /telemetry/endpoint from ISettings. Additionally, the URL where approved schemas are downloaded is set with the setting: /telemetry/schemasUrl. These can be set in the app’s config.toml file or on the command line.
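For example, both settings could be overridden in an app's config.toml; the URLs below are placeholders for illustration, not real collection servers:

```toml
# Hypothetical overrides; substitute your own endpoint and schema package URLs.
[telemetry]
endpoint = "https://telemetry.example.com/v1/events"
schemasUrl = "https://schemas.example.com/approved-schemas.zip"
```

The same values could equally be passed on the command line as "--/telemetry/endpoint=..." and "--/telemetry/schemasUrl=...".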

If the telemetry transmitter isn’t suitable for your purposes, you can replace the transmitter component and leave the rest of the pipeline intact. One example use case would be a viewer for structured log events on the local system. omni.structuredlog.plugin writes events into log files on the local filesystem, so an alternative transmitter application should be easy to implement.

By default, all log files will be located in the Omniverse logs folder. This will be located relative to the current user’s home folder. For example, on Linux this will be the $HOME folder and on Windows it will be the %USERPROFILE% folder. The default log folder is located at HOME_PATH/.nvidia-omniverse/logs/. The structured log output path may also be changed by the host app at any time as needed.

Log File Format

The local log files have a very simple format. Each line in the file is a JSON object. The first line in the file is a header. Every line after the header is a single event message. Each event message will be formatted as a JSON CloudEvents message.

The header is a JSON object padded with some whitespace at the end so that the transmitter app can add some state data to it. The header has a key value pair "source": "omni.structuredlog" to verify that the log file is actually intended for Omniverse telemetry data. It also has a key value pair "version": "1.0" so that future incompatibilities can be detected by the transmitter app, and a creation time field labeled time. The telemetry transmitter app distributed with Carbonite uses this to check whether a log file has been rotated out. A transmitter application just needs to verify the source and the version. The Omniverse telemetry transmitter adds a "seek": <integer> key value pair to the header to track the last read byte offset in the file.
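A minimal consumer-side sketch of this layout, assuming only what is described above (a JSON header line carrying source, version, and an optional seek byte offset, followed by one JSON event per line); the function name is hypothetical:

```python
import json

def read_new_events(path):
    """Sketch: validate the header, then yield events past the saved byte offset."""
    with open(path, "rb") as f:
        header = json.loads(f.readline())
        if header.get("source") != "omni.structuredlog":
            raise ValueError("not a structured log file")
        if not str(header.get("version", "")).startswith("1."):
            raise ValueError("unsupported log version")
        # Resume from the transmitter's saved offset, if any (a real reader
        # would also need to handle an offset landing mid-line after rotation).
        f.seek(max(int(header.get("seek", 0)), f.tell()))
        for line in f:
            if line.strip():
                yield json.loads(line)  # one CloudEvents message per line
```

An alternative transmitter would then persist its own offset back into the header's whitespace padding after consuming events.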

On Linux, the log file is locked by omni.structuredlog.plugin with fcntl(F_SETLK) before writing a message to it. Transmitter applications should also lock the log file with fcntl(F_SETLK) to avoid reading the file in an inconsistent state.
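As a sketch of the consumer side on Linux, Python's fcntl.lockf() wraps the same F_SETLK/F_SETLKW record locks, so a transmitter-like reader could hold a shared lock while it reads; the path handling and buffer size here are arbitrary:

```python
import fcntl
import os

def read_locked(path):
    """Sketch: take a shared whole-file record lock before reading event lines."""
    fd = os.open(path, os.O_RDONLY)
    try:
        fcntl.lockf(fd, fcntl.LOCK_SH)   # blocks until the writer releases its lock
        return os.read(fd, 1 << 20)      # read up to 1MiB of event lines
    finally:
        fcntl.lockf(fd, fcntl.LOCK_UN)   # always release before returning
        os.close(fd)
```

A shared (read) lock lets multiple readers proceed concurrently while still excluding a writer holding an exclusive lock.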

Message Format

All messages emitted by the structured log core will be formatted as CloudEvents messages. These are simply JSON objects that have a certain minimum set of required top-level properties. These properties allow the potential for easy filtering, matching, validation, and basic understanding of individual messages.

Each CloudEvents message will consist of the top-level required properties plus some optional properties:

  • "id": this is intended to act as a unique identifier for a single occurrence of an event message. If two messages have the same id value, they may be considered to be the same occurrence and one therefore ignored. The idea of this identifier is that when combined with the source value, this will make a unique identifier within a data collection system’s context. This identifier is most often expressed as a 64-bit integer, a UUID, or an encoded index plus timestamp. There is no fixed format for this property, but it must be expressed as a string. This property is required in all Omniverse messages.

  • "type": this is intended to identify what type of event has occurred. This is a schema specified name for the event. This property is often used as a primary means of filtering or matching events. There is no fixed format of this property, but it is often expressed as a reverse DNS name or a simple descriptive string. This property is required in all Omniverse messages.

  • "source": this is intended to identify the origin of the event message. This is expressed as the user’s name followed by the client name, separated with an @ character. This provides an overall context grouping for all messages that come from the same location or source. There is no fixed format to this property’s value, but in Omniverse telemetry it will be the app’s or component’s name followed by its version or release number. This property is required in all Omniverse messages.

  • "specversion": this property is required for all CloudEvents compliant messages and is intended to specify the version of the CloudEvents specification that the message is compliant with. Currently this must always be set to “1.0”. This is also a required property in all Omniverse messages.

  • "time": this is intended to identify when an event occurred. This must always be a date/time stamp in the format laid out in RFC3339. In Omniverse messages, this timestamp should have at least microsecond resolution and should always be in the UTC/Zulu time zone. While this is not a required property for CloudEvents messages, it is required in all Omniverse messages.

  • "dataschema": this is intended to identify a schema that can be used to decode the data object. This can be a URI for the schema if it is posted publicly somewhere, or simply a name that can be used by certain tools to find the appropriate schema in a known location. In Omniverse messages, this will be the schema’s name and version number in the form <schemaName>-<schemaVersion>. This property is required in all Omniverse messages.

  • "data": this is intended to store the main data payload for any message. This is the property whose structure will be described by a JSON event schema. For Omniverse messages, this must always be an object containing the properties specific to the event. This may also be null if no payload is required. This property is required in all Omniverse messages.

Configuration Options

Structured Logging Options

The Carbonite structured logging system has several configuration options that can be used to control its behaviour. These are specified either in an app’s config file or on the command line. The following settings keys are defined:

  • "/structuredLog/enable": Global enable/disable for structured logging. When set to false, the structured log system will be disabled. This will prevent any event messages from being written out unless the host app explicitly wants them to. When set to true, the structured log system will be enabled and event messages will be emitted normally. This defaults to false.

  • "/structuredLog/defaultLogName": The default log name to use. If a default log name is set, all events that do not use the omni::structuredlog::fEventFlagUseLocalLog flag will write their messages to this log file. Events that do use the omni::structuredlog::fEventFlagUseLocalLog flag will write only to their schema’s log file. This value must be only the log file’s name, not including its path. The logs will always be created in the structured logging system’s current log output path. This defaults to an empty string.

  • "/structuredLog/logRetentionCount": The setting path for the log retention count. This controls how many log files will be left in the log directory when a log rotation occurs. When a log file reaches its size limit, it is renamed and a new empty log with the original name is created. A rolling history of the few most recent logs is maintained after a rotation. This setting controls exactly how many of each log will be retained after a rotation. This defaults to 3.

  • "/structuredLog/logSizeLimit": The setting path for the log size limit in megabytes. When a log file reaches this size, it is rotated out by renaming it and creating a new log file with the original name. If too many logs exist after this rotation, the oldest one is deleted. This defaults to 50MB.

  • "/structuredLog/eventQueueSize": The setting path for the size of the event queue buffer in kilobytes. The size of the event queue controls how many messages can be queued in the message processing queue before events start to get dropped (or a stall potentially occurs). The event queue can fill up if the app is emitting messages from multiple threads at a rate that is higher than they can be processed or written to disk. In general, there should not be a situation where the app is emitting messages at a rate that causes the queue to fill up. However, this may be beyond the app’s control if (for example) the drive the log is being written to is particularly slow or extremely busy. This defaults to 2048KiB.

  • "/structuredLog/eventIdMode": The setting path for the event identifier mode. This controls how event identifiers are generated. Valid values are fast-sequential, sequential, and random. This defaults to fast-sequential. This setting is not case sensitive. Each mode has its own benefits and drawbacks:

    • sequential ensures that all generated event IDs are in sequential order. When the event ID type is set to UUID, this will ensure that each generated event ID can be easily sorted after the previous one. With a UUID type ID, this mode can be expensive to generate. With a uint64 ID, this is the most performant to generate.

    • fast-sequential is only effective when the event ID type is set to UUID. In this mode, the UUIDs that are generated are sequential, but in memory order, not lexicographical order. It takes some extra effort to sort these events on the data analysis side, but they are generated very quickly. When the event ID type is not UUID, this mode behaves in the same way as sequential.

    • random generates a random event ID for each new event. This does not preserve any kind of order of events. If the app does not require sequential events, this can be more performant to generate especially for UUIDs.

  • "/structuredLog/eventIdType": The setting path for the event identifier data type. This determines what kind of event ID will be generated for each new event and how it will be printed out with each message. This defaults to UUID. This setting is not case sensitive. The following types are supported:

    • UUID generates a 128 bit universally unique identifier. The event ID mode determines how one event ID will be related to the next. This is printed into each event message in the standard UUID format (“00000000-0000-0000-0000-000000000000”). This type provides the most uniqueness and room for scaling in large data sets.

    • uint64 generates a 64 bit integer identifier. The event ID mode determines how one event ID will be related to the next. This is printed into each event message as a simple decimal integer value.

  • "/structuredLog/enableLogConsumer": The setting path for the log consumer toggle. This enables or disables the redirection of normal Carbonite (ie: CARB_LOG_*()) and Omni (ie: OMNI_LOG_*()) messages as structured log events as well. The log messages will still go to their original destination (stdout, stderr, log file, MSVC output window, etc) as well. This defaults to false.

  • "/structuredLog/state/schemas": The setting path that will contain zero or more keys that will be used to disable schemas when they are first registered. Each key under this setting will have a name that matches zero or schema names. From a .schema file, this would match the “name” property. From a JSON schema file, this would match the #/schemaMeta/clientName property. The key’s value is expected to be a boolean that indicates whether it is enabled upon registration.

    The names of the keys under this path may either be a schema’s full name or a wildcard string that matches to zero or more schema names. In either version, the case of the non-wildcard portions of the key name is important. The wildcard characters * (match to zero or more characters) and ? (match to exactly one character) may be used. This is only meant to be a simple wildcard filter, not a full regular expression.

    For example, in a TOML file, these settings may be used to disable or enable multiple schemas:

    [structuredLog.state.schemas]
    "omni.test_schema" = false  # disable 'omni.test_schema' on registration.
    "omni.other_schema" = true  # enable 'omni.other_schema' on registration.
    "carb.*" = false            # disable all schemas starting with 'carb.'.
    

    Note

    The keys in this setting path are inherently unordered. If a set of dependent enable/disable settings is needed, the "/structuredLog/schemaStates" setting path should be used instead. This other setting allows an array to be specified that preserves the order of keys. This is useful for doing things like disabling all schemas then only enabling a select few.
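The wildcard behaviour described above (only * and ?, case sensitive, no full regular expressions) can be sketched by translating the pattern into a regular expression; this illustrates the documented semantics and is not the actual implementation:

```python
import re

def schema_key_matches(pattern, name):
    """Sketch: case-sensitive match where '*' = any run, '?' = one character."""
    # Split out the wildcard characters, escape everything else literally.
    parts = (re.escape(p) for p in re.split(r"([*?])", pattern))
    regex = "".join(".*" if p == r"\*" else "." if p == r"\?" else p
                    for p in parts)
    return re.fullmatch(regex, name) is not None
```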

  • "/structuredLog/state/events": The setting path that will contain zero or more keys that will be used to disable events when they are first registered. Each key under this setting will have a name that matches zero or event names. From a .schema file, this would match the “namespace” property plus one of the properties under #/events/. From a JSON schema file, this would match one of the event properties under #/definitions/events/. The key’s value is expected to be a boolean that indicates whether it is enabled upon registration.

    The names of the keys under this path may either be an event’s full name or a wildcard string that matches to zero or more event names. In either version, the case of the non-wildcard portions of the key name is important. The wildcard characters * (match to zero or more characters) and ? (match to exactly one character) may be used. This is only meant to be a simple wildcard filter, not a full regular expression.

    For example, in a TOML file, these settings may be used to disable or enable multiple events:

    [structuredLog.state.events]
    "com.nvidia.omniverse.fancy_event" = false
    "com.nvidia.carbonite.*" = false            # disable all 'com.nvidia.carbonite' events.
    

    Note

    The keys in this setting path are inherently unordered. If a set of dependent enable/disable settings is needed, the "/structuredLog/eventStates" setting path should be used instead. This other setting allows an array to be specified that preserves the order of keys. This is useful for doing things like disabling all events then only enabling a select few.

  • "/structuredLog/schemaStates": The setting path to an array that will contain zero or more values that will be used to disable or enable schemas when they are first registered. Each value in this array will have a name that matches zero or more schema names. From a .schema file, this would match the “name” property. From a JSON schema file, this would match the #/schemaMeta/clientName property. The schema name may be optionally prefixed by either + or - to enable or disable (respectively) matching schemas. Alternatively, the schema’s name may be assigned a boolean value to indicate whether it is enabled or not. If neither a +/- prefix nor a boolean assignment suffix is specified, ‘enabled’ is assumed.

    The names in this array may either be a schema’s full name or a wildcard string that matches to zero or more schema names. In either version, the case of the non-wildcard portions of the entry is important. The wildcard characters * (match to zero or more characters) and ? (match to exactly one character) may be used. This is only meant to be a simple wildcard filter, not a full regular expression.

    For example, in a TOML file, these settings may be used to disable or enable multiple schemas:

    structuredLog.schemaStates = [
        "-omni.test_schema",        # disable 'omni.test_schema' on registration.
        "omni.other_schema = true", # enable 'omni.other_schema' on registration.
        "-carb.*"                   # disable all schemas starting with 'carb.'.
    ]
    

    Note

    TOML does not allow static arrays such as above to be appended to with later lines. Attempting to do so will result in a parsing error.
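Decoding one of these array entries into a (name, enabled) pair can be sketched as follows, covering the +/- prefix, the boolean assignment suffix, and the assumed-enabled default; this illustrates the documented syntax, not the actual parser:

```python
def parse_state_entry(entry):
    """Sketch: split one schemaStates/eventStates entry into (name, enabled)."""
    entry = entry.strip()
    if entry.startswith(("+", "-")):          # explicit enable/disable prefix
        return entry[1:].strip(), entry[0] == "+"
    if "=" in entry:                          # boolean assignment suffix
        name, _, value = entry.partition("=")
        return name.strip(), value.strip().lower() == "true"
    return entry, True                        # no prefix/suffix: assume enabled
```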

  • "/structuredLog/eventStates": The setting path to an array that will contain zero or more values that will be used to disable or enable events when they are first registered. Each value in this array will have a name that matches zero or more event names. From a .schema file, this would match one of the property names under #/events/. From a JSON schema file, this would match one of the event object names in #/definitions/events/. The event name may be optionally prefixed by either + or - to enable or disable (respectively) matching event(s). Alternatively, the event’s name may be assigned a boolean value to indicate whether it is enabled or not. If neither a +/- prefix nor a boolean assignment suffix is specified, ‘enabled’ is assumed.

    The names in this array may either be an event’s full name or a wildcard string that matches to zero or more event names. In either version, the case of the non-wildcard portions of the array entry is important. The wildcard characters * (match to zero or more characters) and ? (match to exactly one character) may be used. This is only meant to be a simple wildcard filter, not a full regular expression.

    For example, in a TOML file, these settings may be used to disable or enable multiple events:

    structuredLog.eventStates = [
        "-com.nvidia.omniverse.fancy_event",
        "com.nvidia.carbonite.* = false"            # disable all 'com.nvidia.carbonite' events.
    ]
    

    Note

    TOML does not allow static arrays such as above to be appended to with later lines. Attempting to do so will result in a parsing error.

Telemetry Transmitter Options

The Carbonite telemetry transmitter also has several configuration options that can be used to control its behaviour. These are specified either in an app’s config file or on the command line. The following settings keys are defined:

  • "/telemetry/log/file": The log file path that will be used for any transmitter processes launched. If it is not specified, a default path will be used.

  • "/telemetry/log/level": The log level that will be used for any transmitter process launched. If this is not specified, the parent process’s log level will be used.

  • "/telemetry/stayAlive": This setting will cause the transmitter to stay alive until it is manually killed. This setting is meant for developer use mostly meant for developer use, but could also be used in server or farm environments where Omniverse apps are frequently run and exited. This will bypass the normal exit condition check of testing whether all apps that tried to launch the transmitter have exited on their own. Once all apps have exited, the transmitter will exit on its own. This defaults to false.

  • "/telemetry/pollTime": The time, in seconds, that the transmitter will wait between polling the log files. This determines how reactive the transmitter will be to checking for new messages and how much work it potentially does in the background. This defaults to 60 seconds.

  • "/telemetry/mode": The mode to run the transmitter in. The value of this setting can be either “dev” or “test”. By default, the transmitter will run in “production” mode. In “dev” mode, the transmitter will use the default development schemas URL. In “test” mode, the default staging endpoint URL will be used. The “test” mode is only supported in debug builds. If “test” is used in a release build, it will be ignored and the production endpoint will be used instead.

  • "/telemetry/allowRoot": This allows the transmitter to run as the root user on Linux. The root user is disabled by default because it could make some of the transmitter’s files non-writable by regular users. By default, if the transmitter is launched as the root user or with sudo, it will report an error and exit immediately. If it is intended to launch as the root or super user, this option must be explicitly specified so that there is a clear intention from the user. The default value is false.

  • "/telemetry/restrictedRegions": The list of restricted regions for the transmitter. If the transmitter is launched in one of these regions, it will exit immediately on startup. The entries in this list are either the country names or the two-letter ISO 3166-1 alpha-2 country code. Entries in the list are separated by commas (‘,’), bars (‘|’), slash (‘/’), or whitespace. Whitespace should not be used when specifying the option on the command line. It is typically best to use the two-letter country codes in the list since they are standardized. This defaults to an empty list. This feature is currently disabled and this option will be ignored on all platforms.

  • "/telemetry/restrictedRegionAssumption": The assumption of success or failure that should be assumed if the country name and code could not be retrieved. Set this to true if the transmitter should be allowed to run if the country code or name could not be retrieved. Set this to false if the transmitter should not be allowed to run in that error case. This defaults to true. This feature is currently disabled and this option will be ignored on all platforms.

  • "/telemetry/transmitter": This settings key holds an object or an array of objects. Each object is one transmitter instance. A transmitter instance sends data to a telemetry endpoint with a specific configuration. Each instance can be configured to use a different protocol and a different set of schema URLs, so data sent to each endpoint can be substantially different. To send data to multiple endpoints, you set up an array of objects under this settings key. To specify a single transmitter, you can simply write to /telemetry/transmitter/*:

    "$CARB_PATH/omni.telemetry.transmitter" \
    "--/telemetry/transmitter/endpoint=https://telemetry.not-a-real-url.nvidia.com"
    

    To specify an array of settings on the command line, you need to give indices to the settings:

    "$CARB_PATH/omni.telemetry.transmitter" \
    "--/telemetry/transmitter/0/endpoint=https://telemetry.not-a-real-url.nvidia.com" \
    "--/telemetry/transmitter/1/endpoint=https://metrics.also-not-a-real-url.nvidia.com"
    

    This is easier to do in a JSON config file:

    {
        "telemetry": {
            "transmitter": [
                {
                    "endpoint": "https://telemetry.not-a-real-url.nvidia.com",
                },
                {
                    "endpoint": "https://metrics.also-not-a-real-url.nvidia.com"
                }
            ]
        }
    }
    

    The following settings are options within the transmitter object:

    • "resendEvents": If this is set to true, the transmitter will ignore the seek field in the header and start parsing from the start of each log again. This is only intended to be used for testing purposes. This defaults to false.

    • "transmissionLimit": The maximum number of bytes to send in one transmission to the server. Event messages will be sent in batches as large as possible up to this size. If more message data is available than this limit, it will be broken up into multiple transmission units. This must be less than or equal to the transmission limit of the endpoint the messages are being sent to. If this is larger than the server’s limit, large message buffers will simply be rejected and dropped. This defaults to 10MiB.

    • "queueLimit": This sets the maximum number of messages to process in a single pass on this transmitter. The transmitter will stop processing log messages and start to upload messages when it has found, validated, processed, and queued at most this number of messages on any given transmitter object. Only validated and queued messages will count toward this limit. This limit paired with transmissionLimit helps to limit how much of any log file is processed and transmitted at any given point. This defaults to 10000 messages.

    • "endpoint": Sets the URL to send the telemetry events to. This can be used to send the events to a custom endpoint. You can set this as an array if you want to specify multiple fallback endpoints to use if the retry limit is exhausted; each endpoint will be tried in order and will be abandoned if connectivity fails after the retry limit. By default, the "/telemetry/mode" setting will determine the endpoint URL to use.

    • "schemasUrl": This specifies the URL or URLs to download approved schemas from. This may be used to override the default URL and the one specified by "/telemetry/mode". This may be treated either as a single URL string value or as an array of URL strings. The usage depends on how the setting is specified by the user or config. If an array of URLs is given, they are assumed to be in priority order starting with the first as the highest priority URL. The first one that successfully downloads will be used as the schema package. The others are considered to be backup URLs. You can specify a file or directory of files to use by adding a "file://" prefix to the file system path.

    • "authenticate": This specifies that authentication should be enabled when sending event messages to the telemetry endpoint. When disabled, this will prevent the auth token from being retrieved from the Omniverse Launcher app. This should be used in situations where the Omniverse Launcher app is not running and an endpoint that does not need or expect authorization to be used. If this is not used and the auth token cannot be retrieved from the Launcher app, the transmitter will go into an idle mode where nothing is processed or sent. This mode is expecting the Launcher app to become available at some future point. Setting this option to false will disable the authentication checks and just attempt to push the events to the specified telemetry endpoint URL. Note that the default endpoint URL will not work if this option is used since it expects authentication. The default value is true.

    • "authTokenUrl": This specifies the URL to download the authentication token from. This option will be ignored if "authenticate" is false. A file will be expected to be downloaded from this URL. The downloaded file is expected to be JSON formatted and is expected to contain the authentication token in the format that the authentication server expects. The name of the data property in the file that contains the actual token to be sent with each message is specified in "authTokenKeyName". The data property in the file that contains the token’s expiry date is specified in "authTokenExpiryName". Alternatively, this setting may also point to a file on disk (either with the ‘file://’ protocol or by naming the file directly). If a file on disk is named, it is assumed to either be JSON formatted and also contain the token data and expiry under the same key names as given with "authTokenKeyName" and "authTokenExpiryName", or it will be a file whose entire contents will be the token itself. The latter mode is only used when the @ref kTelemetryAuthTokenKeyNameSetting setting is an empty string. By default, the URL for the Omniverse Launcher’s authentication web API will be used.

    • "authTokenKeyName": This specifies the name of the key in the downloaded authentication token’s JSON data that contains the actual token data to be sent with each set of uploaded messages. This option will be ignored if "authenticate" is false. This must be in the format that the authentication server expects. The token data itself can be any length, but is expected to be string data in the JSON file. The key is looked up in a case sensitive manner. This defaults to “idToken”.

    • "authTokenExpiryName": This specifies the name of the key in the downloaded authentication token’s JSON data that contains the optional expiry date for the token. This option will be ignored if "authenticate" is false.If this property exists, it will be used to predict when the token should be retrieved again. If this property does not exist, the token will only be retrieved again if sending a message results in a failure due to a permission error. This property is expected to be string data in the JSON file. Its contents must be formatted as an RFC3339 date/time string. This defaults to “expires”.

    • "authTokenType": This specifies the expected type of the authentication token. This can be any of “auto”, “jwt”, or “api-key”. By default this is set to “auto” and will attempt to detect the type of authentication token based on where the kTelemetryAuthTokenUrlSetting setting points. If the value points to a URL, a JWT will be assumed. If the value points to a file on disk or directly contains the token itself, a long-lived API key will be assumed. If an alternate storage method is needed but the assumed type doesn’t match the actual token type, this setting can be used to override the auto detection. The default value is “auto”. This setting will be ignored if kTelemetryAuthenticateSetting is set to false.

    • "oldEventsThreshold": The number of days before the current time where events in a log start to be considered old. When set to 0 (and this setting isn’t overridden on a per-schema or per-event level), all events are processed as normal and no events are considered old. When set to a non-zero value, any events that are found to be older than this number of days will be processed differently depending on the flags specified globally ("ignoreOldEvents" and "pseudonymizeOldEvents"), at the schema level (fSchemaFlagIgnoreOldEvents and fSchemaFlagPseudonymizeOldEvents), and at the event level (fEventFlagIgnoreOldEvents and fEventFlagPseudonymizeOldEvents). If no special flags are given, the default behaviour is to anonymize old events before transmitting them. This will only override the old events threshold given at the schema (#/oldEventsThreshold) or event (#/events/<eventName>/oldEventsThreshold) level if it is a smaller non-zero value than the values given at those lower levels. This defaults to 0 days (ie: disables checking for ‘old’ events).
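The decision flow described by this setting and the two flags below can be sketched as follows. This is an illustration of the documented rules, not the transmitter’s actual implementation:

```python
from datetime import datetime, timedelta, timezone

def classify_event(event_time: datetime, threshold_days: int,
                   ignore_old: bool = False,
                   pseudonymize_old: bool = False) -> str:
    """Return how an event would be treated: "normal", "dropped",
    "pseudonymized", or "anonymized"."""
    # A threshold of 0 disables old-event handling entirely.
    if threshold_days == 0:
        return "normal"
    age = datetime.now(timezone.utc) - event_time
    if age <= timedelta(days=threshold_days):
        return "normal"
    # "ignoreOldEvents" takes precedence over "pseudonymizeOldEvents".
    if ignore_old:
        return "dropped"
    if pseudonymize_old:
        return "pseudonymized"
    # Default behaviour: anonymize old events before transmission.
    return "anonymized"
```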

    • "ignoreOldEvents": Flag to indicate that when an old event is detected, it should simply be discarded instead of being anonymized or pseudonymized before transmission. This is useful if processing old events is not interesting for analysis or if transmitting an old event will violate a data retention policy. An event is considered old if it is beyond the ‘old events threshold’ (see "oldEventsThreshold"). By default, old events will be anonymized before being transmitted. If this flag is set here at the global level, it will override any old event settings from the schema and event levels.

    • "pseudonymizeOldEvents": Flag to indicate that when an old event is detected, it should be pseudonymized instead of anonymized before transmission. This setting is ignored for any given event if the "ignoreOldEvents" setting, fSchemaFlagIgnoreOldEvents flag, or fEventFlagIgnoreOldEvents flag is used for that event. When not specified, the default behaviour is to anonymize old events before transmission.

    • "eventProtocol": What serialization protocol is being used by the transmitter. This can be set to two possible (case-insensitive) values:

      • “default”: This is the default serialization protocol. This is a batch serialization protocol where up to kTransmissionLimitSetting events are sent to the server with each JSON object separated by a newline (i.e., the JSON Lines format). This serializes events mostly as-is; the only modification to the individual events is that the data payload is turned into a string.

      • “NVDF”: This is also a batch serialization protocol, except that the event property names are modified to follow the expectations of the server. This flattens each event’s data payload into the event base. The time field is renamed to ts_created and its format is changed to a *nix epoch time in milliseconds. The id field is renamed to _id. All data fields are also prefixed in a Hungarian-notation style system.
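As a rough sketch of the “NVDF” renaming rules, the transformation of a single event might look like the following. The exact type-prefix letters are an assumption for illustration; only the time, id, and flattening rules are taken from the description above:

```python
from datetime import datetime

def to_nvdf(event: dict) -> dict:
    """Sketch of the "NVDF" event transformation: flatten the data
    payload into the event base, rename time -> ts_created (as a
    *nix epoch time in milliseconds), and rename id -> _id."""
    # Hypothetical Hungarian-notation prefixes by value type; the real
    # prefix scheme used by the server is not documented here.
    prefixes = {bool: "b_", int: "l_", float: "d_", str: "s_"}
    out = {
        "_id": event["id"],
        # RFC3339 time converted to epoch milliseconds.
        "ts_created": int(datetime.fromisoformat(
            event["time"].replace("Z", "+00:00")).timestamp() * 1000),
    }
    for key, value in event.get("data", {}).items():
        # Flatten each data field into the event base with a prefix.
        out[prefixes.get(type(value), "s_") + key] = value
    return out
```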

    • "seekTagName": The tag name that will be used to record how much of the file was processed. Unless kResendEventsSetting is set to true, the seek tags are used to tell the transmitter how much of the log has been processed so far so that it won’t try to send old events again. The default value of this is "seek". You may want to change the seek tag name if you need to send data to multiple telemetry endpoints with separate transmitter processes.

    • "retryLimit": The number of attempts to transmit data that will be made before giving up. This can be set to -1 to retry forever. This setting is only important in a multi-endpoint context; if one endpoint goes offline, a retry limit will allow the transmitter to give up and start sending data to other endpoints. There is an exponential backoff after every retry, so the first retry will occur after 1 second, then 2 seconds, then 4 seconds, etc. The backoff takes the transmission time into account, so the 4 second backoff would only wait 1 second if the failed transmission took 3 seconds. A setting of 5 will roughly wait for 1 minute in total. A setting of 11 will roughly wait for 1 hour in total. After the 12th wait, the wait time will no longer increase, so each subsequent wait will last 4096 seconds.
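The cumulative wait implied by a given retry limit can be estimated with a small sketch. This assumes the backoff starts at 1 second, doubles after each retry, and caps at 4096 seconds after the 12th wait; it ignores the credit for time spent on the failed transmissions themselves:

```python
def total_backoff_seconds(retry_limit: int, cap: float = 4096.0) -> float:
    """Rough total of the backoff waits across all retries, assuming
    waits of 1s, 2s, 4s, ... capped at 4096s. Transmission time
    (which the real backoff subtracts) is not modeled here."""
    return sum(min(2.0 ** attempt, cap) for attempt in range(retry_limit))
```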

    • "messageMatchMode": This controls which messages will be considered by the transmitter and which will be filtered out before validating against a schema. This is intended to match each message’s transmission mode against the mode that the transmitter is currently running in. This value can be one of the following:

      • “all”: allow all messages to be validated against schemas.

      • “matching”: allow messages whose source mode matches the transmitter’s current run mode.

      • “matching+”: allow messages whose source mode matches the transmitter’s current run mode or higher.

      • “test”: only consider messages from the ‘test’ source mode.

      • “dev”: only consider messages from the ‘dev’ source mode.

      • “prod”: only consider messages from the ‘prod’ source mode.

      • “dev,prod” (or other combinations): only consider messages from the listed source modes.

      This defaults to “all”. This setting value may be used either on a per-transmitter basis when using multiple transmitters, or it may be used in the legacy setting /telemetry/messageMatchMode. If both the legacy and the per-transmitter setting are present, the per-transmitter setting will always override the legacy one.

      Note

      Setting this to a value other than “all” may result in certain messages never being transmitted, because they were rejected during a previous run and skipped over in the log. If this setting is used for a given transmitter, it is therefore intended to be used with the same matching mode consistently on any given machine. Note that multiple transmitters with different matching modes can also be used.
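The filtering rules above can be sketched as a small predicate. The ordering of source modes assumed for “matching+” (test below dev below prod) is an assumption for illustration and is not stated in the description above:

```python
# Assumed ordering of source modes for the "matching+" comparison.
MODE_ORDER = ["test", "dev", "prod"]

def message_allowed(message_mode: str, run_mode: str,
                    match_mode: str) -> bool:
    """Sketch of the messageMatchMode filtering described above."""
    if match_mode == "all":
        return True
    if match_mode == "matching":
        return message_mode == run_mode
    if match_mode == "matching+":
        # Allow the current run mode or any "higher" mode.
        return MODE_ORDER.index(message_mode) >= MODE_ORDER.index(run_mode)
    # Explicit mode or comma-separated list such as "dev,prod".
    return message_mode in {m.strip() for m in match_mode.split(",")}
```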