Parameterized SQL Aggregation Jobs
A Spark SQL aggregation job where user-defined parameters are injected into a built-in SQL template at runtime.
Legacy Product
The ID for this Spark job. Used in the API to reference this job. Allowed characters: a-z, A-Z, 0-9, dash (-) and underscore (_)
<= 128 characters
Match pattern: ^[A-Za-z0-9_\-]+$
Collection containing documents to be aggregated.
The collection to write the aggregates to on output. Defaults to the input collection if not specified.
A short description of this job.
Parameters bound on the SQL template at runtime.
Object attributes:
  key (required): Parameter Name (string)
  value: Parameter Value (string)
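For example, two template parameters might be bound as key/value pairs as sketched below. The property name (`parameters`) and the parameter names shown are illustrative only and are not confirmed by the schema above.

```json
{
  "parameters": [
    { "key": "signalType", "value": "click" },
    { "key": "weightField", "value": "weight_d" }
  ]
}
```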
The query used to select the desired signals. If not set, '*:*' (or an equivalent match-all query) is used.
Default: *:*
The time range to select signals on, e.g., `[* TO NOW]`. See the Solr documentation on working with dates for more options (https://solr.apache.org/guide/8_8/working-with-dates.html).
>= 1 characters
If checked, only aggregate new signals created since the last time the job was successfully run. If a record of such a previous run exists, it overrides the start of the time range set in the 'timeRange' property. If unchecked, all matching signals are aggregated and any previously aggregated docs are deleted to avoid double counting.
Default: true
Use SQL to perform the aggregation. You do not need to include a time range filter in the WHERE clause; it is applied automatically before the SQL statement is executed.
>= 1 characters
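As a sketch of what such a statement could look like, the example below counts click signals per query/document pair. The field names (`query_s`, `doc_id_s`, `type_s`), the source table name (`signals`), and the property name (`sql`) are assumptions; use the fields and registered table/view name of your own signals collection. The time range filter is omitted because it is applied automatically.

```json
{
  "sql": "SELECT query_s, doc_id_s, COUNT(*) AS aggr_count_i FROM signals WHERE type_s = 'click' GROUP BY query_s, doc_id_s"
}
```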
Use SQL to perform a rollup of previously aggregated docs. If left blank, the aggregation framework supplies a default SQL query to roll up aggregated metrics.
>= 1 characters
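A matching rollup statement would typically sum previously aggregated counts so that repeated runs do not double count. The sketch below reuses the hypothetical field names from the previous example, plus an assumed table name (`aggregates`) and property name (`rollupSql`); if left blank, the framework's default rollup is used instead.

```json
{
  "rollupSql": "SELECT query_s, doc_id_s, SUM(aggr_count_i) AS aggr_count_i FROM aggregates GROUP BY query_s, doc_id_s"
}
```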
Additional configuration settings to fine-tune how input records are read for this aggregation.
Object attributes:
  key (required): Parameter Name (string)
  value: Parameter Value (string)
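Read options are typically used to tune how input records are fetched from Solr. The sketch below uses option names from the spark-solr connector (`splits_per_shard`, `rows`) and an assumed property name (`readOptions`); check which options your version actually supports before relying on them.

```json
{
  "readOptions": [
    { "key": "splits_per_shard", "value": "4" },
    { "key": "rows", "value": "10000" }
  ]
}
```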
If the catch-up flag is enabled and this field is checked, the job framework will execute a fast Solr query to determine if this run can be skipped.
Default: true
Default: sql_template
Allowed values: sql_template
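Putting the properties together, a minimal job definition might look like the sketch below. The property names (`id`, `inputCollection`, `outputCollection`, `timeRange`, `sql`, `type`) and the collection names are illustrative rather than confirmed by the schema excerpts above, except that `type` must be `sql_template`; compare against a job exported from your own deployment for the exact JSON shape.

```json
{
  "id": "click-signal-aggregation",
  "inputCollection": "products_signals",
  "outputCollection": "products_signals_aggr",
  "timeRange": "[* TO NOW]",
  "sql": "SELECT query_s, doc_id_s, COUNT(*) AS aggr_count_i FROM signals WHERE type_s = 'click' GROUP BY query_s, doc_id_s",
  "type": "sql_template"
}
```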