Ranking Metrics Jobs
Legacy Product

Use this job to calculate relevance metrics, such as nDCG, by replaying ground truth queries (see the Ground Truth job) against catalog data using variants from an experiment.
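How the calculation works: for each ground truth query, the job replays the query through each pipeline variant, collects the ranked document IDs the variant returns, and scores that ranking against the weighted ground truth documents with metrics such as nDCG at position K. Below is a minimal sketch of nDCG@K in Python; the function and sample data are illustrative, not the job's actual implementation, and it assumes the ground truth weights serve as graded relevance values.

```python
import math

def ndcg_at_k(ranked_doc_ids, weight_by_doc, k=10):
    """Score one replayed query: nDCG@K of the returned ranking.

    ranked_doc_ids: document IDs returned by a pipeline variant, best first.
    weight_by_doc:  ground truth mapping of document ID -> weight (relevance).
    """
    # DCG@K: gain of each returned document, discounted by log2 of its rank.
    dcg = sum(
        weight_by_doc.get(doc_id, 0.0) / math.log2(rank + 2)
        for rank, doc_id in enumerate(ranked_doc_ids[:k])
    )
    # Ideal DCG@K: the best achievable ordering of the ground truth documents.
    ideal_gains = sorted(weight_by_doc.values(), reverse=True)[:k]
    idcg = sum(gain / math.log2(rank + 2) for rank, gain in enumerate(ideal_gains))
    return dcg / idcg if idcg > 0 else 0.0

# Hypothetical ground truth for the query "laptop": document ID -> weight.
ground_truth = {"doc-1": 3.0, "doc-2": 2.0, "doc-3": 1.0}
print(ndcg_at_k(["doc-2", "doc-1", "doc-9"], ground_truth))  # ~0.82
```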
Configure properties for the ground truth dataset:

- Field containing the ranked document IDs. Default: docId
- Solr filter queries to apply against the ground truth collection. Default: "type:ground_truth"
- Input collection representing the ground truth dataset (see the example document after this list). Minimum length: 1 character.
- Query field in the collection. Default: query
- Field representing the weight of a document to the query. Default: weight
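Given the defaults above, one record in the ground truth collection holds a single (query, document, weight) judgment. A hypothetical example, indexed into Solr with Python's requests library; the collection name, host, and document values are assumptions, and only the field names come from the defaults listed above:

```python
import json
import requests

# One ground truth judgment: how relevant docId is to the query, as a weight.
ground_truth_doc = {
    "id": "gt-001",
    "type": "ground_truth",  # matched by the default filter query "type:ground_truth"
    "query": "laptop",       # default query field
    "docId": "SKU-12345",    # default ranked document ID field
    "weight": 3.0,           # default weight field
}

# Index the judgment into the ground truth collection (name is an assumption).
requests.post(
    "http://localhost:8983/solr/ground_truth/update?commit=true",
    headers={"Content-Type": "application/json"},
    data=json.dumps([ground_truth_doc]),
)
```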
- The ID for this Spark job, used in the API to reference this job (see the validation sketch after these properties). Allowed characters: a-z, A-Z, dash (-), and underscore (_). Maximum length: 63 characters. Match pattern: [a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?
- Calculate ranking metrics for each query in the ground truth set and save them to a Solr collection. Default: true
- Output collection to save the ranking metrics to. Minimum length: 1 character.
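For reference, the job ID constraint above can be checked with an ordinary regular expression. A hypothetical validation helper, not part of the product:

```python
import re

# Pattern and length limit copied from the Spark job ID constraint above.
JOB_ID_PATTERN = re.compile(r"[a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?")

def is_valid_job_id(job_id: str) -> bool:
    return len(job_id) <= 63 and JOB_ID_PATTERN.fullmatch(job_id) is not None

print(is_valid_job_id("ranking-metrics-1"))  # True
print(is_valid_job_id("1-bad-id"))           # False: must start with a letter
```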
Configure properties for the experiment:

- Default query profile to use if one is not specified in the experiment variants.
- Document ID field from which to retrieve values; it must return values that match the ground truth data (see the replay sketch after this list). Default: id
- Calculate ranking metrics using variants from an experiment. Minimum length: 1 character.
- Experiment objective name. Minimum length: 1 character.
- Collection to run the experiment on. Minimum length: 1 character.
- Pipeline variants for the experiment.
- Ranking position K at which metrics are calculated. Default: 10
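To make the experiment properties concrete, this is roughly what replaying one ground truth query against a variant looks like. The endpoint path, host, credentials, and names below are assumptions for illustration; the job performs this replay internally, so you do not need to do it yourself:

```python
import requests

# Hypothetical replay of the query "laptop" through the pipeline variant
# "variant-a" against the "catalog" collection, fetching the top K=10 doc IDs.
resp = requests.get(
    "http://localhost:8764/api/query-pipelines/variant-a/collections/catalog/select",
    params={"q": "laptop", "rows": 10, "fl": "id"},  # "id" is the default doc ID field
    auth=("admin", "password123"),
)
ranked_doc_ids = [doc["id"] for doc in resp.json()["response"]["docs"]]
# These IDs must match the ground truth docId values so the ranking can be
# scored, e.g. with the ndcg_at_k() sketch shown earlier.
```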
- Spark configuration settings, supplied as a list of key/value pairs: each entry has a parameter name (string) and a parameter value (string). See the example after this list.
- Job type. Default: ranking_metrics. Allowed values: ranking_metrics
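For example, Spark settings are supplied as plain name/value string pairs; the specific settings below are illustrative, not defaults of this job:

```python
# Illustrative Spark configuration overrides in the name/value form
# the Spark configuration settings property expects.
spark_settings = {
    "spark.executor.memory": "4g",   # standard Spark property
    "spark.executor.cores": "2",
    "spark.sql.shuffle.partitions": "200",
}
```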