Legacy Product

Fusion 5.10

    Use this job to calculate relevance metrics (for example, nDCG) by replaying ground truth queries (see the Ground Truth job) against catalog data, using variants from an experiment.
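    A complete job definition combines the properties documented below. The following is a minimal sketch of the overall shape only; the id and collection names are hypothetical placeholders, and the two nested configuration objects are expanded in example fragments later in this section.

        {
          "id": "my-ranking-metrics-job",
          "type": "ranking_metrics",
          "outputCollection": "my_metrics_collection",
          "groundTruthConfig": { },
          "rankingExperimentConfig": { }
        }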

    groundTruthConfig - Configure ground truth dataset

    Configure properties for the ground truth dataset. An example fragment follows this list of properties.

    docIdField - string

    Field containing the ranked document IDs

    Default: docId

    filterQueries - array[string]

    Solr filter queries to apply against the ground truth collection

    Default: "type:ground_truth"

    inputCollection - string

    Input collection representing the ground truth dataset

    >= 1 characters

    queryField - string

    Query field in the collection

    Default: query

    weightField - string

    Field containing the weight of a document for the query

    Default: weight
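    As an example, a groundTruthConfig fragment for a ground truth dataset stored in a collection named my_ground_truth might look like the sketch below; the collection name is a hypothetical placeholder, and the remaining values simply restate the defaults listed above.

        "groundTruthConfig": {
          "inputCollection": "my_ground_truth",
          "filterQueries": ["type:ground_truth"],
          "queryField": "query",
          "docIdField": "docId",
          "weightField": "weight"
        }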

    id - string (required)

    The ID for this Spark job. Used in the API to reference this job. Allowed characters: a-z, A-Z, dash (-) and underscore (_). Maximum length: 63 characters.

    <= 63 characters

    Match pattern: [a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?

    metricsPerQuery - boolean

    Calculate ranking metrics for each query in the ground truth set and save them to the output Solr collection

    Default: true

    outputCollection - string (required)

    Output collection to save the ranking metrics to

    >= 1 characters

    rankingExperimentConfig - Configure experiment

    Configure properties for the experiment. An example fragment follows this list of properties.

    defaultProfile - string

    Default query profile to use if not specified in experiment variants

    docIdField - string

    Document ID field from which to retrieve values (must return values that match the ground truth data)

    Default: id

    experimentId - string

    ID of the experiment whose variants are used to calculate ranking metrics

    >= 1 characters

    experimentObjectiveName - string

    Experiment objective name

    >= 1 characters

    inputCollection - string

    Collection to run the experiment on

    >= 1 characters

    queryPipelines - array[string]

    Query pipeline variants for the experiment

    rankingPositionK - integer

    Ranking position K at which metrics are calculated

    Default: 10
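    Putting these properties together, a rankingExperimentConfig fragment might look like the sketch below; the collection, profile, pipeline, experiment, and objective names are hypothetical placeholders. With rankingPositionK left at its default of 10, metrics such as nDCG are evaluated at rank 10 for each variant.

        "rankingExperimentConfig": {
          "inputCollection": "my_catalog",
          "defaultProfile": "my_catalog_profile",
          "queryPipelines": ["baseline_pipeline", "boosted_pipeline"],
          "experimentId": "my_relevance_experiment",
          "experimentObjectiveName": "click_through_rate",
          "docIdField": "id",
          "rankingPositionK": 10
        }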

    sparkConfig - array[object]

    Spark configuration settings.

    object attributes:

    key - string (required)

    Display name: Parameter Name

    value - string

    Display name: Parameter Value
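    Each sparkConfig entry is a parameter name/value pair of Spark configuration settings. A short sketch using standard Spark property names; the values are hypothetical and should be sized for your cluster.

        "sparkConfig": [
          { "key": "spark.executor.memory", "value": "4g" },
          { "key": "spark.executor.cores", "value": "2" }
        ]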

    type - string (required)

    Default: ranking_metrics

    Allowed values: ranking_metrics