Use this job to build training data for query classification by joining signals with the catalog.
analyzerConfig - string (required)
The style of text analyzer you would like to use.
Default: { "analyzers": [{ "name": "StdTokLowerStop","charFilters": [ { "type": "htmlstrip" } ],"tokenizer": { "type": "standard" },"filters": [{ "type": "lowercase" }] }],"fields": [{ "regex": ".+", "analyzer": "StdTokLowerStop" } ]}
catalogFormat - string (required)
Spark-compatible format of the catalog data (for example 'solr', 'parquet', 'orc', etc.).
catalogIdField - string (required)
Item ID field in the catalog, which will be used to join with signals.
catalogPath - string (required)
Catalog collection or cloud storage path which contains item categories.
categoryField - string (required)
Item category field in catalog.
countField - string (required)
Count field in raw or aggregated signals.
Default: aggr_count_i
dataFormat - string
Spark-compatible format of the training data (for example 'solr', 'parquet', 'orc', etc.).
>= 1 characters
Default: solr
dataOutputFormat - string
Spark-compatible output format (for example 'solr', 'parquet', etc.).
>= 1 characters
Default: solr
fieldToVectorize - string (required)
Field containing query strings.
>= 1 characters
Default: query_s
id - string (required)
The ID for this Spark job. Used in the API to reference this job. Allowed characters: a-z, A-Z, dash (-) and underscore (_). Maximum length: 63 characters.
<= 63 characters
Match pattern: [a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?
itemIdField - string (required)
Item ID field in signals, which will be used to join with the catalog.
Default: doc_id_s
outputPath - string (required)
Output collection or cloud storage path where the generated training data is written.
partitionCols - string
If writing to non-Solr sources, this field accepts a comma-delimited list of column names used to partition the DataFrame before writing to the external output.
randomSeed - integer
Seed used for deterministic pseudorandom number generation.
Default: 1234
readOptions - array[object]
Options used when reading input from Solr or other sources.
object attributes:
  key - string (required). Display name: Parameter Name
  value - string. Display name: Parameter Value
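For example, a read options entry passing the spark-solr 'rows' paging parameter might look like the following; the parameter and value are illustrative, not defaults:
  "readOptions": [
    { "key": "rows", "value": "10000" }
  ]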
signalsPath - string (required)
Signals collection or cloud storage path that contains the raw or aggregated signals.
sparkConfig - array[object]
Spark configuration settings.
object attributes:
  key - string (required). Display name: Parameter Name
  value - string. Display name: Parameter Value
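For example, executor resources can be tuned with standard Spark properties; the values shown are illustrative:
  "sparkConfig": [
    { "key": "spark.executor.memory", "value": "4g" },
    { "key": "spark.executor.cores", "value": "2" }
  ]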
sparkSQL - string
Use this field to create a Spark SQL query for filtering your input data. The input data will be registered as 'spark_input'.
Default: SELECT * from spark_input
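For example, a query that keeps only signals with an aggregated count above 1 might look like the following; the column name assumes the default countField and is illustrative:
  SELECT * FROM spark_input WHERE aggr_count_i > 1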
topCategoryProportion - number
Minimum proportion that the top category must represent among all categories for a query.
Default: 0.5
topCategoryThreshold - integer
Minimum count required for a (query, category) pair.
>= 1
exclusiveMinimum: false
Default: 1
trainingDataFilterQuery - string
Solr query to additionally filter signals. For non-Solr data sources, use the Spark SQL filter query under Advanced to filter results.
Default: *:*
trainingDataFrameConfigOptions - object
Additional Spark DataFrame loading configuration options.
trainingDataSamplingFraction - number
Fraction of the training data to use.
<= 1
exclusiveMaximum: false
Default: 1
type - string (required)
Default: build-training
Allowed values: build-training
writeOptions - array[object]
Options used when writing output to Solr or other sources.
object attributes:
  key - string (required). Display name: Parameter Name
  value - string. Display name: Parameter Value
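As a minimal sketch, a complete job configuration might look like the following; the collection names ('my_signals', 'my_catalog', 'my_training_data') and catalog field names ('item_id_s', 'category_s') are hypothetical, and fields with documented defaults (analyzerConfig, countField, fieldToVectorize, itemIdField, dataFormat, and so on) are omitted so those defaults apply:
  {
    "id": "build-query-class-training",
    "type": "build-training",
    "signalsPath": "my_signals",
    "catalogFormat": "solr",
    "catalogPath": "my_catalog",
    "catalogIdField": "item_id_s",
    "categoryField": "category_s",
    "outputPath": "my_training_data"
  }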