Legacy Product

Fusion 5.10

    Trains a shallow neural (word2vec) model and projects each document onto the resulting vector embedding space

    analyzerConfig - string

    LuceneTextAnalyzer schema for tokenization (JSON-encoded)

    Default:

    {
        "analyzers": [{
            "name": "StdTokLowerStop",
            "charFilters": [{ "type": "htmlstrip" }],
            "tokenizer": { "type": "standard" },
            "filters": [
                { "type": "lowercase" },
                { "type": "KStem" },
                { "type": "length", "min": "2", "max": "32767" },
                { "type": "fusionstop", "ignoreCase": "true", "format": "snowball",
                  "words": "org/apache/lucene/analysis/snowball/english_stop.txt" }
            ]
        }],
        "fields": [{ "regex": ".+", "analyzer": "StdTokLowerStop" }]
    }
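
    As an illustrative sketch, a minimal analyzerConfig following the same LuceneTextAnalyzer schema, which only lowercases standard tokens with no stopword removal or stemming (the analyzer name "LowerOnly" is hypothetical; JSON does not permit inline comments):

    {
        "analyzers": [{
            "name": "LowerOnly",
            "tokenizer": { "type": "standard" },
            "filters": [{ "type": "lowercase" }]
        }],
        "fields": [{ "regex": ".+", "analyzer": "LowerOnly" }]
    }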

    dataFormat - string

    Spark-compatible format in which the training data is stored, such as 'solr', 'hdfs', 'file', or 'parquet'

    Default: solr

    Allowed values: solr, hdfs, file, parquet

    fieldToVectorize - string (required)

    Solr field containing the text training data. Data from multiple fields with different weights can be combined by specifying them as field1:weight1,field2:weight2, etc.

    >= 1 characters
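
    For example, to weight a title field twice as heavily as a body field (the field names title_t and body_t are illustrative, not part of the schema):

    title_t:2.0,body_t:1.0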

    id - string (required)

    The ID for this Spark job. Used in the API to reference this job. Allowed characters: a-z, A-Z, dash (-) and underscore (_). Maximum length: 63 characters.

    <= 63 characters

    Match pattern: [a-zA-Z][_\-a-zA-Z0-9]*[a-zA-Z0-9]?
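
    Per this pattern, an ID must start with a letter; for example:

    Valid:   word2vecTraining, word2vec-model_1
    Invalid: 1model (starts with a digit), my.model (dot is not an allowed character)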

    maxDF - number

    To be kept, terms must occur in no more than this number of documents (if > 1.0), or no more than this fraction of documents (if <= 1.0)

    Default: 1

    minDF - number

    To be kept, terms must occur in at least this number of documents (if > 1.0), or at least this fraction of documents (if <= 1.0)

    Default: 0
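
    As a worked example of the count-versus-fraction convention shared by minDF and maxDF: in a collection of 10,000 documents, minDF=0.01 keeps only terms appearing in at least 100 documents (1% of 10,000), while maxDF=5000 drops any term appearing in more than 5,000 documents. The defaults (minDF=0, maxDF=1) keep every term.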

    minSparkPartitions - integer

    Minimum number of Spark partitions for training job.

    >= 1

    exclusiveMinimum: false

    Default: 200

    modelId - string

    Identifier for the model to be trained; uses the supplied Spark Job ID if not provided.

    >= 1 characters

    norm - integer

    p-norm to normalize vectors with (choose -1 to turn normalization off)

    Default: 2

    Allowed values: -1, 0, 1, 2
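
    For reference, normalizing a vector x by its p-norm divides each component by the norm (shown here in standard LaTeX notation; with p=2, the default, this is the Euclidean length, and p=1 is the sum of absolute values):

    \hat{x} = \frac{x}{\lVert x \rVert_p},
    \qquad
    \lVert x \rVert_p = \Big( \sum_i |x_i|^p \Big)^{1/p}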

    numRelatedTerms - integer

    For each collection of input words, find this many word2vec-related words

    >= 1

    exclusiveMinimum: false

    Default: 10

    outputCollection - string (required)

    Solr collection in which to store the model-labeled data

    outputField - string

    Solr field that will contain the terms the word2vec model considers related to the input

    Default: related_terms_txt

    overwriteExistingModel - boolean

    If a model already exists in the model store, overwrite it when this job runs

    Default: true

    randomSeed - integer

    Seed used for any deterministic pseudorandom number generation

    Default: 1234

    sourceFields - string

    Solr fields to load (comma-delimited). Leave empty to allow the job to select the required fields to load at runtime.

    sparkConfig - array[object]

    Spark configuration settings.

    object attributes:
        key (required) - string
            display name: Parameter Name
        value - string
            display name: Parameter Value
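
    As an illustrative sketch, a sparkConfig entry that overrides executor memory (spark.executor.memory is a standard Spark property; the value "4g" is arbitrary):

    [
        { "key": "spark.executor.memory", "value": "4g" }
    ]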

    trainingCollection - string (required)

    Solr Collection containing labeled training data

    >= 1 characters

    trainingDataFilterQuery - string

    Solr query to use when loading training data

    >= 3 characters

    Default: *:*
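
    For example, to train only on documents that have a value in a particular field (the field name text_t is illustrative), a standard Solr range query can be used:

    text_t:[* TO *]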

    trainingDataFrameConfigOptions - object

    Additional Spark DataFrame loading configuration options

    trainingDataSamplingFraction - number

    Fraction of the training data to use

    <= 1

    exclusiveMaximum: false

    Default: 1

    type - string (required)

    Default: word2vec

    Allowed values: word2vec

    uidField - string

    Field containing the unique ID for each document

    >= 1 characters

    w2vDimension - integer

    Dimensionality of the word vectors used to represent the text

    exclusiveMinimum: false

    Default: 50

    w2vMaxIter - integer

    Maximum number of iterations of the word2vec training

    Default: 1

    w2vMaxSentenceLength - integer

    Sets the maximum length (in words) of each sentence in the input data. Any sentence longer than this threshold will be divided into chunks of up to `maxSentenceLength` size.

    >= 3

    exclusiveMinimum: false

    Default: 1000

    w2vStepSize - number

    Training parameter for word2vec convergence (change at your own peril)

    >= 0.005

    exclusiveMinimum: false

    Default: 0.025

    w2vWindowSize - integer

    The window size (context words from [-window, window]) for word2vec

    >= 3

    exclusiveMinimum: false

    Default: 5

    withIdf - boolean

    Weight vector components based on inverse document frequency

    Default: true

    writeOptions - array[object]

    Options used when writing output to Solr.

    object attributes:
        key (required) - string
            display name: Parameter Name
        value - string
            display name: Parameter Value
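
    Putting the required parameters together, a minimal job configuration might look like the following sketch (the collection and field names are illustrative; all other parameters fall back to the defaults listed above):

    {
        "id": "word2vec-training",
        "type": "word2vec",
        "trainingCollection": "products",
        "fieldToVectorize": "description_t",
        "outputCollection": "products"
    }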