Jobs Configuration
These reference topics provide complete information about configuration properties for the Spark jobs that are enabled with a Fusion AI license.
For conceptual information and instructions for configuring and scheduling jobs, see Jobs and Schedules.
Additional jobs are available as part of the basic Fusion Server feature set.
- Use this job when you want to compute user recommendations or item similarities using a collaborative filtering recommender. You can also implement a user-to-item recommender in the advanced section of this job’s configuration UI. This job uses SparkML’s Alternating Least Squares (ALS).
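The alternating-least-squares idea behind this job can be sketched in plain Python with a rank-1 factorization. This is a conceptual illustration only, not the SparkML implementation; the ratings matrix is invented:

```python
# Rank-1 ALS on a tiny user-item rating matrix (unobserved pairs omitted).
# Alternately fix one factor vector and solve the other in closed form.
ratings = {  # (user, item) -> rating
    (0, 0): 5.0, (0, 1): 3.0,
    (1, 0): 4.0, (1, 2): 1.0,
    (2, 1): 4.0, (2, 2): 5.0,
}
n_users, n_items = 3, 3
u = [1.0] * n_users  # user factors
v = [1.0] * n_items  # item factors

for _ in range(20):
    # Fix v; the least-squares solution is u_i = sum(r*v_j) / sum(v_j^2)
    # over the items user i actually rated.
    for i in range(n_users):
        num = sum(r * v[j] for (ui, j), r in ratings.items() if ui == i)
        den = sum(v[j] ** 2 for (ui, j), _ in ratings.items() if ui == i)
        u[i] = num / den
    # Fix u; solve each item factor symmetrically.
    for j in range(n_items):
        num = sum(r * u[i] for (i, ij), r in ratings.items() if ij == j)
        den = sum(u[i] ** 2 for (i, ij), _ in ratings.items() if ij == j)
        v[j] = num / den

# Predicted score for an unobserved pair, e.g. user 0 on item 2:
pred = u[0] * v[2]
```

SparkML’s ALS does the same alternation with rank-k factors, regularization, and distributed solves; the closed-form update above is the rank-1 special case.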
- Use this job when you already have clusters or well-defined document categories, and you want to discover and attach keywords that represent those existing clusters. (If you want to create new clusters, use the Document Clustering job.)
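One common way to surface representative words for existing clusters is a TF-IDF-style score computed per cluster. This plain-Python sketch uses invented documents and a simplified scoring formula; it is not the job’s actual implementation:

```python
from collections import Counter
import math

# Hypothetical pre-clustered documents (cluster name -> documents).
clusters = {
    "pets": ["cat dog cat", "dog bird", "cat fish dog"],
    "cars": ["engine wheel", "wheel brake engine", "engine fuel"],
}

# Document frequency of each term across the whole corpus.
all_docs = [doc for docs in clusters.values() for doc in docs]
df = Counter(term for doc in all_docs for term in set(doc.split()))
n_docs = len(all_docs)

def top_keywords(docs, k=2):
    """Score terms by cluster term frequency x inverse document frequency,
    so words frequent in this cluster but rare overall rank highest."""
    tf = Counter(term for doc in docs for term in doc.split())
    scores = {t: tf[t] * math.log(n_docs / df[t]) for t in tf}
    return [t for t, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:k]]

labels = {name: top_keywords(docs) for name, docs in clusters.items()}
```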
- Use this job when you want to compute basic metrics about your collection, like average word length, phrase percentages, and outlier documents (those with unusually many or unusually few words).
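A toy illustration of such metrics in plain Python (invented documents and a simple z-score-style outlier rule; the job computes these at collection scale in Spark):

```python
import statistics

docs = ["the quick brown fox", "hi", "a very long document " * 20]

word_counts = [len(d.split()) for d in docs]
avg_word_len = statistics.mean(len(w) for d in docs for w in d.split())
mean, stdev = statistics.mean(word_counts), statistics.stdev(word_counts)

# Flag documents whose word count is more than one standard deviation
# from the mean as length outliers.
outliers = [d[:20] for d, n in zip(docs, word_counts)
            if stdev and abs(n - mean) > stdev]
```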
- Cluster a set of documents and attach cluster labels.
- Estimate ground truth queries using click signals and query signals, with document relevance per query determined using a click/skip formula.
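The click/skip idea can be illustrated as follows; the signal format and the formula (clicks over clicks plus skips) are simplified and invented for the sketch, not the job’s exact computation:

```python
from collections import defaultdict

# Hypothetical signals: (query, doc, clicked?) tuples, where a result
# that was shown but not clicked counts as a "skip".
signals = [
    ("shoes", "doc1", True), ("shoes", "doc1", True), ("shoes", "doc1", False),
    ("shoes", "doc2", False), ("shoes", "doc2", False), ("shoes", "doc2", True),
]

clicks = defaultdict(int)
skips = defaultdict(int)
for query, doc, clicked in signals:
    (clicks if clicked else skips)[(query, doc)] += 1

# Simple click/skip relevance estimate per (query, doc) pair.
relevance = {
    pair: clicks[pair] / (clicks[pair] + skips[pair])
    for pair in set(clicks) | set(skips)
}
```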
- Perform head/tail analysis of queries from collections of raw or aggregated signals, to identify underperforming queries and the reasons why. This information is valuable for improving Solr configurations, auto-suggest, product catalogs, and SEO/SEM strategies, in order to improve overall conversion rates.
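The core head/tail split can be sketched as a traffic-coverage cutoff over query frequencies. The 80% threshold and the sample queries are invented for illustration:

```python
from collections import Counter

# Hypothetical raw query signals.
queries = ["tv"] * 50 + ["laptop"] * 30 + ["usb-c hub"] * 2 + ["8k projector"]

counts = Counter(queries)
total = sum(counts.values())

# Head = the most frequent queries covering the first 80% of traffic;
# everything after that cutoff is the (long) tail.
head, tail, covered = [], [], 0
for query, n in counts.most_common():
    (head if covered < 0.8 * total else tail).append(query)
    covered += n
```

Tail queries found this way are the usual starting point for spell correction, synonym, and auto-suggest improvements.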
- Compute user recommendations based on a pre-computed item similarity model.
- Use this job when you want to compute only item-to-item similarities. This method is more lightweight than the generic Recommendations job.
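One simple form of item-to-item similarity is cosine similarity between the sets of users who interacted with each item. This plain-Python sketch uses invented interactions and is not the job’s actual algorithm:

```python
import math
from collections import defaultdict

# Hypothetical user -> interacted-items lists.
interactions = {
    "u1": ["a", "b"],
    "u2": ["a", "b", "c"],
    "u3": ["b", "c"],
}

# Represent each item as the set of users who interacted with it.
item_users = defaultdict(set)
for user, items in interactions.items():
    for item in items:
        item_users[item].add(user)

def cosine(x, y):
    """Cosine similarity between two binary user vectors (stored as sets):
    shared users divided by the geometric mean of the set sizes."""
    shared = len(item_users[x] & item_users[y])
    return shared / math.sqrt(len(item_users[x]) * len(item_users[y]))

sim_ab = cosine("a", "b")  # items a and b share users u1 and u2
```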
- Logistic Regression Classifier Training: Train a regularized logistic regression model for text classification.
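A minimal sketch of regularized logistic regression with gradient descent, in plain Python on a toy bag-of-words dataset (invented data and hyperparameters; the job itself trains with SparkML at scale):

```python
import math

# Tiny bag-of-words training set: (feature vector, label).
vocab = ["good", "bad"]
data = [([1, 0], 1), ([1, 0], 1), ([0, 1], 0), ([0, 1], 0)]

w = [0.0, 0.0]
b = 0.0
lr, l2 = 0.5, 0.01  # learning rate and L2 regularization strength

def predict(x):
    """Sigmoid of the linear score: probability of the positive class."""
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

for _ in range(200):
    for x, y in data:
        err = predict(x) - y
        # Gradient step with an L2 penalty (the "regularized" part).
        for i in range(len(w)):
            w[i] -= lr * (err * x[i] + l2 * w[i])
        b -= lr * err
```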
- Use this job when you want to find outliers in a set of documents and attach a label to each outlier group.
- The Parallel Bulk Loader (PBL) job enables bulk ingestion of structured and semi-structured data from big data systems, NoSQL databases, and common file formats like Parquet and Avro.
- A Spark SQL aggregation job where user-defined parameters are injected into a built-in SQL template at runtime.
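The parameter-injection idea can be illustrated with a stand-in template; the template text and parameter names here are invented, not Fusion’s built-in SQL:

```python
from string import Template

# Simplified stand-in for a built-in SQL template with injectable parameters.
template = Template(
    "SELECT query_s, COUNT(*) AS cnt FROM $signals "
    "WHERE type_s = '$signal_type' GROUP BY query_s"
)

# User-defined parameters supplied at runtime.
params = {"signals": "product_signals", "signal_type": "click"}
sql = template.substitute(params)
```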
- Identify multi-word phrases in signals.
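A common statistic for phrase detection is pointwise mutual information (PMI) over bigram counts. This plain-Python sketch uses invented query signals and thresholds; it is not the job’s exact method:

```python
import math
from collections import Counter

# Hypothetical tokenized query signals.
queries = (["ice cream cone", "ice cream sandwich"] * 10
           + ["cream cheese", "cone shape"] * 2)

unigrams, bigrams = Counter(), Counter()
for q in queries:
    toks = q.split()
    unigrams.update(toks)
    bigrams.update(zip(toks, toks[1:]))

total = sum(unigrams.values())

def pmi(a, b):
    """Pointwise mutual information: high when a and b co-occur far more
    often than their individual frequencies would predict."""
    p_ab = bigrams[(a, b)] / total
    return math.log(p_ab / ((unigrams[a] / total) * (unigrams[b] / total)))

# Keep bigrams that are both frequent and strongly associated.
phrases = [bg for bg in bigrams if bigrams[bg] >= 5 and pmi(*bg) > 1.0]
```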
- Train a collaborative filtering matrix decomposition recommender using SparkML’s Alternating Least Squares (ALS) to batch-compute query-query similarities. This can be used for items-for-query recommendations as well as queries-for-query recommendations.
- Random Forest Classifier Training: Train a random forest classifier for text classification.
- Calculate relevance metrics (nDCG and so on) by replaying ground truth queries against catalog data using variants from an experiment.
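For reference, nDCG is DCG over the ranked results normalized by the DCG of the ideal ordering; a minimal plain-Python version (relevance grades invented for the example):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain: each grade is discounted by
    log2(rank + 2), so rank 0 divides by log2(2) = 1."""
    return sum(rel / math.log2(rank + 2)
               for rank, rel in enumerate(relevances))

def ndcg(ranked_rels):
    """DCG of the actual ranking divided by DCG of the ideal ranking."""
    ideal = dcg(sorted(ranked_rels, reverse=True))
    return dcg(ranked_rels) / ideal if ideal > 0 else 0.0

# Relevance grades of a variant's results, in the order they were ranked.
score = ndcg([3, 2, 0, 1])
```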
- SQL-Based Experiment Metric (deprecated): This job is created by an experiment in order to calculate an objective. The SQL-Based Experiment Metric job is deprecated as of Fusion AI 4.0.2.
- Token and Phrase Spell Correction: Detect misspellings in queries or documents using the occurrence counts of words and phrases.
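The occurrence-count intuition can be sketched as follows: a rare token that sits one edit away from a much more frequent token is likely a misspelling of it. The counts, ratio threshold, and edit model below are invented for illustration:

```python
from collections import Counter

# Hypothetical term counts harvested from queries or documents.
counts = Counter({"guitar": 500, "gutiar": 4, "guitars": 120})

def edits1(word):
    """All strings one edit (delete/transpose/replace/insert) away."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def correct(word, min_ratio=10):
    """Treat `word` as a misspelling of a one-edit neighbour that occurs
    at least min_ratio times as often; otherwise keep it unchanged."""
    candidates = [w for w in edits1(word)
                  if counts[w] >= min_ratio * counts[word]]
    return max(candidates, key=counts.__getitem__, default=word)
```

For example, `correct("gutiar")` resolves to `guitar`, while `correct("guitar")` is left alone because no neighbour dominates it.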
- Train a shallow neural model and project each document onto the resulting vector embedding space.