Legacy Product

    Fusion 5.10

    JavaScript V2 Connector Configuration Reference

    The JavaScript connector allows users to write ad hoc document retrieval routines to fetch content from filesystems and websites.

    Deprecation and removal notice

    This connector was deprecated as of August 24, 2020, and is removed or expected to be removed as of May 18, 2022. Use an index pipeline for fetching content instead.

    For more information about deprecations and removals, including possible alternatives, see Deprecations and Removals.

    Configuration

    When entering configuration values in the UI, use unescaped characters, such as \t for the tab character. When entering configuration values in the API, use escaped characters, such as \\t for the tab character.
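
    For example, a hypothetical 'delimiter' property (not a property of this connector) whose value is a tab character would be typed as \t in the UI, but escaped in an API request body:

        { "properties": { "delimiter": "\\t" } }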

    Connector for document routines written in JavaScript to fetch content from filesystems and websites.

    id - string (required)

    Unique name for this datasource.

    >= 1 characters

    Match pattern: ^[a-zA-Z0-9_-]+$

    pipeline - string (required)

    Name of an existing index pipeline for processing documents.

    >= 1 characters

    description - string

    Optional description for this datasource.

    properties - Properties

    Datasource configuration properties

    db - Connector DB

    Type and properties for a ConnectorDB implementation to use with this datasource.

    type - string

    Fully qualified class name of the ConnectorDb implementation.

    >= 1 characters

    Default: com.lucidworks.connectors.db.impl.MapDbConnectorDb

    inlinks - boolean

    Keep track of incoming links. This negatively impacts performance and increases the size of the crawl database.

    Default: false

    aliases - boolean

    Keep track of the original URIs that resolved to the current URI. This negatively impacts performance and increases the size of the crawl database.

    Default: false

    inv_aliases - boolean

    Keep track of the target URIs that the current URI resolves to. This negatively impacts performance and increases the size of the crawl database.

    Default: false

    startLinks - array[string]

    One or more starting URIs for this datasource.

    Default: "__js__"

    dedupe - boolean

    If true, documents will be deduplicated. Deduplication can be based on an analysis of the content, on the content of a specific field, or on the result of a JavaScript function. If neither a field nor a script is defined, content analysis is used.

    Default: false

    dedupeField - string

    Field to use for deduplication. Define either a field or a dedupe script; otherwise, the full raw content of each document is used.

    dedupeScript - string

    Custom JavaScript to dedupe documents. The script must define a 'genSignature(content){}' function, but can use any combination of document fields. The function must return a string.
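
    A minimal sketch of a dedupe script is shown below. Only the 'genSignature(content){}' contract comes from this reference; the field names, and the assumption that 'content' exposes document fields directly, are illustrative.

        // Sketch only: assumes 'content' exposes document fields such as 'title'
        // and 'body'; adjust to the fields your documents actually carry.
        function genSignature(content) {
            var title = content.title ? String(content.title) : '';
            var body  = content.body  ? String(content.body)  : '';
            // Any stable string works as a signature; the function must return a string.
            return title + '|' + body.length;
        }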

    dedupeSaveSignature - boolean

    If true, the signature used for dedupe will be stored in a 'dedupeSignature_s' field. Note that this may cause errors about 'immense terms' in that field.

    Default: false

    delete - boolean

    Set to true to remove documents from the index when they can no longer be accessed as unique documents.

    Default: true

    deleteErrorsAfter - integer

    Number of fetch failures to tolerate before removing a document from the index. The default of -1 means documents are never removed because of fetch failures.

    Default: -1

    fetchThreads - integer

    The number of threads to use during fetching. The default is 5.

    Default: 5

    emitThreads - integer

    The number of threads used to send documents from the connector to the index pipeline. The default is 5.

    Default: 5

    chunkSize - integer

    The number of items to batch for each round of fetching. A higher value can make crawling faster, but memory usage is also increased. The default is 1.

    Default: 1

    fetchDelayMS - integer

    Number of milliseconds to wait between fetch requests. The default is 0. This property can be used to throttle a crawl if necessary.

    Default: 0
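
    As a sketch, a deliberately throttled crawl might combine these settings in the datasource properties (the values are illustrative only):

        "fetchThreads": 2,
        "fetchDelayMS": 500,
        "chunkSize": 1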

    refreshAll - boolean

    Set to true to always recrawl all items found in the crawldb.

    Default: true

    refreshStartLinks - boolean

    Set to true to recrawl items specified in the list of start links.

    Default: false

    refreshErrors - boolean

    Set to true to recrawl items that failed during the last crawl.

    Default: false

    refreshOlderThan - integer

    Recrawl items whose last fetched date is more than this number of seconds in the past.

    Default: -1

    refreshIDPrefixes - array[string]

    One or more prefixes; items whose IDs begin with any of these values are recrawled.

    refreshIDRegexes - array[string]

    One or more regular expressions; items whose IDs match any of these patterns are recrawled.

    refreshScript - string

    A JavaScript script that defines a 'shouldRefresh()' function to customize which items are recrawled.
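
    A minimal sketch of a refresh script is shown below. This reference only states that the script supplies a 'shouldRefresh()' function used to decide which items are recrawled; what information is in scope inside the function is not documented here, so the body is a placeholder.

        // Sketch only: return true to recrawl an item, false to skip it.
        function shouldRefresh() {
            // Placeholder condition; replace with logic based on the data
            // available to your crawl (an assumption, not a documented API).
            return true;
        }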

    forceRefresh - boolean

    Set to true to recrawl all items even if they have not changed since the last crawl.

    Default: false

    forceRefreshClearSignatures - boolean

    If true, signatures will be cleared if force recrawl is enabled.

    Default: true

    retryEmit - boolean

    Set to true to retry failed emit batches on a document-by-document basis.

    Default: true

    depth - integer

    Number of levels in a directory or site tree to descend for documents.

    Default: -1

    maxItems - integer

    Maximum number of documents to fetch. The default (-1) means no limit.

    Default: -1

    failFastOnStartLinkFailure - boolean

    If true, the crawl is stopped and an exception is logged when Fusion cannot connect to any of the provided start links.

    Default: true

    crawlDBType - string

    The type of crawl database to use, in-memory or on-disk.

    Default: on-disk

    Allowed values: in-memory, on-disk

    commitAfterItems - integer

    Commit the crawlDB to disk after this many items have been received. A smaller number results in a slower crawl because commits to disk are more frequent; conversely, a larger number means a job resumed after a crash must recrawl more records.

    Default: 10000

    initial_mapping - Initial field mapping

    Provides mapping of fields before documents are sent to an index pipeline.

    skip - boolean

    Set to true to skip this stage.

    Default: false

    label - string

    A unique label for this stage.

    <= 255 characters

    condition - string

    Define a conditional script that must evaluate to true or false. This can be used to determine whether the stage should run.

    reservedFieldsMappingAllowed - boolean

    Default: false

    mappings - array[object]

    List of mapping rules

    Default: {"source":"charSet","target":"charSet_s","operation":"move"}{"source":"fetchedDate","target":"fetchedDate_dt","operation":"move"}{"source":"lastModified","target":"lastModified_dt","operation":"move"}{"source":"signature","target":"dedupeSignature_s","operation":"move"}{"source":"length","target":"length_l","operation":"move"}{"source":"mimeType","target":"mimeType_s","operation":"move"}{"source":"parent","target":"parent_s","operation":"move"}{"source":"owner","target":"owner_s","operation":"move"}{"source":"group","target":"group_s","operation":"move"}

    object attributes:
     source (required) - string. Display name: Source Field
     target - string. Display name: Target Field
     operation - string. Display name: Operation
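
    For example, a custom rule that copies a hypothetical 'author' field into 'author_s' would be written, in the JSON form used by the API, as:

        { "source": "author", "target": "author_s", "operation": "copy" }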

    unmapped - Unmapped Fields

    If fields do not match any of the field mapping rules, these rules will apply.

    source - string

    The name of the field to be mapped.

    target - string

    The name of the field to be mapped to.

    operation - string

    The type of mapping to perform: move, copy, delete, add, set, or keep.

    Default: copy

    Allowed values: copy, move, delete, set, add, keep

    excludeExtensions - array[string]

    File extensions that should not be fetched. This limits the datasource to all extensions except those in this list.

    excludeRegexes - array[string]

    Regular expressions for URI patterns to exclude. This limits the datasource to only URIs that do not match any of the regular expressions.

    includeExtensions - array[string]

    File extensions to be fetched. This will limit this datasource to only these file extensions.

    includeRegexes - array[string]

    Regular expressions for URI patterns to include. This limits the datasource to only URIs that match at least one of the regular expressions.
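
    As a sketch, these filters might be combined in the datasource properties as follows (the extensions and pattern are illustrative only):

        "includeExtensions": ["html", "pdf"],
        "excludeRegexes": [".*/archive/.*"]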

    retainOutlinks - boolean

    Set to true to store links found during fetching in the crawldb. This increases precision in certain recrawl scenarios, but requires more memory and disk space.

    Default: false

    aliasExpiration - integer

    The number of crawls after which an alias will expire. The default is 1 crawl.

    Default: 1

    restrictToTree - boolean

    If true, only documents found in a tree below the start links will be fetched. By default, this means limiting the crawl to the domain of the start links. For example, if the start link is 'http://host.com/US' then only links to the 'host.com' domain will be followed. Further options are available for modifying this behavior.

    Default: false

    restrictToTreeAllowSubdomains - boolean

    Modifies the behavior of 'Restrict crawl to start-link tree' so that a link to any sub-domain of the start links is allowed. For example, if the start link is 'http://host.com', this option ensures that links to 'http://news.host.com' are also followed. This option requires 'Restrict to start-link tree' to be enabled to have any effect.

    Default: false

    restrictToTreeUseHostAndPath - boolean

    Modifies the behavior of 'Restrict crawl to start-link tree' to include the 'path' of the start link in the restriction logic. For example, if the start link is 'http://host.com/US', this option will limit all followed URLs to ones starting with the '/US/' path. This option requires 'Restrict to start-link tree' to be enabled to have any effect.

    Default: false

    restrictToTreeIgnoredHostPrefixes - array[string]

    Modifies the behavior of 'Restrict crawl to start-link tree' to ignore the configured list of prefixes when restricting the crawl. Commonly, 'www.' is ignored so links with the same domain are allowed, whether of the form 'http://host.com' or 'http://www.host.com'. This option requires 'Restrict to start-link tree' to be enabled to have any effect.

    Default: "www."

    f.script - string

    JavaScript program to fetch documents.

    rewriteLinkScript - string

    A JavaScript function 'rewriteLink(link) { }' to modify links to documents before they are fetched.
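
    A minimal sketch of a link-rewrite script is shown below. The 'rewriteLink(link) { }' signature comes from this reference; the assumptions that 'link' is a URI string and that the returned value is used as the link to fetch are illustrative.

        // Sketch only: assumes 'link' is the discovered URI as a string and the
        // returned value is used as the link to fetch.
        function rewriteLink(link) {
            // Hypothetical rewrite: force HTTPS before the document is fetched.
            return link.replace(/^http:\/\//, 'https://');
        }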

    diagnosticMode - boolean

    Enable to print more detailed information to the logs about each request.

    Default: false
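
    Putting several of the properties above together, a minimal datasource definition sent to the API might look like the following sketch. The connector plugin identifier is omitted because it is not given in this reference, and the script value is a placeholder.

        {
          "id": "my-js-datasource",
          "pipeline": "my-index-pipeline",
          "description": "Ad hoc JavaScript fetcher",
          "properties": {
            "startLinks": ["__js__"],
            "fetchThreads": 5,
            "f.script": "<your JavaScript fetch program>"
          }
        }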