Legacy Product

Fusion 5.10

    Web V1 Connector Configuration Reference

    The Web V1 connector retrieves data from a Web site using HTTP and starting from a specified URL.

    V1 deprecation and removal notice

    Starting in Fusion 5.12.0, all V1 connectors are deprecated. This means they are no longer being actively developed and will be removed in Fusion 5.13.0.

    The replacement for this connector is the Web V2 connector.

    If you are using this connector, you must migrate to the replacement connector or a supported alternative before upgrading to Fusion 5.13.0. We recommend migrating to the replacement connector as soon as possible to avoid any disruption to your workflows.

    Fusion uses the Open Graph Protocol as the default configuration for fields. Deviation from that standard configuration may exclude information from indexing during the crawl.

    Crawl options

    Configuration

    When entering configuration values in the UI, use unescaped characters, such as \t for the tab character. When entering configuration values in the API, use escaped characters, such as \\t for the tab character.
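    For example, a hypothetical datasource property holding a literal tab character would be typed as \t in the UI, but escaped in the JSON body of an API request (the property name below is illustrative only):

```json
{
  "properties": {
    "someDelimiterProperty": "\\t"
  }
}
```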

    If you experience CrawlDB errors such as "File is already opened and is locked", then raise the Alias Expiration setting.

    Connector for websites and web-based content resources.

    id - stringrequired

    Unique name for this datasource.

    >= 1 characters

    Match pattern: ^[a-zA-Z0-9_-]+$

    pipeline - stringrequired

    Name of an existing index pipeline for processing documents.

    >= 1 characters

    description - string

    Optional description for this datasource.

    parserId - string

    Parser used when parsing raw content. For some connectors, an advanced setting is available to retry parsing if an error occurs.

    Default: _system

    properties - Properties

    Datasource configuration properties

    db - Connector DB

    Type and properties for a ConnectorDB implementation to use with this datasource.

    type - string

    Fully qualified class name of ConnectorDb implementation.

    >= 1 characters

    Default: com.lucidworks.connectors.db.impl.MapDbConnectorDb

    inlinks - boolean

    Keep track of incoming links. This negatively impacts performance and increases the size of the DB.

    Default: false

    aliases - boolean

    Keep track of the original URIs that resolved to the current URI. This negatively impacts performance and increases the size of the DB.

    Default: false

    inv_aliases - boolean

    Keep track of the target URIs that the current URI resolves to. This negatively impacts performance and increases the size of the DB.

    Default: false

    startLinks - array[string]

    The URL(s) that the crawler will start crawling from, for example: https://en.wikipedia.org/wiki/Main_Page

    dedupe - boolean

    If true, documents will be deduplicated. Deduplication can be done based on an analysis of the content, on the content of a specific field, or by a JavaScript function. If neither a field nor a script are defined, content analysis will be used.

    Default: false

    dedupeField - string

    Field to be used for dedupe. Define either a field or a dedupe script, otherwise the full raw content of each document will be used.

    dedupeScript - string

    Custom javascript to dedupe documents. The script must define a 'genSignature(content){}' function, but can use any combination of document fields. The function must return a string.
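    A minimal sketch of such a script, assuming the connector passes the raw document content as a string and only requires that a string is returned; the whitespace normalization and the simple hash below are illustrative choices, not documented requirements:

```javascript
// Hypothetical dedupe script sketch. Assumes genSignature(content) receives
// the raw document content as a string; normalization and hashing choices
// here are illustrative.
function genSignature(content) {
    // Collapse whitespace and lowercase so trivially different copies collide.
    var normalized = String(content).replace(/\s+/g, ' ').trim().toLowerCase();
    // Compute a simple 32-bit rolling hash and return it as a hex string.
    var hash = 0;
    for (var i = 0; i < normalized.length; i++) {
        hash = ((hash << 5) - hash + normalized.charCodeAt(i)) | 0;
    }
    return (hash >>> 0).toString(16);
}
```

    With a script like this, two pages whose bodies differ only in whitespace or letter case produce the same signature and are treated as duplicates.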

    dedupeSaveSignature - boolean

    If true, the signature used for dedupe will be stored in a 'dedupeSignature_s' field. Note that this may cause errors about 'immense terms' in that field.

    Default: false

    delete - boolean

    Set to true to remove documents from the index when they can no longer be accessed as unique documents.

    Default: true

    deleteErrorsAfter - integer

    Number of times a website can error out, for example with a 500 error or a connection timeout, before a document is removed from the index. The default of -1 means such documents are never removed. Note that pages that return a 404 status code can be configured to be removed immediately regardless of this setting.

    Default: -1

    fetchThreads - integer

    The number of threads to use during fetching. The default is 5.

    Default: 5

    emitThreads - integer

    The number of threads used to send documents from the connector to the index pipeline. The default is 5.

    Default: 5

    chunkSize - integer

    The number of items to batch for each round of fetching. A higher value can make crawling faster, but memory usage is also increased. The default is 1.

    Default: 1

    fetchDelayMS - integer

    Number of milliseconds to wait between fetch requests. The default is 0. This property can be used to throttle a crawl if necessary.

    Default: 0

    refreshAll - boolean

    Set to true to always recrawl all items found in the crawldb.

    Default: true

    refreshStartLinks - boolean

    Set to true to recrawl items specified in the list of start links.

    Default: false

    refreshErrors - boolean

    Set to true to recrawl items that failed during the last crawl.

    Default: false

    refreshOlderThan - integer

    Recrawl items whose last fetched date is more than this many seconds ago.

    Default: -1

    refreshIDPrefixes - array[string]

    Prefixes used to recrawl all items whose IDs begin with one of these values.

    refreshIDRegexes - array[string]

    Regular expressions used to recrawl all items whose IDs match one of these patterns.

    refreshScript - string

    A JavaScript function ('shouldRefresh()') to customize the items recrawled.

    forceRefresh - boolean

    Set to true to recrawl all items even if they have not changed since the last crawl.

    Default: false

    forceRefreshClearSignatures - boolean

    If true, signatures will be cleared if force recrawl is enabled.

    Default: true

    retryEmit - boolean

    Set to true to retry emit batch failures on a document-by-document basis.

    Default: true

    depth - integer

    Number of levels in a directory or site tree to descend for documents.

    Default: -1

    maxItems - integer

    Maximum number of documents to fetch. The default (-1) means no limit.

    Default: -1

    failFastOnStartLinkFailure - boolean

    If true, when Fusion cannot connect to any of the provided start links, the crawl is stopped and an exception logged.

    Default: true

    crawlDBType - string

    The type of crawl database to use, in-memory or on-disk.

    Default: on-disk

    Allowed values: in-memory, on-disk

    commitAfterItems - integer

    Commit the crawlDB to disk after this many items have been received. A smaller number results in a slower crawl, because commits to disk are more frequent; conversely, a larger number means a job resumed after a crash must recrawl more records.

    Default: 10000

    initial_mapping - Initial field mapping

    Provides mapping of fields before documents are sent to an index pipeline.

    skip - boolean

    Set to true to skip this stage.

    Default: false

    label - string

    A unique label for this stage.

    <= 255 characters

    condition - string

    Define a conditional script that must result in true or false. This can be used to determine if the stage should process or not.

    reservedFieldsMappingAllowed - boolean

    Default: false

    mappings - array[object]

    List of mapping rules

    Default:
        {"source":"charSet","target":"charSet_s","operation":"move"}
        {"source":"fetchedDate","target":"fetchedDate_dt","operation":"move"}
        {"source":"lastModified","target":"lastModified_dt","operation":"move"}
        {"source":"signature","target":"dedupeSignature_s","operation":"move"}
        {"source":"length","target":"length_l","operation":"move"}
        {"source":"mimeType","target":"mimeType_s","operation":"move"}
        {"source":"parent","target":"parent_s","operation":"move"}
        {"source":"owner","target":"owner_s","operation":"move"}
        {"source":"group","target":"group_s","operation":"move"}

    object attributes:
        source (required) - string (Source Field)
        target - string (Target Field)
        operation - string (Operation)
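    When configuring via the API, an additional rule appended to this list might look like the following (the field names are illustrative):

```json
{
  "mappings": [
    { "source": "author", "target": "author_s", "operation": "move" }
  ]
}
```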

    unmapped - Unmapped Fields

    If fields do not match any of the field mapping rules, these rules will apply.

    source - string

    The name of the field to be mapped.

    target - string

    The name of the field to be mapped to.

    operation - string

    The type of mapping to perform: move, copy, delete, add, set, or keep.

    Default: copy

    Allowed values: copy, move, delete, set, add, keep

    excludeExtensions - array[string]

    File extensions that should not be fetched. This limits this datasource to all extensions except those in this list.

    excludeRegexes - array[string]

    Regular expressions for URI patterns to exclude. This will limit this datasource to only URIs that do not match the regular expression.

    includeExtensions - array[string]

    File extensions to be fetched. This will limit this datasource to only these file extensions.

    includeRegexes - array[string]

    Regular expressions for URI patterns to include. This will limit this datasource to only URIs that match the regular expression.
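    For example, to crawl only pages under a documentation path while skipping printable views, the include and exclude lists might be set as follows (site and patterns illustrative; note the JSON-escaped backslashes):

```json
{
  "includeRegexes": ["https://example\\.com/docs/.*"],
  "excludeRegexes": [".*\\?print=true.*"]
}
```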

    retainOutlinks - boolean

    Set to true for links found during fetching to be stored in the crawldb. This increases precision in certain recrawl scenarios, but requires more memory and disk space.

    Default: false

    aliasExpiration - integer

    The number of crawls after which an alias will expire. The default is 1 crawl.

    Default: 1

    restrictToTree - boolean

    If true, only URLs that match the startLinks URL domain will be followed.

    Default: true

    restrictToTreeAllowSubdomains - boolean

    Modifies the behavior of 'Restrict crawl to start-link tree' so that a link to any sub-domain of the start links is allowed. For example, if the start link is 'http://host.com', this option ensures that links to 'http://news.host.com' are also followed. This option requires 'Restrict to start-link tree' to be enabled to have any effect.

    Default: false

    restrictToTreeUseHostAndPath - boolean

    Modifies the behavior of 'Restrict crawl to start-link tree' to include the 'path' of the start link in the restriction logic. For example, if the start link is 'http://host.com/US', this option will limit all followed URLs to ones starting with the '/US/' path. This option requires 'Restrict to start-link tree' to be enabled to have any effect.

    Default: false

    restrictToTreeIgnoredHostPrefixes - array[string]

    Modifies the behavior of 'Restrict crawl to start-link tree' to ignore the configured list of prefixes when restricting the crawl. Commonly, 'www.' is ignored so links with the same domain are allowed, whether of the form 'http://host.com' or 'http://www.host.com'. This option requires 'Restrict to start-link tree' to be enabled to have any effect.

    Default: "www."
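    Taken together, the tree-restriction options above could be combined as follows to keep a crawl on a start-link domain and its subdomains (values illustrative):

```json
{
  "restrictToTree": true,
  "restrictToTreeAllowSubdomains": true,
  "restrictToTreeIgnoredHostPrefixes": ["www."]
}
```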

    f.maxSizeBytes - integer

    Maximum size, in bytes, of a document to fetch.

    Default: 4194304

    f.timeoutMS - integer

    Time in milliseconds to wait for server response.

    Default: 10000

    f.requestRetryCount - integer

    If an http request fails, retry up to this many times before giving up. If set to 0, requests will not be retried. This is useful in situations where your crawls are failing with errors like "The target server failed to respond".

    Default: 0

    f.defaultCharSet - string

    Default character set to use when one is not declared in the HTTP headers.

    Default: UTF-8

    f.obeyCharSet - boolean

    Use the encoding sent by the web server (if any) when parsing content. If unset, Fusion will try to guess the character set when parsing.

    Default: true

    f.defaultMIMEType - string

    Default MIME type to use when one is not declared in the HTTP headers.

    Default: application/octet-stream

    f.sitemapURLs - array[string]

    URLs for sitemaps, to be used as a basis for link discovery. Rules found in sitemaps will not be processed.

    f.maintainCookies - boolean

    If you are not using authentication, cookies are not stored between web requests by default (stateless). If checked, cookies are maintained between requests during the web crawl even when you are not using authentication. If you are using authentication, this setting has no effect on the crawl and can be ignored.

    Default: false

    f.bulkStartLinks - string

    If a large number of start links must be defined, you can provide them here. One link per line.

    f.basicAuth - array[object]

    Settings for Basic authentication

    object attributes:
        host (required) - string (Host)
        port (required) - integer (Port)
        realm - string (Realm)
        userName - string (User)
        password - string (Password)
        id - string (Auth Config id)
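    An API configuration for a single Basic authentication entry might look like this (host and credentials illustrative):

```json
{
  "f.basicAuth": [
    {
      "host": "intranet.example.com",
      "port": 443,
      "realm": "Restricted Area",
      "userName": "crawler",
      "password": "changeme"
    }
  ]
}
```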

    f.digestAuth - array[object]

    Settings for Digest authentication

    object attributes:
        host (required) - string (Host)
        port (required) - integer (Port)
        realm - string (Realm)
        userName - string (User)
        password - string (Password)
        id - string (Auth Config id)

    f.ntlmAuth - array[object]

    Settings for NTLM authentication

    object attributes:
        host (required) - string (Host)
        port (required) - integer (Port)
        realm - string (Realm)
        userName - string (User)
        password - string (Password)
        id - string (Auth Config id)
        domain - string (Domain)
        workstation - string (Workstation)

    f.formAuth - array[object]

    Settings for Form based authentication

    object attributes:
        action (required) - string (URL)
        ttl - number (TTL (ms))
        params - object (Parameters)
        passwordParamName - string (Password Parameter)
        password - string (Password)
        id - string (Auth Config id)

    f.samlAuth - array[object]

    Settings for SAML/Smart Form based authentication, which allows the crawler to visit one or more web pages that contain form inputs such as username, password, or security questions, submitting each one in turn in order to become authenticated.

    object attributes:
        action (required) - string (URL)
        ttl - number (TTL (ms))
        params - object (Parameters)
        passwordParamName - string (Password Parameter)
        password - string (Password)
        id - string (Auth Config id)

    f.appendTrailingSlashToLinks - boolean

    If true, a trailing '/' will be added to link URLs when the URL does not end in a dot ('.').

    Default: false

    f.discardLinkURLQueries - boolean

    If true, query parameters found in URLs will be removed before being added to the discovery queue.

    Default: false

    f.discardLinkURLAnchors - boolean

    If true, anchors found in URLs will be removed before being added to the discovery queue.

    Default: true

    f.scrapeLinksBeforeFiltering - boolean

    If true, links will be extracted from documents before any other document processing has occurred. By default, links are extracted after all other document processing.

    Default: false

    f.crawlJS - boolean

    Evaluate JavaScript on web pages when crawling. This makes it possible for the Web fetcher to extract content from pages that is only available after JavaScript has prepared the document, but it may make the crawl slower because JavaScript loading can be time consuming.

    Default: false

    f.jsEnabledAuth - boolean

    Evaluate JavaScript when doing SAML/SmartForm authentication. This is only applicable if you have specified a SmartForms/SAML Authentication element in the "Crawl Authentication" area.

    Default: false

    f.jsPageLoadTimeout - integer

    The time to wait in milliseconds for a page load to complete. If the timeout is -1, page loads can run indefinitely. Maximum: 180,000 ms (3 minutes).

    >= -1

    <= 180000

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 20000

    f.jsScriptTimeout - integer

    The time to wait in milliseconds for an asynchronous script to finish execution. If the timeout is -1, the script will be allowed to run indefinitely. Maximum: 30,000 ms.

    >= -1

    <= 180000

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 20000

    f.jsAjaxTimeout - integer

    The time in milliseconds after which an AJAX request will be ignored when considering whether all AJAX requests have completed. Maximum: 180,000 ms (3 minutes).

    >= -1

    <= 180000

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 20000

    f.extraLoadTimeMs - integer

    The JavaScript evaluation process first waits for the DOM 'document.readyState' to be set to 'complete', then waits until there are no more pending Ajax requests before emitting the page's contents. Use this property to wait an additional number of milliseconds before emitting the contents. This gives background JavaScript routines a chance to finish rendering the page before the contents are emitted.

    >= -1

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 250

    f.extraPageLoadDeltaChars - integer

    This parameter is used when the "Extra time to wait for content after page load (ms)" parameter is greater than 0. The additional wait ends early if the web page's content grows by at least this many characters. If set to 0 (the default), any increase in character count indicates the page load is finished.

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 0

    f.quitTimeoutMs - integer

    The amount of time to wait for a web browser to quit before killing the browser process.

    >= -1

    <= 9999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 5000

    f.useRequestCounter - boolean

    Use the request counter plugin to wait for all pending ajax requests to be complete before loading the page contents.

    Default: true

    f.requestCounterMinWaitMs - integer

    When the request counter is enabled, the count may report 0 pending requests early on, even though some Ajax requests have not yet started. This parameter sets the time in milliseconds to wait for a non-zero count to be returned. Once the count has been non-zero at any point, the next zero count is taken to mean the page has finished loading.

    <= 99999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 5000

    f.requestCounterMaxWaitMs - integer

    The request counter plugin counts active Ajax requests after a page has loaded, until there are no more pending requests. This parameter sets how long to wait, in milliseconds, for the count to reach 0 before giving up.

    >= 1

    <= 99999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 20000

    f.headlessBrowser - boolean

    Applicable only when "Evaluate JavaScript" is selected. Deselect this checkbox if you want to see browser windows displayed while fetchers process web pages. If selected, browsers run in "headless" mode, that is, in the background. If running on a server with no desktop interface, this must stay selected.

    Default: true

    f.takeScreenshot - boolean

    Applicable only when "Evaluate JavaScript" is selected. Takes a screenshot of the fully rendered page and indexes it in a field called "screenshot_bin". You must make sure your schema specifies this field as a binary field or indexing will fail. To add this, go to System -> Solr Config -> Managed Schema, then add <dynamicField indexed="true" name="*_bin" stored="true" type="binary"/>

    Default: false

    f.screenshotFullscreen - boolean

    When taking a screenshot, capture the full screen.

    Default: false

    f.viewportWidth - integer

    Set an optional browser viewport width. If not specified, will default to 800.

    >= 1

    <= 9999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    f.viewportHeight - integer

    Set an optional browser viewport height. If not specified, will default to 600.

    >= 1

    <= 9999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    f.deviceScreenFactor - integer

    Set an optional browser device screen factor. If not specified, will default to 1 (no scaling).

    >= 1

    <= 99999

    exclusiveMinimum: false

    exclusiveMaximum: false

    f.simulateMobile - boolean

    Simulate a mobile device

    Default: false

    f.mobileScreenWidth - integer

    If simulate mobile is checked, this specifies the device's emulated screen width.

    >= 1

    <= 9999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    f.mobileScreenHeight - integer

    If simulate mobile is checked, this specifies the device's emulated screen height.

    >= 1

    <= 9999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    f.chromeBinaryPath - string

    This property is no longer in use and remains only for backward-compatible configuration validation.

    f.chromeExtraCommandLineArgs - string

    Specify additional command-line arguments to pass to the Chromium executable when it is run.

    f.obeyRobots - boolean

    If true, Allow, Disallow and other rules found in a robots.txt file will be obeyed.

    Default: true

    f.obeyRobotsMeta - boolean

    If true, rules like 'noindex', 'nofollow' and others found in a robots meta tag on a page or in the headers of the HTTP response are obeyed.

    Default: true

    f.obeyLinkNofollow - boolean

    If true, rel='nofollow' attributes on links are obeyed.

    Default: true

    f.obeyRobotsDelay - boolean

    If true, Crawl-Delay rules in robots.txt will be obeyed. Disabling this option will speed up crawling, but is considered negative behavior for sites you do not control.

    Default: true

    f.tagFields - array[string]

    HTML tags of elements to put into their own field in the index. The field will have the same name as the tag.

    f.tagIDFields - array[string]

    HTML tag IDs of elements to put into their own field in the index. The field will have the same name as the tag ID.

    f.tagClassFields - array[string]

    HTML tag classes of elements to put into their own field in the index. The field will have the same name as the tag class.

    f.selectorFields - array[string]

    List of Jsoup selectors for elements to put into their separate field in the index. The field will have the same name as the element. Syntax for jsoup selectors is available at http://jsoup.org/apidocs/org/jsoup/select/Selector.html.
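    For example, to index an article body and a byline into their own fields (selectors illustrative):

```json
{
  "f.selectorFields": ["div.article-body", "span.byline"]
}
```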

    f.filteringRootTags - array[string]

    Root HTML elements whose child elements will be used to extract content. By default 'body' and 'head' elements are already included.

    Default: "body", "head"

    f.includeSelectors - array[string]

    Jsoup-formatted selectors for elements to include in the crawled content.

    f.includeTags - array[string]

    HTML tag names of elements to include in the crawled content.

    f.includeTagClasses - array[string]

    HTML tag classes of elements to include in the crawled content.

    f.includeTagIDs - array[string]

    HTML tag IDs of elements to include in the crawled content.

    f.excludeSelectors - array[string]

    Jsoup-formatted selectors for elements to exclude from the crawled content. Syntax for jsoup selectors is available at http://jsoup.org/apidocs/org/jsoup/select/Selector.html.

    f.excludeTags - array[string]

    HTML tag names of elements to exclude from the crawled content.

    f.excludeTagClasses - array[string]

    HTML tag classes of elements to exclude from the crawled content.

    f.excludeTagIDs - array[string]

    HTML tag IDs of elements to exclude from the crawled content.

    f.proxy - string

    Address of the HTTP proxy, if required. This should be entered in the format host:port.

    f.respectMetaEquivRedirects - boolean

    If true, the connector will follow meta tags with refresh redirects.

    Default: false

    f.allowCircularRedirects - boolean

    If true, a request can be redirected to the same URL multiple times.

    Default: false

    f.followCanonicalTags - boolean

    Deduplicate by indexing only the document at the URL specified in the canonical tag. See https://en.wikipedia.org/wiki/Canonical_link_element

    Default: false

    f.canonicalTagsRedirectLimit - integer

    Because canonical tag resolution may be cyclical, a limit must be applied to the total number of requests. This value ensures that the resolution finishes in a reasonable amount of time.

    Default: 4

    f.allowAllCertificates - boolean

    If false, security checks will be performed on all SSL/TLS certificate signers and origins. This means self-signed certificates would not be supported.

    Default: false

    f.useIpAddressForSslConnections - boolean

    Use the IP address instead of the host name for SSL connections. This works around misconfigured HTTP servers that throw an 'unrecognized name' error when SNI is enabled. (This only works if the 'Allow all certificates' setting is also enabled.)

    Default: false

    f.userAgentName - string

    Name the connector should use when identifying itself to a website in order to crawl it.

    Default: Lucidworks-Anda/2.0

    f.userAgentEmail - string

    Email address to use as part of connector identification.

    f.userAgentWebAddr - string

    Web address to use as part of connector identification.

    f.cookieSpec - string

    Default: browser-compatibility

    Allowed values: browser-compatibility, rfc-2965, best-match, ignore-all

    parserRetryCount - integer

    The maximum number of times the configured parser will try getting content before giving up.

    <= 5

    exclusiveMinimum: false

    exclusiveMaximum: true

    Default: 0

    delete404 - boolean

    Select this option to delete indexed pages that return a 404 or 410 error.

    Default: true

    f.addedHeaders - string

    Add these headers to HTTP requests. This is useful for web sites that require certain headers to let you visit them. Write each header on its own line in the format HeaderName: HeaderValue.
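    For example (header names and values illustrative):

```
X-Requested-With: XMLHttpRequest
Accept-Language: en-US
```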

    f.customLinkSelectors - array[string]

    By default, only standard anchor, iframe, frame, and link tags are fetched. This option lets you use one or more XPath expressions to parse links from custom places, such as //option/@value.

    fetchDelayMSPerHost - boolean

    If true, the 'Fetch delay (ms)' property will be applied for each host.

    Default: true

    rewriteLinkScript - string

    A Javascript function 'rewriteLink(link) { }' to modify links to documents before they are fetched.
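    A minimal sketch, assuming the connector passes each link as a string and expects the rewritten URL string back (stripping a session parameter is an illustrative use case, not a documented requirement):

```javascript
// Hypothetical link-rewrite sketch. Assumes rewriteLink(link) receives a URL
// string and must return the (possibly modified) URL string.
function rewriteLink(link) {
    // Strip the first 'sessionid' query parameter so session-specific URLs
    // collapse to a single canonical form, then drop a dangling '?' or '&'.
    return String(link)
        .replace(/([?&])sessionid=[^&#]*&?/, '$1')
        .replace(/[?&]$/, '');
}
```

    With this script, 'http://host.com/page?sessionid=abc' and 'http://host.com/page?sessionid=xyz' would both be fetched as 'http://host.com/page'.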

    diagnosticMode - boolean

    Enable to print more detailed information to the logs about each request.

    Default: false

    trackEmbeddedIDs - boolean

    If true, IDs produced by splitters are tracked to enable dedupe and deletion of embedded content.

    Default: true

    sitemap_incremental_crawling - boolean

    When enabled, only URLs found in the sitemap will be processed and crawled.

    Default: false

    f.index_items_discarded - boolean

    Enable to index discarded document metadata.

    Default: false