Legacy Product

Fusion 5.10

    Web V2 Connector Configuration Reference

    The Web V2 connector retrieves data from a Web site using HTTP and starting from a specified URL.

    There is a known issue: the Docker image of the Web V2 connector plugin does not contain the JavaScript dependencies, so the Web V2 connector does not work with JavaScript evaluation enabled. If you require JavaScript, use the Web V1 connector.

    Fusion uses the Open Graph Protocol as the default configuration for fields. Deviation from that standard configuration may exclude information from indexing during the crawl.

    If crawls fail with a corrupted CrawlDB error, reinstall the connector.

    Remote connectors

    V2 connectors support running remotely in Fusion versions 5.7.1 and later. Refer to Configure Remote V2 Connectors.

    Configuration

    When entering configuration values in the UI, use unescaped characters, such as \t for the tab character. When entering configuration values in the API, use escaped characters, such as \\t for the tab character.
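    For instance, a tab delimiter entered as \t in the UI must appear escaped in a JSON API payload. A minimal sketch (the property name below is a hypothetical placeholder, not one of this connector's properties):

```json
{
  "properties": {
    "someDelimiterProperty": "\\t"
  }
}
```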

    Connector for websites and web-based content resources.

    description - string

    Optional description

    <= 125 characters

    pipeline - string, required

    Name of the IndexPipeline used for processing output.

    >= 1 characters

    Match pattern: ^[a-zA-Z0-9_-]+$

    Default: lucidworks-web

    diagnosticLogging - boolean

    Enable diagnostic logging; disabled by default

    Default: false

    parserId - string

    The Parser to use in the associated IndexPipeline.

    Match pattern: ^[a-zA-Z0-9_-]+$

    Default: lucidworks-web

    coreProperties - Core Properties

    Common behavior and performance settings.

    fetchSettings - Fetch Settings

    System level settings for controlling fetch behavior and performance.

    numFetchThreads - number

    Maximum number of fetch threads; defaults to 20. This setting controls the number of threads that call the connector's fetch method. Higher values can, but do not always, help with overall fetch performance.

    >= 1

    <= 500

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 20

    Multiple of: 1

    indexingThreads - number

    Maximum number of indexing threads; defaults to 4. This setting controls the number of threads in the indexing service used for processing content documents emitted by this datasource. Higher values can sometimes help with overall fetch performance.

    >= 1

    <= 10

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 4

    Multiple of: 1

    pluginInstances - number

    Maximum number of plugin instances for distributed fetching. Only the specified number of plugin instances will do fetching. This is useful for distributing load between different instances.

    <= 500

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 0

    Multiple of: 1

    fetchResponseScheduledTimeout - number

    The maximum amount of time for a response to be scheduled. The task will be canceled if this setting is exceeded.

    >= 1000

    <= 500000

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 300000

    Multiple of: 1

    indexingInactivityTimeout - number

    The maximum amount of time to wait for indexing results (in seconds). If exceeded, the job will fail with an indexing inactivity timeout.

    >= 60

    <= 691200

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 86400

    Multiple of: 1

    pluginInactivityTimeout - number

    The maximum amount of time to wait for plugin activity (in seconds). If exceeded, the job will fail with a plugin inactivity timeout.

    >= 60

    <= 691200

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 600

    Multiple of: 1

    indexMetadata - boolean

    When enabled, the metadata of skipped items will be indexed to the content collection.

    Default: false

    indexContentFields - boolean

    When enabled, content fields will be indexed to the crawl-db collection

    Default: false

    id - string, required

    A unique identifier for this Configuration.

    >= 1 characters

    Match pattern: ^[a-zA-Z0-9_-]+$

    properties - Properties

    Plugin specific properties.

    startLinks - array[string]

    The URL(s) that the crawler will start crawling from, for example: https://en.wikipedia.org/wiki/Main_Page
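    As a sketch, a minimal datasource configuration using these properties might look like the following (the nesting follows this reference's layout; the id and start link values are illustrative):

```json
{
  "id": "my-web-datasource",
  "pipeline": "lucidworks-web",
  "parserId": "lucidworks-web",
  "properties": {
    "startLinks": ["https://en.wikipedia.org/wiki/Main_Page"]
  }
}
```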

    bulkStartLinks - string

    If a large number of start links must be defined, you can provide them here. One link per line.

    limitDocumentsConfig - Limit Documents Properties

    depth - number

    Number of levels in a directory or site tree to descend for documents.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: -1

    Multiple of: 1

    maxItems - number

    Maximum number of documents to fetch. The default (-1) means no limit.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: -1

    Multiple of: 1

    excludeExtensions - array[string]

    File extensions that should not be fetched. This limits the datasource to all extensions except those in this list.

    excludeRegexes - array[string]

    Regular expressions for URI patterns to exclude. This will limit this datasource to only URIs that do not match the regular expression.

    includeExtensions - array[string]

    File extensions to be fetched. This will limit this datasource to only these file extensions.

    includeRegexes - array[string]

    Regular expressions for URI patterns to include. This will limit this datasource to only URIs that match the regular expression.
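    Combined, the include/exclude filters above might be configured as follows (the nesting under limitDocumentsConfig is inferred from this reference's layout; the patterns and extensions are illustrative):

```json
{
  "properties": {
    "limitDocumentsConfig": {
      "includeRegexes": ["https://en\\.wikipedia\\.org/wiki/.*"],
      "excludeRegexes": [".*\\?action=edit.*"],
      "excludeExtensions": ["jpg", "png", "css", "js"]
    }
  }
}
```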

    maxSizeBytes - number

    Maximum size, in bytes, of a document to fetch.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 20000000

    Multiple of: 1

    indexItemsDiscarded - boolean

    Enable to index discarded document metadata

    Default: false

    crawlAuthenticationConfig - Crawl Authentication Properties

    maintainCookies - boolean

    If you are not using authentication, cookies are not stored between web requests by default (stateless). If checked, cookies are maintained between requests during the web crawl even when you are not using authentication. If you are using authentication, this checkbox has no effect on the crawl and can be ignored.

    Default: false

    basicAuth - array[object]

    Settings for Basic authentication

    object attributes:
    - host (string): Host
    - port (number): Port
    - realm (string): Realm
    - username (string): Username
    - password (string): Password
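    As an illustrative sketch, one Basic authentication entry using the attributes above (all values are placeholders):

```json
{
  "crawlAuthenticationConfig": {
    "basicAuth": [
      {
        "host": "intranet.example.com",
        "port": 443,
        "realm": "Protected Area",
        "username": "crawler",
        "password": "secret"
      }
    ]
  }
}
```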

    digestAuth - array[object]

    Settings for Digest authentication

    object attributes:
    - host (string): Host
    - port (number): Port
    - realm (string): Realm
    - username (string): Username
    - password (string): Password

    ntlmAuth - array[object]

    Settings for NTLM authentication

    object attributes:
    - host (string): Host
    - port (number): Port
    - realm (string): Realm
    - username (string): Username
    - password (string): Password

    formAuth - array[object]

    Settings for Form based authentication

    object attributes:
    - host (string): Host
    - port (number): Port
    - realm (string): Realm
    - username (string): Username
    - password (string): Password

    samlAuth - array[object]

    Settings for SAML/Smart Form based authentication. This allows you to visit one or more web pages that contain form inputs such as username, password, and security questions, submitting each one in turn in order to become authenticated.

    object attributes:
    - workstation (string): Workstation
    - domain (string): Domain
    - host (string): Host
    - port (number): Port
    - realm (string): Realm
    - username (string): Username
    - password (string): Password

    credentialsFile - string

    This property is no longer in use; it remains only for backward-compatible configuration validation.

    kerberosEnabled - boolean

    This property is no longer in use; it remains only for backward-compatible configuration validation.

    Default: false

    kerberosLoginContextName - string

    This property is no longer in use; it remains only for backward-compatible configuration validation.

    kerberosSpn - string

    This property is no longer in use; it remains only for backward-compatible configuration validation.

    kerberosPrincipal - string

    This property is no longer in use; it remains only for backward-compatible configuration validation.

    kerberosKeytabFile - string

    This property is no longer in use; it remains only for backward-compatible configuration validation.

    kerberosKeytabBase64 - string

    This property is no longer in use; it remains only for backward-compatible configuration validation.

    kerberosPassword - string

    This property is no longer in use; it remains only for backward-compatible configuration validation.

    obeyRobots - boolean

    If true, Allow, Disallow and other rules found in a robots.txt file will be obeyed.

    Default: false

    obeyRobotsMeta - boolean

    If true, rules like 'noindex', 'nofollow' and others found in a robots meta tag on a page or in the headers of the HTTP response are obeyed.

    Default: false

    obeyLinkNofollow - boolean

    If true, rel='nofollow' attributes on links are obeyed.

    Default: false

    proxy - string

    Address of the HTTP proxy, if required. This should be entered in the format host:port.

    allowAllCertificates - boolean

    If false, security checks will be performed on all SSL/TLS certificate signers and origins. This means self-signed certificates would not be supported.

    Default: false

    useIpAddressForSslConnections - boolean

    Use the IP address instead of the host name for SSL connections. This works around a misconfigured HTTP server that throws an 'unrecognized name' error when SNI is enabled. (This only works if the 'Allow all certificates' setting is also enabled.)

    Default: false

    crawlHistoryConfig - Crawl History Properties

    crawlDBType - string

    The type of crawl database to use, in-memory or on-disk.

    Default: on-disk

    commitAfterItems - number

    Commit the crawlDB to disk after this many items have been received. A smaller number results in a slower crawl because commits to disk are more frequent; conversely, a larger number means a job resumed after a crash must recrawl more records.

    >= 1

    <= 9999999

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 10000

    Multiple of: 1

    retainOutlinks - boolean

    If true, links found during fetching are stored in the crawldb. This increases precision in certain recrawl scenarios, but requires more memory and disk space.

    Default: false

    aliasExpiration - number

    The number of crawls after which an alias will expire. The default is 1 crawl.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 1

    Multiple of: 1

    discardLinkURLQueries - boolean

    If true, query parameters found in URLs will be removed before being added to the discovery queue.

    Default: false

    discardLinkURLAnchors - boolean

    If true, anchors found in URLs will be removed before being added to the discovery queue.

    Default: false

    crawlIdConfig - Crawl Id Properties

    userAgentName - string

    Name the connector should use when identifying itself to a website in order to crawl it.

    Default: Lucidworks-Anda/2.0

    userAgentEmail - string

    Email address to use as part of connector identification.

    userAgentWebAddr - string

    Web address to use as part of connector identification.

    crawlPerformanceConfig - Crawl Performance Properties

    fetchDelayMSPerHost - boolean

    If true, the 'Fetch delay (ms)' property will be applied for each host.

    Default: false

    fetchThreads - number

    The number of threads to use during fetching. The default is 5.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 5

    Multiple of: 1

    emitThreads - number

    The number of threads used to send documents from the connector to the index pipeline. The default is 5.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 5

    Multiple of: 1

    chunkSize - number

    The number of items to batch for each round of fetching. A higher value can make crawling faster, but memory usage is also increased. The default is 1.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 1

    Multiple of: 1

    fetchDelayMS - number

    Number of milliseconds to wait between fetch requests. The default is 0. This property can be used to throttle a crawl if necessary.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 0

    Multiple of: 1

    retryEmit - boolean

    If true, emit batch failures are retried on a document-by-document basis.

    Default: true

    failFastOnStartLinkFailure - boolean

    If true, when Fusion cannot connect to any of the provided start links, the crawl is stopped and an exception is logged.

    Default: true

    timeoutMS - number

    Time in milliseconds to wait for server response.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 10000

    Multiple of: 1

    requestRetryCount - number

    If an HTTP request fails, retry up to this many times before giving up. If set to 0, requests are not retried. This is useful in situations where crawls fail with errors like "The target server failed to respond".

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 0

    Multiple of: 1

    obeyRobotsDelay - boolean

    If true, Crawl-Delay rules in robots.txt will be obeyed. Disabling this option will speed up crawling, but is considered negative behavior for sites you do not control.

    Default: true

    parserRetryCount - number

    The maximum number of times the configured parser will try getting content before giving up

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 0

    Multiple of: 1

    dedupeConfig - Dedupe Properties

    dedupe - boolean

    If true, documents will be deduplicated. Deduplication can be done based on an analysis of the content, on the content of a specific field, or by a JavaScript function. If neither a field nor a script are defined, content analysis will be used.

    Default: false

    dedupeField - string

    Field to be used for dedupe. Define either a field or a dedupe script, otherwise the full raw content of each document will be used.

    dedupeScript - string

    Custom JavaScript to dedupe documents. The script must define a 'genSignature(content){}' function, but can use any combination of document fields. The function must return a string.
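    A minimal sketch of such a script, assuming 'content' is the raw document content string passed by the connector (the whitespace normalization and hash scheme here are illustrative choices, not the connector's built-in content analysis):

```javascript
// Hypothetical dedupe script. 'content' is assumed to be the raw document
// content string that the connector passes to genSignature.
function genSignature(content) {
  // Normalize whitespace and case so trivially different copies of the
  // same page produce the same signature.
  var normalized = String(content).replace(/\s+/g, ' ').trim().toLowerCase();
  // Simple 32-bit string hash, returned as a hex string.
  var hash = 0;
  for (var i = 0; i < normalized.length; i++) {
    hash = ((hash << 5) - hash + normalized.charCodeAt(i)) | 0;
  }
  return (hash >>> 0).toString(16);
}
```

    With 'Save dedupe signatures' enabled, the returned hex string would be stored in the 'dedupeSignature_s' field.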

    dedupeSaveSignature - boolean

    If true, the signature used for dedupe will be stored in a 'dedupeSignature_s' field. Note this may cause errors about 'immense terms' in that field.

    Default: false

    followCanonicalTags - boolean

    Deduplicate by indexing only the document at the URL specified in the canonical tag. See https://en.wikipedia.org/wiki/Canonical_link_element.

    Default: false

    canonicalTagsRedirectLimit - number

    Because canonical tag resolution may be cyclical, a limit must be applied to the total number of requests. This value ensures that the resolution finishes in a reasonable amount of time.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 4

    Multiple of: 1

    documentParsingConfig - Document Parsing Properties

    defaultCharSet - string

    Default character set to use when one is not declared in the HTTP headers.

    Default: UTF-8

    obeyCharSet - boolean

    Use the encoding sent by the web server (if any) when parsing content. If unset, Fusion will try to guess the character set when parsing.

    Default: true

    defaultMIMEType - string

    Default MIME type to use when one is not declared in the HTTP headers.

    Default: application/octet-stream

    appendTrailingSlashToLinks - boolean

    If true, a trailing '/' will be added to link URLs when the URL does not end in a dot ('.').

    Default: false

    scrapeLinksBeforeFiltering - boolean

    If true, links will be extracted from documents before any other document processing has occurred. By default, links are extracted after all other document processing.

    Default: false

    tagFields - array[string]

    HTML tags of elements to put into their own field in the index. The field will have the same name as the tag.

    tagIDFields - array[string]

    HTML tag IDs of elements to put into their own field in the index. The field will have the same name as the tag ID.

    tagClassFields - array[string]

    HTML tag classes of elements to put into their own field in the index. The field will have the same name as the tag class.

    selectorFields - array[string]

    List of Jsoup selectors for elements to put into their separate field in the index. The field will have the same name as the element. Syntax for jsoup selectors is available at http://jsoup.org/apidocs/org/jsoup/select/Selector.html.
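    For example, to pull an article body and a byline into their own index fields alongside tag-based fields, the extraction properties might be set as follows (the selector, tag, and class names are illustrative):

```json
{
  "documentParsingConfig": {
    "selectorFields": ["div.article-body", "span.byline"],
    "tagFields": ["h1"],
    "tagClassFields": ["author"]
  }
}
```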

    filteringRootTags - array[string]

    Root HTML elements whose child elements will be used to extract content. By default 'body' and 'head' elements are already included.

    includeSelectors - array[string]

    Jsoup-formatted selectors for elements to include in the crawled content.

    includeTags - array[string]

    HTML tag names of elements to include in the crawled content.

    includeTagClasses - array[string]

    HTML tag classes of elements to include in the crawled content.

    includeTagIDs - array[string]

    HTML tag IDs of elements to include in the crawled content.

    excludeSelectors - array[string]

    Jsoup-formatted selectors for elements to exclude from the crawled content. Syntax for jsoup selectors is available at http://jsoup.org/apidocs/org/jsoup/select/Selector.html.

    excludeTags - array[string]

    HTML tag names of elements to exclude from the crawled content.

    excludeTagClasses - array[string]

    HTML tag classes of elements to exclude from the crawled content.

    excludeTagIDs - array[string]

    HTML tag IDs of elements to exclude from the crawled content.

    customLinkSelectors - array[string]

    By default, only standard anchor tags, iframe tags, frame tags, and link tags are fetched. This allows you to use one or more XPath expressions to parse links from custom places, such as //option/@value.

    javascriptEvaluationConfig - Javascript Evaluation Properties

    crawlJS - boolean

    Evaluate JavaScript on web pages when crawling. This makes it possible for the Web fetcher to extract content from pages that is only available after JavaScript has prepared the document, but it may make the crawl slower because JavaScript loading can be time consuming.

    Default: false

    jsEnabledAuth - boolean

    Evaluate JavaScript when doing SAML/SmartForm authentication. This is only applicable if you have specified a SmartForms/SAML Authentication element in the "Crawl Authentication" area.

    Default: false

    jsPageLoadTimeout - number

    The time to wait, in milliseconds, for a page load to complete. If the timeout is -1, page loads can run indefinitely. Maximum: 180,000 ms (3 minutes).

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 20000

    Multiple of: 1

    jsScriptTimeout - number

    The time to wait, in milliseconds, for an asynchronous script to finish execution. If the timeout is -1, the script will be allowed to run indefinitely. Maximum: 30,000 ms.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 20000

    Multiple of: 1

    jsAjaxTimeout - number

    The time, in milliseconds, after which an AJAX request will be ignored when considering whether all AJAX requests have completed. Maximum: 180,000 ms (3 minutes).

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 20000

    Multiple of: 1

    extraLoadTimeMs - number

    The JavaScript evaluation process first waits for the DOM 'document.readyState' to be set to 'complete', then waits until there are no more pending AJAX requests before emitting the page's contents. Use this property to wait an additional number of milliseconds before emitting the contents. This gives background JavaScript routines a chance to finish rendering the page before the contents are emitted.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 250

    Multiple of: 1

    extraPageLoadDeltaChars - number

    This parameter is used when the "Extra time to wait for content after page load (ms)" parameter is greater than 0. The additional wait stops early if the web page's content grows by at least this many characters. If set to 0 (the default), any increase in character count indicates the page load is finished.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 0

    Multiple of: 1

    quitTimeoutMs - number

    The amount of time to wait for a web browser to quit before killing the browser process.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 5000

    Multiple of: 1

    useRequestCounter - boolean

    Use the request counter plugin to wait for all pending AJAX requests to complete before loading the page contents.

    Default: true

    requestCounterMinWaitMs - number

    When the request counter is enabled, the request count may report 0 pending requests early on, even though some AJAX requests have not started yet. This parameter specifies how long, in milliseconds, to wait for a non-zero count to be returned. Once the request count has been non-zero at any point, the next count of 0 is assumed to mean the page has finished loading.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 750

    Multiple of: 1

    requestCounterMaxWaitMs - number

    The request counter plugin counts active AJAX requests after a page has loaded, until there are no more pending requests. This parameter specifies how long, in milliseconds, to wait for the request count to reach 0 before giving up.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 20000

    Multiple of: 1

    headlessBrowser - boolean

    Applicable only when "Evaluate JavaScript" is selected. Deselect this checkbox if you want to see browser windows displayed while fetchers process web pages. If selected, browsers run in "headless" mode, that is, in the background. If running on a server with no desktop interface, this must stay selected.

    Default: true

    takeScreenshot - boolean

    Applicable only when "Evaluate JavaScript" is selected. Takes a screenshot of the fully rendered page and indexes it. Screenshots are indexed in a field called "screenshot_bin". Make sure your schema specifies this field as a binary field, or indexing will fail. To add it, go to System > Solr Config > Managed Schema, then add <dynamicField indexed="true" name="*_bin" stored="true" type="binary"/>.

    Default: false

    screenshotFullscreen - boolean

    When taking a screenshot, capture the full screen.

    Default: false

    viewportWidth - number

    Set an optional browser viewport width. If not specified, will default to 800.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Multiple of: 1

    viewportHeight - number

    Set an optional browser viewport height. If not specified, will default to 600.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Multiple of: 1

    deviceScreenFactor - number

    Set an optional browser device screen factor. If not specified, will default to 1 (no scaling).

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Multiple of: 1

    simulateMobile - boolean

    Simulate a mobile device

    Default: false

    mobileScreenWidth - number

    If simulate mobile is checked, this specifies the device's emulated screen width.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Multiple of: 1

    mobileScreenHeight - number

    If simulate mobile is checked, this specifies the device's emulated screen height.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Multiple of: 1

    chromeBinaryPath - string

    This property is no longer in use; it remains only for backward-compatible configuration validation.

    chromeExtraCommandLineArgs - string

    Specify additional command line arguments to add to the chromium executable when it is run.

    linkDiscoveryConfig - Link Discovery

    restrictToTreeIgnoredHostPrefixes - array[string]

    Modifies the behavior of 'Restrict crawl to start-link tree' to ignore the configured list of prefixes when restricting the crawl. Commonly, 'www.' is ignored so links with the same domain are allowed, whether of the form 'http://host.com' or 'http://www.host.com'. This option requires 'Restrict to start-link tree' to be enabled to have any effect.

    restrictToTree - boolean

    If true, only URLs that match the startLinks URL domain will be followed

    Default: true

    restrictToTreeAllowSubdomains - boolean

    Modifies the behavior of 'Restrict crawl to start-link tree' so that a link to any sub-domain of the start links is allowed. For example, if the start link is 'http://host.com', this option ensures that links to 'http://news.host.com' are also followed. This option requires 'Restrict to start-link tree' to be enabled to have any effect.

    Default: false

    restrictToTreeUseHostAndPath - boolean

    Modifies the behavior of 'Restrict crawl to start-link tree' to include the 'path' of the start link in the restriction logic. For example, if the start link is 'http://host.com/US', this option will limit all followed URLs to ones starting with the '/US/' path. This option requires 'Restrict to start-link tree' to be enabled to have any effect.

    Default: false

    sitemapURLs - array[string]

    URLs for sitemaps, to be used as a basis for link discovery. Rules found in sitemaps will not be processed.

    respectMetaEquivRedirects - boolean

    If true, the connector will follow metatags with refresh redirects such as <meta http-equiv="refresh" />.

    Default: false

    allowCircularRedirects - boolean

    If true, a request can be redirected to the same URL multiple times

    Default: false

    addedHeaders - string

    Add these headers to HTTP requests. This is useful for web sites that require certain headers to let you visit them. Write each header on its own line, in the format HeaderName: HeaderValue.
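    For example (the header names and values are placeholders):

```
X-Custom-Token: abc123
Accept-Language: en-US
```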

    recrawlRulesConfig - Recrawl Rules

    delete - boolean

    Set to true to remove documents from the index when they can no longer be accessed as unique documents.

    Default: true

    deleteErrorsAfter - number

    Number of times a website can error out, for example with a 500 error or a connection timeout, before a document is removed from the index. The default of -1 means such documents are never removed. Note that pages that return a 404 status code can be configured to be removed immediately regardless of this setting.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: -1

    Multiple of: 1

    refreshAll - boolean

    Set to true to always recrawl all items found in the crawldb.

    Default: false

    refreshStartLinks - boolean

    Set to true to recrawl items specified in the list of start links.

    Default: false

    refreshErrors - boolean

    Set to true to recrawl items that failed during the last crawl.

    Default: false

    refreshOlderThan - number

    Recrawl items whose last fetch date is more than this many seconds ago.

    >= -2147483648

    <= 2147483647

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: -1

    Multiple of: 1

    refreshIDPrefixes - array[string]

    Recrawl all items whose IDs begin with one of these prefix values.

    refreshIDRegexes - array[string]

    Recrawl all items whose IDs match one of these regular expression patterns.

    refreshScript - string

    A JavaScript function ('shouldRefresh()') to customize the items recrawled.
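    A minimal sketch of such a script. The argument shape is an assumption here: we suppose the connector passes an item object exposing its URL as 'id':

```javascript
// Hypothetical recrawl script. The exact argument passed by the connector is
// an assumption; here we suppose an item object with an 'id' URL string.
function shouldRefresh(item) {
  // Recrawl only items under a frequently changing section of the site.
  return item.id.indexOf('/news/') !== -1;
}
```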

    forceRefresh - boolean

    Set to true to recrawl all items even if they have not changed since the last crawl.

    Default: false

    forceRefreshClearSignatures - boolean

    If true, signatures will be cleared if force recrawl is enabled.

    Default: false

    delete404 - boolean

    Select this option to delete indexed pages that return a 404 or 410 error.

    Default: true

    sitemapIncrementalCrawling - boolean

    When enabled, only URLs found in the sitemap will be processed and crawled.

    Default: false

    cookieSpec - string

    Default: browser-compatibility

    rewriteLinkScript - string

    A JavaScript function 'rewriteLink(link) { }' to modify links to documents before they are fetched.
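    As an illustrative sketch, a rewriteLink function that strips hypothetical 'utm_' tracking parameters from discovered links (the parameter convention is an assumption about the target site, not part of the connector):

```javascript
// Sketch of a link-rewrite script. 'link' is assumed to be the URL string of
// a discovered link; the return value is the URL that will be fetched.
function rewriteLink(link) {
  var parts = String(link).split('?');
  if (parts.length < 2) {
    return link; // no query string, nothing to rewrite
  }
  // Keep every query parameter that does not start with 'utm_'.
  var kept = parts[1].split('&').filter(function (param) {
    return param.indexOf('utm_') !== 0;
  });
  return kept.length > 0 ? parts[0] + '?' + kept.join('&') : parts[0];
}
```

    Stripping volatile tracking parameters like this can also reduce duplicate documents, since the same page is no longer fetched once per distinct query string.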