Legacy Product

Fusion 5.10

    Drupal 7.x V1 Connector Configuration Reference

    The Drupal connector requires the REST API provided by Drupal's Services 7.x-3.11 module. Refer to this page to install the necessary packages: www.drupal.org/node/783236

    This connector is no longer functional in Fusion 5.9 and later. This incompatibility arises due to changes implemented in the data source version or related APIs.

    Although the V1 connector might still be visible in the Fusion UI, it cannot be used. To ensure uninterrupted operation, we strongly recommend switching to a supported V2 connector.

    A replacement for this connector is in active development and will be released at a future date.

    For the Drupal 8/9 connector, see Drupal 8/9 Connector Configuration Reference.

    Configuration

    When entering configuration values in the UI, use unescaped characters, such as \t for the tab character. When entering configuration values in the API, use escaped characters, such as \\t for the tab character.

    The Drupal connector retrieves data from a Drupal instance. It requires installation of Drupal's Services module, as described at https://www.drupal.org/project/services, and has only been tested with Drupal 7.x.
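
    For orientation, the sketch below shows how the properties documented on this page fit together in a datasource definition. It is only a sketch: every name and URL is a placeholder, and only keys described below are included.

        # Hypothetical minimal datasource definition; every value is a placeholder.
        drupal_datasource = {
            "id": "my-drupal",                    # must match ^[a-zA-Z0-9_-]+$
            "pipeline": "drupal-pipeline",        # an existing index pipeline
            "properties": {
                "startLinks": ["http://drupal.example.com/"],
                "f.endpoint": "rest",             # REST endpoint name defined in Drupal
                "f.pageSize": 100,
            },
        }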

    description - string

    Optional description for this datasource.

    id - string, required

    Unique name for this datasource.

    >= 1 characters

    Match pattern: ^[a-zA-Z0-9_-]+$

    parserId - string

    Parser used when parsing raw content. The retry parsing setting is available under crawl performance (advanced setting).

    pipeline - string, required

    Name of an existing index pipeline for processing documents.

    >= 1 characters

    properties - Properties

    Datasource configuration properties

    aliasExpiration - integer

    The number of crawls after which an alias will expire. The default is 1 crawl.

    Default: 1

    chunkSize - integer

    The number of items to batch for each round of fetching. A higher value can make crawling faster, but memory usage is also increased. The default is 1.

    Default: 1

    crawlDBType - string

    The type of crawl database to use, in-memory or on-disk.

    Default: in-memory

    Allowed values: in-memory, on-disk

    db - Connector DB

    Type and properties for a ConnectorDB implementation to use with this datasource.

    aliases - boolean

    Keep track of original URIs that resolved to the current URI. This negatively impacts performance and the size of the database.

    Default: false

    inlinks - boolean

    Keep track of incoming links. This negatively impacts performance and the size of the database.

    Default: false

    inv_aliases - boolean

    Keep track of target URIs that the current URI resolves to. This negatively impacts performance and the size of the database.

    Default: false

    type - string

    Fully qualified class name of ConnectorDb implementation.

    >= 1 characters

    Default: com.lucidworks.connectors.db.impl.MapDbConnectorDb

    dedupe - boolean

    If true, documents will be deduplicated. Deduplication can be done based on an analysis of the content, on the content of a specific field, or by a JavaScript function. If neither a field nor a script is defined, content analysis will be used.

    Default: false

    dedupeField - string

    Field to be used for dedupe. Define either a field or a dedupe script; otherwise, the full raw content of each document will be used.

    dedupeSaveSignature - boolean

    If true, the signature used for dedupe will be stored in a 'dedupeSignature_s' field. Note this may cause errors about 'immense terms' in that field.

    Default: false

    dedupeScript - string

    Custom JavaScript to dedupe documents. The script must define a 'genSignature(content){}' function, but can use any combination of document fields. The function must return a string.
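
    To make the dedupe options above concrete, here is a sketch of both the field-based and the script-based variants. The field name and the genSignature body are purely illustrative; the script uses only plain string operations because the exact shape of the content argument is not documented here.

        # Hypothetical dedupe settings; the field name is illustrative only.
        dedupe_by_field = {
            "dedupe": True,
            "dedupeField": "title",
            "dedupeSaveSignature": False,   # set True to store the signature in dedupeSignature_s
        }

        # Script-based variant: the connector expects a JavaScript genSignature(content)
        # function that returns a string. The body below is a trivial illustration.
        dedupe_by_script = {
            "dedupe": True,
            "dedupeScript": (
                "function genSignature(content) {"
                "  return String(content).toLowerCase().replace(/\\s+/g, ' ');"
                "}"
            ),
        }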

    delete - boolean

    Set to true to remove documents from the index when they can no longer be accessed as unique documents.

    Default: true

    deleteErrorsAfter - integer

    Number of fetch failures to tolerate before removing a document from the index. The default of -1 means documents are never removed because of fetch failures.

    Default: -1

    depth - integer

    Number of levels in a directory or site tree to descend for documents.

    Default: -1

    diagnosticMode - boolean

    Enable to print more detailed information to the logs about each request.

    Default: false

    emitThreads - integer

    The number of threads used to send documents from the connector to the index pipeline. The default is 5.

    Default: 5

    excludeExtensions - array[string]

    File extensions that should not be fetched. This will limit this datasource to all extensions except those in this list.

    excludeRegexes - array[string]

    Regular expressions for URI patterns to exclude. This will limit this datasource to only URIs that do not match the regular expression.

    f.cacheSize - integer

    The number of entries to cache when making REST requests.

    Default: 2000

    f.comment - string

    Name of the Comment resource, used to index comment data. If you did not create an alias for the 'comment' object, keep the default.

    Default: comment

    f.drupal_password - string

    Password to access the REST service, if required.

    f.drupal_username - string

    Optional username, only required if the REST service requires authentication.

    f.endpoint - string

    Name of the REST endpoint defined when you added the REST service to Drupal.

    Default: rest

    f.file - string

    Name of the File resource, used to index file data. If you did not create an alias for the 'file' object, keep the default.

    Default: file

    f.node - string

    Name of the Node resource, used to index node data. If you did not create an alias for the 'node' object, keep the default.

    Default: node

    f.pageSize - integer

    The number of items returned per request. The Drupal default without this value is 20; requesting more items per page reduces the overall number of node requests needed to fetch all content.

    Default: 100

    f.taxonomy_term - string

    Name of the Taxonomy Term resource, used to index taxonomy term data. If you did not create an alias for the 'taxonomy_term' object, keep the default.

    Default: taxonomy_term

    f.taxonomy_vocabulary - string

    Name of the Taxonomy Vocabulary resource, used to index taxonomy data. If you did not create an alias for the 'taxonomy_vocabulary' object, keep the default.

    Default: taxonomy_vocabulary

    f.timeoutMS - integer

    Time in ms to wait for a server response.

    Default: 10000

    f.user - string

    Name of the User resource, used to log in if authentication to the REST service is required. If you did not create an alias for the 'user' object, keep the default.

    Default: user
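
    Taken together, the Drupal-specific f.* properties above describe how the connector reaches the Services REST API. The sketch below groups them; the credentials are placeholders and the resource names keep their defaults.

        # Illustrative Drupal 7.x Services settings; credentials are placeholders.
        drupal_rest_properties = {
            "f.endpoint": "rest",              # endpoint name chosen when adding the REST service
            "f.drupal_username": "api_user",   # only needed if the service requires authentication
            "f.drupal_password": "change-me",
            "f.node": "node",                  # keep the defaults unless you aliased these resources
            "f.comment": "comment",
            "f.file": "file",
            "f.user": "user",
            "f.taxonomy_term": "taxonomy_term",
            "f.taxonomy_vocabulary": "taxonomy_vocabulary",
            "f.pageSize": 100,                 # larger pages mean fewer node requests
            "f.timeoutMS": 10000,              # server response timeout in milliseconds
            "f.cacheSize": 2000,               # entries cached for REST requests
        }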

    failFastOnStartLinkFailure - boolean

    If true, when Fusion cannot connect to any of the provided start links, the crawl is stopped and an exception logged.

    Default: true

    fetchDelayMS - integer

    Number of milliseconds to wait between fetch requests. The default is 0. This property can be used to throttle a crawl if necessary.

    Default: 0

    fetchDelayMSPerHost - boolean

    If true, the 'Fetch delay (ms)' property will be applied for each host.

    Default: false

    fetchThreads - integer

    The number of threads to use during fetching. The default is 5.

    Default: 5
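
    The batching, threading, and delay properties interact: larger batches and more threads speed up the crawl at the cost of memory and load on the Drupal server, while a fetch delay throttles it. The values below only illustrate that trade-off and are not recommendations.

        # Illustrative throughput settings, not recommendations.
        crawl_performance = {
            "chunkSize": 10,              # larger batches fetch faster but use more memory
            "fetchThreads": 5,            # parallel fetch threads
            "emitThreads": 5,             # parallel sends to the index pipeline
            "fetchDelayMS": 250,          # pause between fetch requests, in milliseconds
            "fetchDelayMSPerHost": True,  # apply the delay per host rather than globally
        }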

    forceRefresh - boolean

    Set to true to recrawl all items even if they have not changed since the last crawl.

    Default: false

    includeExtensions - array[string]

    File extensions to be fetched. This will limit this datasource to only these file extensions.

    includeRegexes - array[string]

    Regular expressions for URI patterns to include. This will limit this datasource to only URIs that match the regular expression.
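
    As an example of combining the extension and regular-expression filters above, the settings below restrict a hypothetical crawl to HTML content under a /blog path; all patterns are illustrative.

        # Illustrative scope filters; patterns and extensions are examples only.
        crawl_scope = {
            "includeRegexes": [".*/blog/.*"],         # only URIs matching this pattern
            "excludeRegexes": [".*\\?page=[0-9]+$"],  # skip paginated duplicates
            "includeExtensions": ["html", "htm"],
            "excludeExtensions": ["pdf", "zip"],
        }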

    indexCrawlDBToSolr - boolean

    EXPERIMENTAL: Set to true to index the crawl-database into a 'crawldb_<datasource-ID>' collection in Solr.

    Default: false

    initial_mapping - Initial field mapping

    Provides mapping of fields before documents are sent to an index pipeline.

    condition - string

    Define a conditional script that must result in true or false. This can be used to determine if the stage should process or not.

    label - string

    A unique label for this stage.

    <= 255 characters

    mappings - array[object]

    List of mapping rules

    Default: {"operation":"move","source":"charSet","target":"charSet_s"}{"operation":"move","source":"fetchedDate","target":"fetchedDate_dt"}{"operation":"move","source":"lastModified","target":"lastModified_dt"}{"operation":"move","source":"signature","target":"dedupeSignature_s"}{"operation":"move","source":"contentSignature","target":"signature_s"}{"operation":"move","source":"length","target":"length_l"}{"operation":"move","source":"mimeType","target":"mimeType_s"}{"operation":"move","source":"parent","target":"parent_s"}{"operation":"move","source":"owner","target":"owner_s"}{"operation":"move","source":"group","target":"group_s"}

    object attributes:
        operation (display name: Operation) - string
        source (display name: Source Field) - string, required
        target (display name: Target Field) - string
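
    Custom rules follow the same object shape as the defaults listed above. The example below is hypothetical; it simply moves a 'body' field and copies a 'title' field.

        # Hypothetical extra mapping rules using the documented object shape.
        extra_mappings = [
            {"operation": "move", "source": "body", "target": "body_t"},
            {"operation": "copy", "source": "title", "target": "title_s"},
        ]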

    reservedFieldsMappingAllowed - boolean

    Default: false

    skip - boolean

    Set to true to skip this stage.

    Default: false

    unmapped - Unmapped Fields

    If fields do not match any of the field mapping rules, these rules will apply.

    operation - string

    The type of mapping to perform: move, copy, delete, add, set, or keep.

    Default: copy

    Allowed values: copy, move, delete, set, add, keep

    source - string

    The name of the field to be mapped.

    target - string

    The name of the field to be mapped to.

    maxItems - integer

    Maximum number of documents to fetch. The default (-1) means no limit.

    Default: -1

    parserRetryCount - integer

    The maximum number of times the configured parser will try getting content before giving up.

    <= 5

    exclusiveMinimum: false

    exclusiveMaximum: true

    Default: 0

    reevaluateCrawlDbOnStart - boolean

    If true, existing crawldb entries are re-evaluated for legality on startup.

    Default: false

    refreshAll - boolean

    Set to true to always recrawl all items found in the crawldb.

    Default: true

    refreshErrors - boolean

    Set to true to recrawl items that failed during the last crawl.

    Default: false

    refreshIDPrefixes - array[string]

    One or more ID prefixes; items whose IDs begin with any of these values are recrawled.

    refreshIDRegexes - array[string]

    One or more regular expressions; items whose IDs match any of these patterns are recrawled.

    refreshOlderThan - integer

    Recrawl items whose last fetched date is more than this number of seconds ago.

    Default: -1

    refreshScript - string

    A JavaScript function ('shouldRefresh()') to customize the items recrawled.
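
    The refresh script is carried in the configuration as a string of JavaScript. The sketch below recrawls only items whose IDs contain '/news/'; note that passing the item ID into shouldRefresh is an assumption made for illustration, since the function's arguments are not documented here.

        # Illustrative refresh settings; the shouldRefresh argument is assumed.
        refresh_settings = {
            "refreshErrors": True,         # also recrawl items that failed last time
            "refreshOlderThan": 86400,     # recrawl items last fetched more than a day ago
            "refreshScript": (
                "function shouldRefresh(id) {"
                "  return id.indexOf('/news/') !== -1;"
                "}"
            ),
        }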

    refreshStartLinks - boolean

    Set to true to recrawl items specified in the list of start links.

    Default: false

    retainOutlinks - boolean

    Set to true for links found during fetching to be stored in the crawldb. This increases precision in certain recrawl scenarios, but requires more memory and disk space.

    Default: false

    retryEmit - boolean

    Set to true to retry failed emit batches on a document-by-document basis.

    Default: true

    rewriteLinkScript - string

    A JavaScript function 'rewriteLink(link) { }' to modify links to documents before they are fetched.
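
    A trivial rewriteLink sketch is shown below; it assumes the link argument is a string URI and simply forces HTTPS.

        # Trivial illustration of a link-rewrite script; assumes 'link' is a string URI.
        rewrite_settings = {
            "rewriteLinkScript": (
                "function rewriteLink(link) {"
                "  return link.replace('http://', 'https://');"
                "}"
            ),
        }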

    startLinks - array[string]

    One or more starting URIs for this datasource.
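
    Finally, a datasource built from these properties is typically created through Fusion's REST API. The sketch below is hypothetical: the host, credentials, and the /api/connectors/datasources path are assumptions to verify against your Fusion 5.10 installation; the configuration reuses property names documented above.

        import requests

        # Hypothetical end-to-end example; host, credentials, and endpoint path are assumptions.
        fusion = "https://fusion.example.com:6764"   # placeholder host
        datasource = {
            "id": "my-drupal",
            "pipeline": "drupal-pipeline",
            "properties": {
                "startLinks": ["http://drupal.example.com/"],
                "f.endpoint": "rest",
            },
        }

        resp = requests.post(
            f"{fusion}/api/connectors/datasources",   # assumed endpoint path
            json=datasource,
            auth=("admin", "password123"),            # placeholder credentials
        )
        resp.raise_for_status()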