Legacy Product

Fusion 5.10

    Confluence V1 Connector Configuration Reference

    Retrieve data from the Atlassian Confluence Wiki CMS. You can configure this datasource to crawl pages, spaces, blog posts, comments, and attachments.

    V1 deprecation and removal notice

    Starting in Fusion 5.12.0, all V1 connectors are deprecated. This means they are no longer being actively developed and will be removed in Fusion 5.13.0.

    The replacement for this connector is in active development at this time and will be released at a future date.

    If you are using this connector, you must migrate to the replacement connector or a supported alternative before upgrading to Fusion 5.13.0. We recommend migrating to the replacement connector as soon as possible to avoid any disruption to your workflows.

    The Fusion Confluence connector supports Confluence Server versions 5.5 and later and Confluence Cloud.

    Confluence Connector’s security trimming

    Why do some field names have different numbers?

    After crawling some test Confluence content, the Solr index has ACL fields such as acl_users_0_s and acl_groups_0_ss, but the field names can have different numbers. For example, some documents have acl_users_1_s or acl_users_6_s.

    This is a consequence of how Confluence propagates user and group viewing permissions through an item's ancestors. Each of these fields represents one ancestor level in the item's security hierarchy. If a user does not match EACH level of permissions, the user cannot see the document and it is filtered out.

    You will see three fields that are used during security trimming:

    • ancestorCount_i stores the number of ancestors this item has

    • acl_users_i_s stores the users allowed to see this item at ancestor number i

    • acl_groups_i_s stores the groups allowed to see this item at ancestor number i

    Access checks for users and groups are evaluated linearly, ancestor by ancestor.

    During security trimming, you give the filter a queryUser, and Fusion returns the Confluence documents that user can access.

    The Confluence security trimming algorithm does the following:

    1. Calculate the maximum ancestorCount_i of all documents in the index (max(ancestorCount_i)).

    2. Query Confluence for the Confluence Security Groups that queryUser is part of.

    3. Then, for each i from 0 up to (but not including) max(ancestorCount_i), append an AND clause to the security filter that matches against each ancestor level of the acl_users_i_s and acl_groups_i_s fields:

        (acl_users_i_s:_lw_confluence_anonymous_ OR acl_users_i_s:queryUser OR acl_group_i_s:group1 OR acl_group_i_s:group2 ... )

    For example:

    queryUser = ndipiazza
    groupsUserIsIn = EngGroup, NorthAmericaGroup
    max(ancestorCount_i) = 3

    Then the filter would be:

    (acl_users_0_s:lw_confluence_anonymous OR acl_users_0_s:ndipiazza OR acl_group_0_s:EngGroup OR acl_group_0_s:NorthAmericaGroup)
    AND (acl_users_1_s:lw_confluence_anonymous OR acl_users_1_s:ndipiazza OR acl_group_1_s:EngGroup OR acl_group_1_s:NorthAmericaGroup)
    AND (acl_users_2_s:lw_confluence_anonymous OR acl_users_2_s:ndipiazza OR acl_group_2_s:EngGroup OR acl_group_2_s:NorthAmericaGroup)

    As you see, because these are AND’d together, if the user does not match EACH level of permissions, the user cannot see the document and the doc will be filtered out.
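
    The algorithm above can be sketched in JavaScript. This is a hand-written illustration, not the connector's shipped implementation; the function name buildSecurityFilter is invented, and the field names and anonymous-user marker follow the worked example above.

```javascript
// Illustrative sketch of the Confluence security-trimming filter described
// above. buildSecurityFilter is a made-up name; field names follow the
// worked example (acl_users_i_s, acl_group_i_s, lw_confluence_anonymous).
function buildSecurityFilter(queryUser, groups, maxAncestorCount) {
  const clauses = [];
  for (let i = 0; i < maxAncestorCount; i++) {
    // Each ancestor level must match: anonymous access, the user, or a group.
    const terms = [
      `acl_users_${i}_s:lw_confluence_anonymous`,
      `acl_users_${i}_s:${queryUser}`,
    ];
    for (const g of groups) {
      terms.push(`acl_group_${i}_s:${g}`);
    }
    clauses.push(`(${terms.join(" OR ")})`);
  }
  // AND the per-ancestor clauses together: failing any level hides the doc.
  return clauses.join(" AND ");
}

// The worked example: ndipiazza in EngGroup and NorthAmericaGroup with
// max(ancestorCount_i) = 3 yields three AND'd clauses.
const filter = buildSecurityFilter("ndipiazza", ["EngGroup", "NorthAmericaGroup"], 3);
console.log(filter);
```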

    Configuration

    When entering configuration values in the UI, use unescaped characters, such as \t for the tab character. When entering configuration values in the API, use escaped characters, such as \\t for the tab character.
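
    For reference, a minimal datasource payload for the API might look like the sketch below. Only properties documented on this page are shown; the id, pipeline, host, space keys, and credential values are placeholders, and a real payload also carries connector/type identifiers that are omitted here.

```json
{
  "id": "confluence-wiki",
  "pipeline": "confluence-default",
  "properties": {
    "f.confluenceHost": "wiki.example.com",
    "f.confluencePort": 443,
    "f.useHttps": true,
    "f.confluenceAuthType": "basic",
    "f.confluenceUsername": "crawl-user",
    "f.confluencePassword": "<api-token>",
    "f.includedSpaces": ["ENG", "DOCS"],
    "f.enableSecurityTrimming": true
  }
}
```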

    Atlassian Confluence Wiki CMS

    description - string

    Optional description for this datasource.

    id - string (required)

    Unique name for this datasource.

    >= 1 characters

    Match pattern: ^[a-zA-Z0-9_-]+$

    parserId - string

    Parser used when parsing raw content. For some connectors, a configuration to 'retry' parsing if an error occurs is available as an advanced setting.

    pipeline - string (required)

    Name of an existing index pipeline for processing documents.

    >= 1 characters

    properties - Properties

    Datasource configuration properties

    aliasExpiration - integer

    The number of crawls after which an alias will expire. The default is 1 crawl.

    Default: 1

    chunkSize - integer

    The number of items to batch for each round of fetching. A higher value can make crawling faster, but memory usage is also increased. The default is 1.

    Default: 1

    commitAfterItems - integer

    Commit the crawlDB to disk after this many items have been received. A smaller number results in a slower crawl because commits to disk are more frequent; conversely, a larger number means that a job resumed after a crash will need to recrawl more records.

    Default: 10000

    crawlDBType - string

    The type of crawl database to use, in-memory or on-disk.

    Default: on-disk

    Allowed values: in-memory, on-disk

    db - Connector DB

    Type and properties for a ConnectorDB implementation to use with this datasource.

    aliases - boolean

    Keep track of the original URIs that resolved to the current URI. This negatively impacts performance and increases the size of the DB.

    Default: false

    inlinks - boolean

    Keep track of incoming links. This negatively impacts performance and increases the size of the DB.

    Default: false

    inv_aliases - boolean

    Keep track of the target URIs that the current URI resolves to. This negatively impacts performance and increases the size of the DB.

    Default: false

    type - string

    Fully qualified class name of ConnectorDb implementation.

    >= 1 characters

    Default: com.lucidworks.connectors.db.impl.MapDbConnectorDb

    dedupe - boolean

    If true, documents will be deduplicated. Deduplication can be done based on an analysis of the content, on the content of a specific field, or by a JavaScript function. If neither a field nor a script are defined, content analysis will be used.

    Default: false

    dedupeField - string

    Field to be used for dedupe. Define either a field or a dedupe script; otherwise, the full raw content of each document will be used.

    dedupeSaveSignature - boolean

    If true, the signature used for dedupe will be stored in a 'dedupeSignature_s' field. Note this may cause errors about 'immense terms' in that field.

    Default: false

    dedupeScript - string

    Custom JavaScript to dedupe documents. The script must define a 'genSignature(content){}' function, but can use any combination of document fields. The function must return a string.
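
    As a sketch of what a dedupe script can look like (assumptions: content arrives as a string, and the normalization strategy here is an illustrative choice, not a requirement):

```javascript
// Example dedupe script. Fusion calls genSignature(content); the returned
// string is used as the document's dedupe signature. Normalizing case and
// whitespace makes trivially reformatted copies produce the same signature.
function genSignature(content) {
  return String(content).toLowerCase().replace(/\s+/g, " ").trim();
}
```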

    delete - boolean

    Set to true to remove documents from the index when they can no longer be accessed as unique documents.

    Default: true

    deleteErrorsAfter - integer

    Number of fetch failures to tolerate before removing a document from the index. The default of -1 means documents are never removed because of fetch failures.

    Default: -1

    diagnosticMode - boolean

    Enable to print more detailed information to the logs about each request.

    Default: false

    emitThreads - integer

    The number of threads used to send documents from the connector to the index pipeline. The default is 5.

    Default: 5

    enable_security_trimming - Enable Security Trimming

    f.cacheUserGroupLimit - integer

    Only applicable when f.cacheUserGroups is enabled; this limits the number of users whose groups are cached. This is used for testing purposes only. The default of -1 causes all users to be cached.

    >= -1

    exclusiveMinimum: false

    Default: -1

    f.cacheUserGroups - boolean

    If true, user groups will be cached so that the Confluence API is not called at query time.

    Default: false

    f.enableSecurityTrimming - boolean

    Enable security trimming of Confluence searches. To check security properly, the crawl user needs Admin rights so that it can see all content and filter accordingly.

    Default: true

    f.indexGroupPermissions - boolean

    Enable indexing of user groups that have permission to view Confluence content.

    Default: true

    f.indexUserPermissions - boolean

    Enable indexing of users who have permission to view Confluence content.

    Default: true

    f.useJsonRpc - boolean

    Use JSON-RPC instead of REST API for permissions retrieval. This can be useful for older Confluence Server versions that don't have full permissions support via REST API.

    Default: false

    f.userGroupCacheCollectionName - string

    The name of the Solr collection that will store this datasource's user group cache. This user group cache collection can be shared with other datasources; a `ds_id_s` field is used to query users/groups separately per datasource.

    Default: confluence_usr_grp

    excludeExtensions - array[string]

    File extensions that should not be fetched. This will limit this datasource to all extensions except this list.

    excludeRegexes - array[string]

    Regular expressions for URI patterns to exclude. This will limit this datasource to only URIs that do not match the regular expression.

    f.attachmentMaxSizeBytes - integer

    Maximum size, in bytes, of an attachment to fetch.

    Default: 4194304

    f.commentFormat - string

    Index comments as JSON in 'comments_ss' or as separate documents?

    Default: Separate doc

    Allowed values: Embedded JSON, Separate doc

    f.confluenceAuthType - string

    Authentication method to use. Note: basic is the only allowed method for connecting to Confluence hosted by Atlassian.

    Default: basic

    Allowed values: basic, request, ntlm

    f.confluenceCtxPath - string

    Context path under which Confluence instance is deployed. Part of the URL for Spaces. An example path might be `/confluence/`.

    Default: /

    f.confluenceHost - string

    Hostname domain portion of the Confluence server to crawl.

    f.confluencePassword - string

    Password/API Token for the Confluence user.

    f.confluencePort - integer

    An outbound port used by the Confluence server.

    Default: 443

    f.confluenceUsername - string

    Name of any existing Confluence user. Admin access not required.

    f.crawlAttachments - boolean

    Enable indexing of attachments.

    Default: true

    f.crawlBlogPosts - boolean

    Enable indexing of Confluence blog posts.

    Default: true

    f.crawlComments - boolean

    Enable indexing of comments.

    Default: true

    f.crawlPages - boolean

    Enable indexing of Confluence pages.

    Default: true

    f.crawlPersonalSpaces - boolean

    Enable indexing of personal spaces of Confluence users.

    Default: true

    f.domain - string

    Applicable only when using NTLM authentication with Confluence. This parameter is the domain of the username.

    f.excludedSpaces - array[string]

    Confluence Spaces that should be skipped during the crawl. Use the Confluence 'space name' or 'space key' to identify the spaces.

    f.includeArchivedSpaces - boolean

    If true, archived spaces will be included. This is respected for Confluence versions 5.10 and later; for earlier versions, all archived spaces are included whether this property is true or false.

    Default: true

    f.includePrivateContent - boolean

    If true, all private content is included. This is only respected when security trimming is enabled; if security trimming is disabled, private content is indexed regardless of this setting.

    Default: true

    f.includedSpaces - array[string]

    Required. Confluence Spaces that should be crawled. When using this setting, other spaces will be skipped. Use the Confluence 'space name' or 'space key' to identify the spaces.

    f.indexNonCurrentContent - boolean

    Enable indexing of non-current (older) versions of Confluence content.

    Default: false

    f.indexSpacesAsDocs - boolean

    Create a separate document for each Confluence Space indexed.

    Default: false

    f.pageSize - integer

    Number of records to retrieve per page when making requests to Confluence REST API.

    Default: 200

    f.prefetch - boolean

    Enable prefetching of Confluence content metadata. When enabled, content metadata is fetched and cached in parallel with the main crawl.

    Default: false

    f.sessionTTL - integer

    Time in milliseconds until HTTP session is considered expired and re-login is performed.

    Default: 150000

    f.timeout - integer

    Time in milliseconds to wait for a server response.

    Default: 10000

    f.useHttps - boolean

    Enable to use SSL when connecting to the Confluence server.

    Default: true

    f.verify_access - boolean

    Try to connect to Confluence server with current properties before saving changes to datasource.

    Default: true

    failFastOnStartLinkFailure - boolean

    If true, when Fusion cannot connect to any of the provided start links, the crawl is stopped and an exception is logged.

    Default: true

    fetchDelayMS - integer

    Number of milliseconds to wait between fetch requests. The default is 100. This property can be used to throttle a crawl if necessary.

    Default: 100

    fetchThreads - integer

    The number of threads to use during fetching. The default is 5.

    Default: 5

    forceRefresh - boolean

    Set to true to recrawl all items even if they have not changed since the last crawl.

    Default: false

    forceRefreshClearSignatures - boolean

    If true, signatures will be cleared if force recrawl is enabled.

    Default: true

    includeExtensions - array[string]

    File extensions to be fetched. This will limit this datasource to only these file extensions.

    includeRegexes - array[string]

    Regular expressions for URI patterns to include. This will limit this datasource to only URIs that match the regular expression.

    initial_mapping - Initial field mapping

    Provides mapping of fields before documents are sent to an index pipeline.

    condition - string

    Define a conditional script that must result in true or false. This can be used to determine if the stage should process or not.

    label - string

    A unique label for this stage.

    <= 255 characters

    mappings - array[object]

    List of mapping rules

    Default:

    {"operation":"move","source":"charSet","target":"charSet_s"}
    {"operation":"move","source":"fetchedDate","target":"fetchedDate_dt"}
    {"operation":"move","source":"lastModified","target":"lastModified_dt"}
    {"operation":"move","source":"signature","target":"dedupeSignature_s"}
    {"operation":"move","source":"length","target":"length_l"}
    {"operation":"move","source":"mimeType","target":"mimeType_s"}
    {"operation":"move","source":"parent","target":"parent_s"}
    {"operation":"move","source":"owner","target":"owner_s"}
    {"operation":"move","source":"group","target":"group_s"}

    object attributes:

    • operation (display name: Operation) - string

    • source (display name: Source Field) - string, required

    • target (display name: Target Field) - string

    reservedFieldsMappingAllowed - boolean

    Default: false

    skip - boolean

    Set to true to skip this stage.

    Default: false

    unmapped - Unmapped Fields

    If fields do not match any of the field mapping rules, these rules will apply.

    operation - string

    The type of mapping to perform: move, copy, delete, add, set, or keep.

    Default: copy

    Allowed values: copy, move, delete, set, add, keep

    source - string

    The name of the field to be mapped.

    target - string

    The name of the field to be mapped to.

    reevaluateCrawlDbOnStart - boolean

    If true, existing crawldb entries are reevaluated for validity on startup.

    Default: false

    refreshAll - boolean

    Set to true to always recrawl all items found in the crawldb. When false, no previously-crawled items will be crawled or updated.

    Default: true

    refreshErrors - boolean

    Set to true to recrawl items that failed during the last crawl.

    Default: false

    refreshIDPrefixes - array[string]

    Recrawl all items whose IDs begin with one of these prefixes.

    refreshIDRegexes - array[string]

    Recrawl all items whose IDs match one of these regular expressions.

    refreshOlderThan - integer

    Recrawl items whose last fetched date is more than this number of seconds ago.

    Default: -1

    refreshScript - string

    A JavaScript function ('shouldRefresh()') that customizes which items are recrawled.
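
    A hypothetical refreshScript is sketched below. This page only states that the script must define shouldRefresh(); the argument shown here (the item ID) is an assumption for illustration, not a documented signature.

```javascript
// Hypothetical refreshScript sketch: recrawl only items whose ID looks like
// a blog post. The 'id' parameter is an assumed input, not a documented API.
function shouldRefresh(id) {
  return id.indexOf("/blogpost/") !== -1;
}
```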

    refreshStartLinks - boolean

    Set to true to recrawl items specified in the list of start links.

    Default: false

    retainOutlinks - boolean

    Set to true to store links found during fetching in the crawldb. This increases precision in certain recrawl scenarios, but requires more memory and disk space.

    Default: false

    retryEmit - boolean

    Set to true to retry failed emit batches on a document-by-document basis.

    Default: true

    rewriteLinkScript - string

    A JavaScript function 'rewriteLink(link) { }' that modifies links to documents before they are fetched.
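
    For example, a rewriteLinkScript could redirect fetches from an internal alias to the canonical host. The host names in this sketch are invented placeholders.

```javascript
// Illustrative rewriteLinkScript: return the (possibly modified) link.
// Links that do not match the alias are returned unchanged.
function rewriteLink(link) {
  return link.replace("http://wiki-internal.example.com", "https://wiki.example.com");
}
```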