Legacy Product

Fusion 5.10

    Box.com V1 Connector Configuration Reference

    The Box connector retrieves data from a Box.com cloud-based data repository. To fetch content from multiple Box users, you must create a Box app that uses OAuth 2.0 with JWT server authentication. For limited testing using a single user account, you can create a Box app that uses Standard OAuth 2.0 authentication.

    Deprecation and removal notice

    This connector is deprecated as of Fusion 5.2 and is removed or expected to be removed as of Fusion 5.3. Use the Box V2 connector instead.

    For more information about deprecations and removals, including possible alternatives, see Deprecations and Removals.

    Configuration

    When entering configuration values in the UI, use unescaped characters, such as \t for the tab character. When entering configuration values in the API, use escaped characters, such as \\t for the tab character.

    Connector for Box.com. This connector can work in one of two ways: 1) it can crawl a single user's files (and files shared with that user) using standard OAuth, or 2) it can use a JWT service account to crawl all users in an enterprise, using Box.com's "As-User" header to act as each user. For large distributed accounts, the JWT service account method is recommended; otherwise you must explicitly grant a single user access to every file you want to crawl.
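
    As a minimal sketch only, the example below shows how each mode might map onto the properties described later in this reference, written as Python dictionaries. All values are placeholders, and the placement of these keys inside the datasource's properties object is assumed rather than stated here.

        # Minimal sketches of the "properties" portion of a Box datasource
        # configuration for each mode; all values are placeholders.

        # 1) Standard OAuth: crawls a single user's files (and files shared with that user).
        oauth_properties = {
            "startLinks": ["0"],                        # "0" crawls the entire Box account
            "f.fs.apiKey": "YOUR_CLIENT_ID",            # placeholder Box API key
            "f.fs.apiSecret": "YOUR_CLIENT_SECRET",     # placeholder Box API secret
            "f.fs.refreshToken": "YOUR_REFRESH_TOKEN",  # OAuth only; not needed for JWT
        }

        # 2) JWT service account: crawls all enterprise users via Box's "As-User" header.
        jwt_properties = {
            "startLinks": ["0"],
            "f.fs.apiKey": "YOUR_CLIENT_ID",
            "f.fs.apiSecret": "YOUR_CLIENT_SECRET",
            "f.fs.appUserId": "APP_USER_ID",                        # JWT App User ID
            "f.fs.publicKeyId": "PUBLIC_KEY_PREFIX",                # public key prefix from Box
            "f.fs.privateKeyBase64": "BASE64_ENCODED_PRIVATE_KEY",  # Base64 of the key file content
            "f.fs.privateKeyPassword": "PRIVATE_KEY_PASSPHRASE",
        }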

    id - string (required)

    Unique name for this datasource.

    >= 1 characters

    Match pattern: ^[a-zA-Z0-9_-]+$

    pipeline - string (required)

    Name of an existing index pipeline for processing documents.

    >= 1 characters

    description - string

    Optional description for this datasource.

    parserId - string

    Parser used when parsing raw content. For some connectors, an advanced setting is available to retry parsing if an error occurs.

    properties - Properties

    Datasource configuration properties

    db - Connector DB

    Type and properties for a ConnectorDB implementation to use with this datasource.

    type - string

    Fully qualified class name of ConnectorDb implementation.

    >= 1 characters

    Default: com.lucidworks.connectors.db.impl.MapDbConnectorDb

    inlinks - boolean

    Keep track of incoming links. This negatively impacts performance and the size of the crawl database.

    Default: false

    aliases - boolean

    Keep track of original URIs that resolved to the current URI. This negatively impacts performance and the size of the crawl database.

    Default: false

    inv_aliases - boolean

    Keep track of target URIs that the current URI resolves to. This negatively impacts performance and the size of the crawl database.

    Default: false

    startLinks - array[string]

    The IDs of the folders or files to crawl. For example, if the URL to your folder is https://app.box.com/folder/12345, enter 12345. To crawl the entire Box account, enter 0.

    dedupe - boolean

    If true, documents will be deduplicated. Deduplication can be done based on an analysis of the content, on the content of a specific field, or by a JavaScript function. If neither a field nor a script is defined, content analysis will be used.

    Default: false

    dedupeField - string

    Field to be used for dedupe. Define either a field or a dedupe script; otherwise, the full raw content of each document will be used.

    dedupeScript - string

    Custom JavaScript to dedupe documents. The script must define a 'genSignature(content){}' function, but can use any combination of document fields. The function must return a string.
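
    As a rough sketch only, the example below shows how a dedupe configuration might be assembled, with genSignature supplied as a string. The field accessors used inside the script are assumptions; this reference only states that the function takes the content and must return a string.

        # Hypothetical dedupe settings; the field accessors used inside the script
        # are assumptions, not a documented API.
        dedupe_properties = {
            "dedupe": True,
            "dedupeSaveSignature": True,  # store the signature in 'dedupeSignature_s'
            "dedupeScript": (
                "function genSignature(content) {"
                "  /* assumed accessors; build a string from any document fields */"
                "  return content.title + '|' + content.length;"
                "}"
            ),
        }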

    dedupeSaveSignature - boolean

    If true, the signature used for dedupe will be stored in a 'dedupeSignature_s' field. Note that this may cause errors about 'immense terms' in that field.

    Default: false

    delete - boolean

    Set to true to remove documents from the index when they can no longer be accessed as unique documents.

    Default: true

    deleteErrorsAfter - integer

    Number of fetch failures to tolerate before removing a document from the index. The default of -1 means documents are never removed because of fetch failures.

    Default: -1

    fetchThreads - integer

    The number of threads to use during fetching. The default is 5.

    Default: 5

    emitThreads - integer

    The number of threads used to send documents from the connector to the index pipeline. The default is 5.

    Default: 5

    chunkSize - integer

    The number of items to batch for each round of fetching. A higher value can make crawling faster, but memory usage is also increased. The default is 1.

    Default: 1

    fetchDelayMS - integer

    Number of milliseconds to wait between fetch requests. The default is 0. This property can be used to throttle a crawl if necessary.

    Default: 0
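
    To illustrate how these throughput settings might be combined, here is a sketch with example values; they are not recommendations.

        # Illustrative crawl throughput settings; tune for your account size and memory.
        throughput_properties = {
            "fetchThreads": 10,   # more fetch threads than the default of 5
            "emitThreads": 5,     # threads sending documents to the index pipeline
            "chunkSize": 10,      # larger batches fetch faster but use more memory
            "fetchDelayMS": 100,  # wait 100 ms between fetch requests to throttle the crawl
        }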

    refreshAll - boolean

    Set to true to always recrawl all items found in the crawldb.

    Default: false

    refreshStartLinks - boolean

    Set to true to recrawl items specified in the list of start links.

    Default: false

    refreshErrors - boolean

    Set to true to recrawl items that failed during the last crawl.

    Default: false

    refreshOlderThan - integer

    Items whose last fetched date is more than this number of seconds ago will be recrawled.

    Default: -1

    refreshIDPrefixes - array[string]

    Prefixes used to recrawl items; any item whose ID begins with one of these values will be recrawled.

    refreshIDRegexes - array[string]

    Regular expressions used to recrawl items; any item whose ID matches one of these patterns will be recrawled.

    refreshScript - string

    A JavaScript function ('shouldRefresh()') to customize which items are recrawled.
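
    A sketch combining several of these recrawl controls; all values are examples only.

        # Example recrawl controls (placeholder values).
        refresh_properties = {
            "refreshStartLinks": True,       # recrawl the configured start links
            "refreshErrors": True,           # recrawl items that failed last time
            "refreshOlderThan": 86400,       # recrawl items last fetched over a day ago
            "refreshIDPrefixes": ["12345"],  # hypothetical folder ID prefix
        }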

    forceRefresh - boolean

    Set to true to recrawl all items even if they have not changed since the last crawl.

    Default: false

    forceRefreshClearSignatures - boolean

    If true, signatures will be cleared if force recrawl is enabled.

    Default: true

    retryEmit - boolean

    Set to true to retry failed emit batches on a document-by-document basis.

    Default: true

    depth - integer

    Number of levels in a directory or site tree to descend for documents.

    Default: -1

    maxItems - integer

    Maximum number of documents to fetch. The default (-1) means no limit.

    Default: -1

    failFastOnStartLinkFailure - boolean

    If true, when Fusion cannot connect to any of the provided start links, the crawl is stopped and an exception logged.

    Default: true

    crawlDBType - string

    The type of crawl database to use, in-memory or on-disk.

    Default: on-disk

    Allowed values: in-memory, on-disk

    commitAfterItems - integer

    Commit the crawlDB to disk after this many items have been received. A smaller number results in a slower crawl because commits to disk are more frequent; conversely, a larger number means a job resumed after a crash will need to recrawl more records.

    Default: 10000

    initial_mapping - Initial field mapping

    Provides mapping of fields before documents are sent to an index pipeline.

    skip - boolean

    Set to true to skip this stage.

    Default: false

    label - string

    A unique label for this stage.

    <= 255 characters

    condition - string

    Define a conditional script that must evaluate to true or false. This can be used to determine whether the stage should run.

    reservedFieldsMappingAllowed - boolean

    Default: false

    mappings - array[object]

    List of mapping rules

    Default: {"source":"charSet","target":"charSet_s","operation":"move"}{"source":"fetchedDate","target":"fetchedDate_dt","operation":"move"}{"source":"lastModified","target":"lastModified_dt","operation":"move"}{"source":"signature","target":"dedupeSignature_s","operation":"move"}{"source":"length","target":"length_l","operation":"move"}{"source":"mimeType","target":"mimeType_s","operation":"move"}{"source":"parent","target":"parent_s","operation":"move"}{"source":"owner","target":"owner_s","operation":"move"}{"source":"group","target":"group_s","operation":"move"}

    object attributes:
     source (required):
      display name: Source Field
      type: string
     target:
      display name: Target Field
      type: string
     operation:
      display name: Operation
      type: string
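
    As an example, a sketch of one additional mapping rule in the same format as the defaults above; the target field name is hypothetical.

        # Hypothetical additional mapping rule: also copy 'owner' into a custom field.
        extra_mapping_rule = {
            "source": "owner",        # source field produced by the connector
            "target": "box_owner_s",  # hypothetical target field name
            "operation": "copy",      # keep the original 'owner' field as well
        }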

    unmapped - Unmapped Fields

    If fields do not match any of the field mapping rules, these rules will apply.

    source - string

    The name of the field to be mapped.

    target - string

    The name of the field to be mapped to.

    operation - string

    The type of mapping to perform: move, copy, delete, add, set, or keep.

    Default: copy

    Allowed values: copy, move, delete, set, add, keep

    f.maxSizeBytes - integer

    Maximum size (in bytes) of documents to fetch or -1 for unlimited file size.

    Default: 4194304

    f.minSizeBytes - integer

    Minimum size, in bytes, of documents to fetch.

    Default: 0

    f.addFileMetadata - boolean

    Set to true to add information about documents found in the filesystem to the document, such as document owner, group, or ACL permissions.

    Default: true

    f.index_items_discarded - boolean

    Enable to index metadata from discarded documents.

    Default: false

    enable_security_trimming - Enable Security Trimming

    isSecurityGroupTrimming - boolean

    Whether security trimming for groups is included.

    Default: true

    security_filter_cache - boolean

    Enable caching of document access control rules.

    Default: true

    cache_expiration_time - integer

    Expiration time for the security filter cache.

    Default: 7200

    retainOutlinks - boolean

    Set to true for links found during fetching to be stored in the crawldb. This increases precision in certain recrawl scenarios, but requires more memory and disk space.

    Default: false

    aliasExpiration - integer

    The number of crawls after which an alias will expire. The default is 1 crawl.

    Default: 1

    f.fs.apiKey - string

    The Box API Key.

    f.fs.apiSecret - string

    The Box API Secret.

    f.fs.refreshToken - string

    OAuth Refresh token (Not needed for JWT).

    f.fs.refreshTokenFile - string

    File that stores the refresh token for the next session.

    Default: refresh_token.txt

    f.fs.appUserId - string

    (JWT only) The JWT App User ID with access to crawl.

    f.fs.publicKeyId - string

    (JWT only) The public key prefix from the box.com public keys.

    f.fs.privateKeyBase64 - string

    (JWT only) The content of the private key. To get this value, open your key file and convert its content (including the first and last lines) to a Base64 string.

    f.fs.privateKeyPassword - string

    (JWT only) The password you entered for the private key file.

    f.fs.isGenerateSharedLinkWhenAbsent - boolean

    If this is selected, the crawler will automatically create a shared link for any non-shared documents it finds while crawling. Note: This will change all documents to 'Shared' in your Box view. Use with caution.

    f.fs.generatedSharedLinksExpireDays - integer

    Applies only when Generate Shared Links when Absent is selected. Controls how many days generated shared links remain valid. Use 0 for unlimited.

    Default: 0

    f.fs.generatedSharedLinksAccess - string

    Applies only when Generate Shared Links when Absent is selected. Sets the shared link access level. Can be left blank (the default) or set to open, company, or collaborators.

    f.fs.isGenerateSharedLinkPermissionCanDownload - boolean

    Applies only when Generate Shared Links when Absent is selected. Determines whether the "can download" permission is granted on the generated shared link.

    f.fs.isGenerateSharedLinkPermissionCanPreview - boolean

    Applies only when Generate Shared Links when Absent is selected. Determines whether the "can preview" permission is granted on the generated shared link.
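
    A sketch combining the shared-link options above; the expiration and access level shown are examples only.

        # Illustrative shared-link generation settings.
        shared_link_properties = {
            "f.fs.isGenerateSharedLinkWhenAbsent": True,   # note: marks items as 'Shared' in Box
            "f.fs.generatedSharedLinksExpireDays": 30,     # example expiry; 0 = never expire
            "f.fs.generatedSharedLinksAccess": "company",  # blank, open, company, or collaborators
            "f.fs.isGenerateSharedLinkPermissionCanDownload": True,
            "f.fs.isGenerateSharedLinkPermissionCanPreview": True,
        }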

    f.fs.max_request_attempts - integer

    The number of times to retry a request when the Box API returns an error before giving up on a file.

    Default: 10

    f.fs.user_filter_term - string

    If you specify a user filter term, a user's files are crawled only if that user's login starts with the filter term. This can be a comma-separated list of filter terms. For example, a,b,c,v matches all Box users whose login starts with a, b, c, or v. Leave this value empty to crawl all users.

    f.fs.user_excludes - array[string]

    In addition to the user filter, you can optionally specify regular expressions matching user names that should not be crawled.
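
    For illustration, a sketch combining the user filter and excludes; the logins and pattern are hypothetical.

        # Hypothetical user filtering: crawl logins starting with a, b, or c,
        # except logins matching the exclude pattern.
        user_filter_properties = {
            "f.fs.user_filter_term": "a,b,c",
            "f.fs.user_excludes": [".*@contractor\\.example\\.com"],
        }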

    f.fs.proxyType - string

    Type of proxy to use, if any. Allowed values are 'HTTP' and 'SOCKS'. Leave empty for no proxy.

    f.fs.proxyHost - string

    The address to use when connecting through the proxy.

    f.fs.proxyPort - integer

    The port to use when connecting through the proxy.
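
    A sketch of the proxy settings, assuming a hypothetical HTTP proxy host and port.

        # Hypothetical HTTP proxy settings.
        proxy_properties = {
            "f.fs.proxyType": "HTTP",               # 'HTTP' or 'SOCKS'; leave empty for no proxy
            "f.fs.proxyHost": "proxy.example.com",  # hypothetical proxy address
            "f.fs.proxyPort": 3128,                 # hypothetical proxy port
        }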

    f.fs.distributedCrawlCollectionName - string

    The collection name of the Distributed Crawl Collection. If you do not specify one, it will use 'system_box_distributed_crawl'.

    f.fs.childrenPageSize - integer

    The number of results to request from the Box.com API's children() methods. The range is 1-1000; the default is the maximum, 1000.

    Default: 1000

    f.fs.batchSize - integer

    The number of requests to wrap into a single Box batch request.

    >= 1

    <= 10

    exclusiveMinimum: false

    exclusiveMaximum: false

    Default: 10

    f.fs.readTimeoutMs - integer

    The Box API read timeout in milliseconds.

    Default: 240000

    f.fs.connectTimeoutMs - integer

    The Box API connection timeout in milliseconds.

    Default: 240000

    f.fs.retrievalTimeoutMs - integer

    Timeout, in milliseconds, when taking items from the producer/consumer queues.

    Default: 1000

    f.fs.nestedFolderDepth - integer

    Maximum depth of nested folders that will be crawled. The range is 1 to the maximum integer value; the default is the maximum.

    Default: 2147483647

    f.fs.partitionBucketCount - integer

    Number of partition buckets to be used during the full crawl. Default is 5000.

    Default: 5000

    f.fs.numDistributedDatasources - integer

    The number of separate datasource jobs running in this distributed crawl, that is, how many datasources are part of the crawl. This value is needed to distribute work evenly among the jobs.

    Default: 1

    f.fs.distributedCrawlDatasourceIndex - integer

    The zero-based index of the distributed job that this datasource represents. Must be in the range [0, numDistributedDatasources - 1]. For example, if a distributed crawl has 3 jobs, the index can be 0, 1, or 2. Each datasource must have a unique index. Once the pre-fetch index is created, this value identifies the chunk of file IDs from the Distributed Crawl Collection that this node is responsible for indexing.

    Default: 0
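
    For illustration, a sketch of the distributed-crawl properties for the second of three coordinated datasource jobs; the collection name shown is the documented default.

        # Illustrative settings for job 2 of 3 in a distributed crawl.
        distributed_properties = {
            "f.fs.distributedCrawlCollectionName": "system_box_distributed_crawl",  # documented default
            "f.fs.numDistributedDatasources": 3,        # three datasource jobs share this crawl
            "f.fs.distributedCrawlDatasourceIndex": 1,  # zero-based; this is the second job
        }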

    f.fs.numPreFetchIndexCreationThreads - integer

    The number of concurrent threads that will create the Distributed Pre-fetch Index.

    Default: 5

    f.fs.numSolrEmitterThreads - integer

    The number of Solr emitter threads. Default: 4

    Default: 4

    f.fs.excludedExtensions - string

    Comma-separated list of extensions. Box files or folders whose filenames end with any of these extensions will not be crawled. Case is ignored. For example: .txt,.xls,.DS_Store

    batch_incremental_crawling - boolean

    When enabled, the recrawl process retrieves only new, modified, and deleted files from the Box file system. This feature only works if the user is an enterprise admin user.

    Default: true

    diagnosticMode - boolean

    Enable to print more detailed information to the logs about each request.

    Default: false