aliasExpiration - integer
The number of crawls after which an alias will expire. The default is 1 crawl.
Default: 1
batch_incremental_crawling - boolean
When enabled, re-crawls retrieve only the new, modified, and deleted files from the Box file system. This feature works only if the user is an enterprise admin.
Default: true
chunkSize - integer
The number of items to batch for each round of fetching. A higher value can make crawling faster, but also increases memory usage. The default is 1.
Default: 1
crawlDBType - string
The type of crawl database to use, in-memory or on-disk.
Default: in-memory
Allowed values: in-memory, on-disk
db - Connector DB
Type and properties for a ConnectorDB implementation to use with this datasource.
aliases - boolean
Keep track of the original URIs that resolved to the current URI. This negatively impacts performance and increases the size of the database.
Default: false
inlinks - boolean
Keep track of incoming links. This negatively impacts performance and increases the size of the database.
Default: false
inv_aliases - boolean
Keep track of the target URIs that the current URI resolves to. This negatively impacts performance and increases the size of the database.
Default: false
type - string
Fully qualified class name of the ConnectorDb implementation.
>= 1 characters
Default: com.lucidworks.connectors.db.impl.MapDbConnectorDb
dedupe - boolean
If true, documents will be deduplicated. Deduplication can be based on an analysis of the content, on the content of a specific field, or on a JavaScript function. If neither a field nor a script is defined, content analysis is used.
Default: false
dedupeField - string
The field to use for dedupe. Define either a field or a dedupe script; otherwise, the full raw content of each document is used.
dedupeSaveSignature - boolean
If true, the signature used for dedupe will be stored in a 'dedupeSignature_s' field. Note that this may cause errors about 'immense terms' in that field.
Default: false
dedupeScript - string
Custom JavaScript to dedupe documents. The script must define a 'genSignature(content){}' function, but can use any combination of document fields. The function must return a string.
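A minimal sketch of such a script; the field names ('title', 'body') and the shape of the 'content' argument are assumptions for illustration, not part of the connector contract:

```javascript
// Sketch of a dedupe script. Assumes 'content' exposes document
// fields by name; the field names used here are hypothetical.
function genSignature(content) {
  // Concatenate the fields that define "sameness" for your documents.
  var raw = (content.title || '') + '|' + (content.body || '');
  // Return any stable string; here, a simple normalized form.
  return raw.toLowerCase().replace(/\s+/g, ' ').trim();
}
```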
delete - boolean
Set to true to remove documents from the index when they can no longer be accessed as unique documents.
Default: true
deleteErrorsAfter - integer
Number of fetch failures to tolerate before removing a document from the index. The default of -1 means documents are never removed because of fetch failures.
Default: -1
depth - integer
Number of levels in a directory or site tree to descend for documents.
Default: -1
diagnosticMode - boolean
Enable to print more detailed information to the logs about each request.
Default: false
emitThreads - integer
The number of threads used to send documents from the connector to the index pipeline. The default is 5.
Default: 5
enable_security_trimming - Enable Security Trimming
cache_expiration_time - integer
The expiration time, in seconds, for cached security trimming data.
Default: 7200
isSecurityGroupTrimming - boolean
Whether security trimming for groups is included.
Default: true
security_filter_cache - boolean
Enable the cache of document access control rules.
Default: true
f.addFileMetadata - boolean
Set to true to add information about documents found in the filesystem to the document, such as document owner, group, or ACL permissions.
Default: true
f.fs.apiKey - string
The Box API Key.
f.fs.apiSecret - string
The Box API Secret.
f.fs.appUserId - string
(JWT only) The JWT App User ID with access to crawl.
f.fs.childrenPageSize - integer
The number of results to retrieve per call to the Box API's children() methods. Range: 1-1000. The default is the maximum, 1000.
Default: 1000
f.fs.connectTimeoutMs - integer
The Box API connection timeout, in milliseconds.
Default: 240000
f.fs.distributedCrawlCollectionName - string
The collection name of the Distributed Crawl Collection. If you do not specify one, 'system_box_distributed_crawl' is used.
f.fs.distributedCrawlDatasourceIndex - integer
Distributed job index. Zero-based index of the distributed job that this datasource represents. Must be in the range [0, numDistributedDatasources - 1], and each datasource must have a unique index. For example, if a distributed crawl has 3 jobs, the index can be 0, 1, or 2. Once the pre-fetch index is created, this index identifies the chunk of file IDs that this node is responsible for indexing from the Distributed Crawl Collection.
Default: 0
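As an illustrative sketch of the partitioning arithmetic (the hash-based assignment below is an assumption for illustration, not the connector's documented chunking algorithm), each job claims the file IDs that map to its own index:

```javascript
// Illustrative only: one way a file ID could be assigned to a job index.
// The real chunking of the pre-fetch index may differ.
function jobIndexFor(fileId, numDistributedDatasources) {
  var hash = 0;
  for (var i = 0; i < fileId.length; i++) {
    hash = (hash * 31 + fileId.charCodeAt(i)) >>> 0; // simple string hash
  }
  return hash % numDistributedDatasources;
}

// A datasource with f.fs.distributedCrawlDatasourceIndex = 1 (of 3 jobs)
// would only index files where jobIndexFor(id, 3) === 1.
```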
f.fs.excludedExtensions - string
Comma-separated list of extensions. Box files or folders whose filenames end with any of these extensions will not be crawled. Case is ignored. Example: .txt,.xls,.DS_Store
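A minimal sketch of the matching semantics described above (the split-and-compare logic is an assumption for illustration):

```javascript
// Sketch of the case-insensitive suffix test described above.
function isExcluded(filename, excludedExtensions) {
  var name = filename.toLowerCase();
  return excludedExtensions.split(',').some(function (ext) {
    return name.endsWith(ext.trim().toLowerCase());
  });
}

// isExcluded('Notes.TXT', '.txt,.xls,.DS_Store') === true
```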
f.fs.generatedSharedLinksAccess - string
Only applicable when Generate Shared Links when Absent is selected. Sets the shared link access level. Can be left blank (the default) or set to open, company, or collaborators.
f.fs.generatedSharedLinksExpireDays - integer
Only applicable when Generate Shared Links when Absent is selected. Controls how many days the generated shared links stay valid. 0 for unlimited.
Default: 0
f.fs.isGenerateSharedLinkPermissionCanDownload - boolean
Only applicable when Generate Shared Links when Absent is selected. Whether the "can download" permission is granted on the generated Box shared link.
f.fs.isGenerateSharedLinkPermissionCanPreview - boolean
Only applicable when Generate Shared Links when Absent is selected. Whether the "can preview" permission is granted on the generated Box shared link.
f.fs.isGenerateSharedLinkWhenAbsent - boolean
If this is selected, the crawler automatically creates a shared link for any non-shared documents it finds while crawling. Note: this changes those documents to 'Shared' in your Box view. Use with caution.
f.fs.max_request_attempts - integer
The number of times to retry when the Box API returns an error while fetching a file, before giving up.
Default: 20
f.fs.nestedFolderDepth - integer
Maximum depth of nested folders that will be crawled. Range: [1, int-max]. Default is int-max.
Default: 2147483647
f.fs.numDistributedDatasources - integer
Number of separate datasource jobs running in this distributed crawl; in other words, how many datasources are part of the crawl. This value is needed to distribute work evenly among the jobs.
Default: 1
f.fs.numPreFetchIndexCreationThreads - integer
The number of concurrent threads that create the Distributed Pre-fetch Index. The default is 5.
Default: 5
f.fs.partitionBucketCount - integer
Number of partition buckets to be used during the full crawl. Default is 5000.
Default: 5000
f.fs.privateKeyFile - string
(JWT only) Path to the private key file.
f.fs.privateKeyPassword - string
(JWT only) The password you entered for the private key file.
f.fs.proxyHost - string
The address to use when connecting through the proxy.
f.fs.proxyPort - integer
The port to use when connecting through the proxy.
f.fs.proxyType - string
Type of proxy to use, if any. Allowed values are 'HTTP' and 'SOCKS'. Leave empty for no proxy.
f.fs.publicKeyId - string
(JWT only) The public key prefix from the box.com public keys.
f.fs.readTimeoutMs - integer
The Box API read timeout, in milliseconds.
Default: 240000
f.fs.refreshToken - string
OAuth Refresh token (Not needed for JWT).
f.fs.refreshTokenFile - string
File that stores the refresh token for the next session.
Default: refresh_token.txt
f.fs.user_excludes - array[string]
In addition to the user filter term, you can optionally specify regular expressions matching user names that should not be crawled.
f.fs.user_filter_term - string
If you specify a user filter term, a user's files are crawled only if their login starts with that term. This can be a comma-separated list of terms; for example, a,b,c,v matches all Box users whose login starts with a, b, c, or v. Leave this empty to include all users.
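A minimal sketch of the prefix-filter semantics described above (the exact matching logic inside the connector is an assumption here):

```javascript
// Sketch of the user filter described above (assumed semantics).
function matchesUserFilter(login, filterTerm) {
  if (!filterTerm) return true; // empty filter includes all users
  return filterTerm.split(',').some(function (term) {
    return login.startsWith(term.trim());
  });
}

// matchesUserFilter('bob@example.com', 'a,b,c,v') === true
```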
f.index_items_discarded - boolean
Enable to index discarded document metadata.
Default: false
f.maxSizeBytes - integer
Maximum size, in bytes, of documents to fetch, or -1 for unlimited file size.
Default: 4194304
f.minSizeBytes - integer
Minimum size, in bytes, of documents to fetch.
Default: 0
failFastOnStartLinkFailure - boolean
If true, when Fusion cannot connect to any of the provided start links, the crawl is stopped and an exception logged.
Default: true
fetchDelayMS - integer
Number of milliseconds to wait between fetch requests. The default is 0. This property can be used to throttle a crawl if necessary.
Default: 0
fetchThreads - integer
The number of threads to use during fetching. The default is 5.
Default: 5
forceRefresh - boolean
Set to true to recrawl all items even if they have not changed since the last crawl.
Default: false
indexCrawlDBToSolr - boolean
EXPERIMENTAL: Set to true to index the crawl-database into a 'crawldb_<datasource-ID>' collection in Solr.
Default: false
initial_mapping - Initial field mapping
Provides mapping of fields before documents are sent to an index pipeline.
condition - string
Define a conditional script that must evaluate to true or false. This determines whether the stage runs.
label - string
A unique label for this stage.
<= 255 characters
mappings - array[object]
List of mapping rules
Default: {"operation":"move","source":"charSet","target":"charSet_s"}{"operation":"move","source":"fetchedDate","target":"fetchedDate_dt"}{"operation":"move","source":"lastModified","target":"lastModified_dt"}{"operation":"move","source":"signature","target":"dedupeSignature_s"}{"operation":"move","source":"contentSignature","target":"signature_s"}{"operation":"move","source":"length","target":"length_l"}{"operation":"move","source":"mimeType","target":"mimeType_s"}{"operation":"move","source":"parent","target":"parent_s"}{"operation":"move","source":"owner","target":"owner_s"}{"operation":"move","source":"group","target":"group_s"}
object attributes:
operation - string
Display name: Operation
source - string (required)
Display name: Source Field
target - string
Display name: Target Field
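For example, a mappings value with two rules, in the same shape as the defaults above (the field names here are illustrative):

```json
[
  {"operation": "move", "source": "lastModified", "target": "lastModified_dt"},
  {"operation": "copy", "source": "owner", "target": "owner_s"}
]
```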
reservedFieldsMappingAllowed - boolean
Default: false
skip - boolean
Set to true to skip this stage.
Default: false
unmapped - Unmapped Fields
If fields do not match any of the field mapping rules, these rules will apply.
operation - string
The type of mapping to perform: move, copy, delete, add, set, or keep.
Default: copy
Allowed values: copy, move, delete, set, add, keep
source - string
The name of the field to be mapped.
target - string
The name of the field to be mapped to.
maxItems - integer
Maximum number of documents to fetch. The default (-1) means no limit.
Default: -1
refreshAll - boolean
Set to true to always recrawl all items found in the crawldb.
Default: false
refreshErrors - boolean
Set to true to recrawl items that failed during the last crawl.
Default: false
refreshIDPrefixes - array[string]
One or more ID prefixes; items whose IDs begin with any of these values are recrawled.
refreshIDRegexes - array[string]
One or more regular expressions; items whose IDs match any of these patterns are recrawled.
refreshOlderThan - integer
Recrawl items whose last fetched date is more than this number of seconds ago.
Default: -1
refreshScript - string
A JavaScript function, 'shouldRefresh()', to customize which items are recrawled.
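A minimal sketch; the argument passed to the function and the ID format are assumptions for illustration:

```javascript
// Sketch of a refresh script (the argument's shape is assumed).
function shouldRefresh(id) {
  // Recrawl anything whose ID mentions a hypothetical 'reports' folder.
  return id && id.indexOf('/reports/') !== -1;
}
```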
refreshStartLinks - boolean
Set to true to recrawl items specified in the list of start links.
Default: false
retainOutlinks - boolean
Set to true to store links found during fetching in the crawldb. This increases precision in certain recrawl scenarios, but requires more memory and disk space.
Default: false
retryEmit - boolean
Set to true to retry emit batch failures on a document-by-document basis.
Default: true
startLinks - array[string]
One or more starting URIs for this datasource.