Mass Import from a bucket

aidbox.bulk/load-from-bucket

It loads data from a set of .ndjson files in an AWS S3 bucket directly into the Aidbox database with maximum performance.

File content and naming requirements

  1. A file must consist of resources of the same type.
  2. The file name must start with the name of the resource type; an optional postfix may follow, and the .ndjson extension is required. Files can be placed in subdirectories of any depth. Files with a wrong path structure are ignored.

Valid file structure example:

```
fhir/1/Patient.ndjson
fhir/1/patient-01.ndjson
Observation.ndjson
```

Invalid file structure example:

```
import.ndjson
01-patient.ndjson
fhir/Patient
```
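The naming rules can be sketched as a small validation helper. This is illustrative only, not part of Aidbox; the set of resource types is an assumption for the example, and matching is treated as case-insensitive, as the valid `patient-01.ndjson` example suggests.

```python
import posixpath

# Hypothetical helper illustrating the naming rules above; not an Aidbox API.
# The resource types listed here are an assumption for this example.
RESOURCE_TYPES = {"Patient", "Observation"}

def is_valid_import_path(path: str) -> bool:
    """A file qualifies when its base name starts with a resource type
    name (an arbitrary postfix may follow) and carries the required
    .ndjson extension; any directory depth is allowed."""
    name = posixpath.basename(path)
    if not name.endswith(".ndjson"):
        return False
    stem = name[: -len(".ndjson")].lower()
    return any(stem.startswith(t.lower()) for t in RESOURCE_TYPES)
```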

Parameters

Object with the following structure:
  • bucket * defines your bucket connection string in the format s3://<bucket-name>
  • thread-num defines how many threads will process the import. The default is 4.
  • account credentials:
    • access-key-id * AWS key ID
    • secret-access-key * AWS secret key
    • region * AWS bucket region
  • disable-idx? the default is false. Drops all indexes on the tables of the resource types being loaded. The indexes are restored at the end of a successful import, and all information about dropped indexes is stored in DisabledIndex resources.
  • drop-primary-key? the default is false. The same as the previous parameter, but drops the primary key constraint on the resource tables. This disables all duplicate checks for imported resources.
  • upsert? the default is false. When false, importing a file that violates the id uniqueness constraint fails with an error; when true, records in the database are overridden by records from the import. Even when upsert? is true, a single import file still may not contain more than one record with the same id. Setting this option to true decreases performance.
  • scheduler possible values: optimal (the default) and by-last-modified. Establishes the order in which files are processed. optimal provides the best performance; by-last-modified should be used with thread-num: 1 to guarantee a stable order of file processing.
  • prefixes an array of prefixes specifying which files should be processed. Example: with the value ["fhir/1/", "fhir/2/Patient"], only files from the fhir/1 directory and Patient files from the fhir/2 directory will be processed.

Result

Returns the string "Upload started".

Error

Returns an error message.
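The prefixes parameter behaves like a starts-with match on the object path. A minimal sketch of that selection rule, assuming plain string-prefix semantics (illustrative, not Aidbox's actual implementation):

```python
def matches_prefixes(path: str, prefixes: list[str]) -> bool:
    """A file is selected for processing when its path starts with
    any of the configured prefixes (assumed semantics)."""
    return any(path.startswith(p) for p in prefixes)

# With ["fhir/1/", "fhir/2/Patient"], files under fhir/1/ and Patient
# files under fhir/2/ are selected; other fhir/2/ files are not.
```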

Example

Request:

```yaml
POST /rpc
content-type: text/yaml
accept: text/yaml

method: aidbox.bulk/load-from-bucket
params:
  bucket: s3://your-bucket-id
  thread-num: 4
  account:
    access-key-id: your-key-id
    secret-access-key: your-secret-access-key
    region: us-east-1
```

Response:

Status: 200

```yaml
result:
  message: "Upload started"
```

Loader File

For each file being imported via the load-from-bucket method, Aidbox creates a LoaderFile resource. To find out how many resources were imported from a file, check its loaded field.

Loader File Example

```json
{
  "end": "2022-04-11T14:50:27.893Z",
  "file": "/tmp/patient.ndjson.gz",
  "size": 100,
  "type": "Patient",
  "bucket": "local",
  "loaded": 20,
  "status": "done"
}
```

How to reload a file one more time

On launch, aidbox.bulk/load-from-bucket checks whether files from the bucket have already been planned for import and decides what to do:
  • If an ndjson file has a related LoaderFile resource, the loader skips that file.
  • If there is no related LoaderFile resource, Aidbox creates one and puts the file into the import queue.
To import a file one more time, delete the related LoaderFile resource and relaunch aidbox.bulk/load-from-bucket.
Files are always processed as a whole; the loader does not support partial re-import.
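The skip-or-queue decision described above can be sketched as follows. This is illustrative pseudologic, not Aidbox internals; the dictionaries stand in for LoaderFile resources with their file field.

```python
def plan_import(bucket_files: list[str], loader_files: list[dict]) -> list[str]:
    """Return the files that will actually be queued: a file with an
    existing LoaderFile record is skipped; each remaining file is
    queued (and gets a LoaderFile record created for it)."""
    already_tracked = {lf["file"] for lf in loader_files}
    return [f for f in bucket_files if f not in already_tracked]
```

Deleting a file's LoaderFile record removes it from `already_tracked`, which is why the file is picked up again on the next launch.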