Before you can use cloud-hosted data with the Encord platform, you must configure your cloud storage to work with Encord. Once the integration between Encord and your cloud storage is complete, you can use your data in Encord.

There are two parts to setting up a GCP integration in Encord:

  1. Setting up a GCP bucket on the Google Cloud Platform so that it can be integrated with Encord. This includes setting up a CORS configuration for your bucket (a sample configuration is sketched below).

  2. Creating a GCP integration on the Encord platform.

See our GCP integration documentation for a detailed guide to setting up an integration.
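For reference, a GCS CORS configuration is a JSON file that you apply to your bucket. The following is a minimal, illustrative sketch; the exact origins, methods, and response headers Encord requires are listed in the GCP integration documentation.

[
  {
    "origin": ["https://app.encord.com"],
    "method": ["GET"],
    "responseHeader": ["Content-Type"],
    "maxAgeSeconds": 3600
  }
]

You can apply the configuration to your bucket with gsutil cors set cors.json gs://<your-bucket>.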

Step 1: Set Up GCP Bucket

Step 2: Create Encord Integration

Once your GCP storage is set up and configured, you are ready to create the integration in the Encord platform.

In the Integrations section of the Encord platform, click + New integration to create a new integration.

Create the integration by selecting GCP at the top of the chooser. Enter a name for the integration, then enter the name of the bucket you want to make available in the second dropdown of the GCP integration window.

Optionally, check the Strict client-only access box if you want Encord to sign URLs but refrain from downloading any media files onto Encord servers. Note that server-side media features are not available when this setting is enabled. Read more about this feature here.

Click Create to create the GCP integration.

Step 3: Create Metadata Schema

If you are not using Index or Active, you do not need to create a Custom Metadata Schema, because you will not be using custom metadata.

Before importing your custom metadata to Encord, we recommend that you import a metadata schema. Encord uses metadata schemas to validate custom metadata uploaded to Encord and to instruct Index and Active how to display your metadata.

Benefits of Using a Metadata Schema

Using a metadata schema provides several benefits:

  • Validation: Ensures that all custom metadata conforms to predefined data types, reducing errors during data import and processing.
  • Consistency: Maintains uniformity in data types across different datasets and projects, which simplifies data management and analysis.
  • Filtering and Sorting: Enhances the ability to filter and sort data efficiently in the Encord platform, enabling more accurate and quick data retrieval.

Metadata Schema Table

Use add_scalar to add a scalar key to your metadata schema.

| Scalar Key | Description | Display Benefits |
| --- | --- | --- |
| boolean | Binary data type with values “true” or “false”. | Filtering by binary values |
| datetime | ISO 8601 formatted date and time. | Filtering by time and date |
| number | Numeric data type supporting float values. | Filtering by numeric values |
| uuid | Customer-specified unique identifier for a data unit. | Filtering by customer-specified unique identifier |
| varchar | Textual data type. Formerly string. string can be used as an alias for varchar, but we STRONGLY RECOMMEND that you use varchar. | Filtering by string |
| text | Text data with unlimited length (example: transcripts for audio). Formerly long_string. long_string can be used as an alias for text, but we STRONGLY RECOMMEND that you use text. | Storing and filtering large amounts of text |
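A minimal sketch of adding scalar keys with the Python SDK follows; it assumes SSH-key authentication, and the key names are illustrative.

# Import dependencies
from encord import EncordUserClient

# Authenticate with Encord using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Fetch the metadata schema
metadata_schema = user_client.metadata_schema()

# Add scalar keys of various types
metadata_schema.add_scalar("captured_at", data_type="datetime")
metadata_schema.add_scalar("price", data_type="number")
metadata_schema.add_scalar("description", data_type="varchar")

# Persist the changes
metadata_schema.save()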

Use add_enum and add_enum_options to add an enum and enum options to your metadata schema.

| Key | Description | Display Benefits |
| --- | --- | --- |
| enum | Enumerated type with predefined set of values. | Facilitates categorical filtering and data validation |
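Continuing the sketch above, the following adds an enum key with initial options and then extends it; the key and values are illustrative.

# Add an enum key with an initial set of options
metadata_schema.add_enum("weather", values=["sunny", "cloudy"])

# Extend the existing enum with further options
metadata_schema.add_enum_options("weather", values=["rainy"])

# Persist the changes
metadata_schema.save()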

Use add_embedding to add an embedding to your metadata schema.

| Key | Description | Display Benefits |
| --- | --- | --- |
| embedding | 512 dimension embeddings for Active, 1 to 4096 for Index. | Filtering by embeddings, similarity search, 2D scatter plot visualization (Coming Soon) |
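Continuing the same sketch, the following adds an embedding key; the key name is illustrative, and the dimensionality must match your embeddings (512 for Active, 1 to 4096 for Index).

# Add a 512-dimension embedding key
metadata_schema.add_embedding("clip_embedding", size=512)

# Persist the changes
metadata_schema.save()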

Incorrectly specifying a data type in the schema can cause errors when filtering your data in Index or Active. If you encounter errors while filtering, verify your schema is correct. If your schema has errors, correct the errors, re-import the schema, and then re-sync your Active Project.

Step 4: Create JSON or CSV for import

For a list of supported file formats for each data type, go here.

All types of data (videos, images, image groups, image sequences, and DICOM) from a private cloud are added to a Dataset in the same way: using a JSON or CSV file that contains links to all of the images, image groups, videos, and DICOM files in your cloud storage.

Encord supports file names up to 300 characters in length for any file or video you upload.

Create JSON file for import

For detailed information about the JSON file format used for import go here.

The information provided about each of the following data types is designed to get you up and running as quickly as possible without going too deeply into the why or how. Look at the template for each data type, then the examples, and adjust the examples to suit your needs.

If skip_duplicate_urls is set to true, all object URLs that exactly match existing images/videos in the dataset are skipped.
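For example, a minimal JSON file registering two videos from a GCP bucket might look as follows; the gs:// URLs are illustrative.

{
  "videos": [
    {
      "objectUrl": "gs://my-bucket/videos/video1.mp4"
    },
    {
      "objectUrl": "gs://my-bucket/videos/video2.mp4"
    }
  ],
  "skip_duplicate_urls": true
}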


Create CSV file for import

In the CSV file format, the column headers specify which type of data is being uploaded. You can add a single data type at a time, or combine multiple data types in a single CSV file.

Details for each data format are given in the sections below.

Encord supports up to 10,000 entries for upload in the CSV file.
  • Object URLs can’t contain whitespace.
  • For backwards compatibility reasons, a single column CSV is supported. A file with the single ObjectUrl column is interpreted as a request for video upload. If your objects are of a different type (for example, images), this error displays: “Expected a video, got a file of type XXX”.
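For example, a minimal single-column CSV requesting the upload of two videos might look as follows; the URLs are illustrative.

ObjectUrl
gs://my-bucket/videos/video1.mp4
gs://my-bucket/videos/video2.mp4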

Step 5: Upload data to Encord

To use your data in Encord, it must first be uploaded to Encord's Files storage. Once uploaded, your data can be reused across multiple Projects; the files themselves contain no labels or annotations.

The following script creates a new folder in Files and uses your GCP integration to upload data to that folder. It works for all file types.

If “Upload is still in progress, try again later!” is returned, use the script in Step 6 to check whether the upload has finished.

Ensure that you:

  • Replace <private_key_path> with the path to your private key.
  • Replace <integration_title> with the title of the integration you want to use.
  • Replace <folder_name> with the folder name. The scripts assume that the specified folder name is unique.
  • Replace path/to/json/file.json with the path to a JSON file specifying which cloud storage files should be uploaded.
  • Replace A folder to store my files with a meaningful description for your folder.
  • Replace "my": "folder_metadata" with any metadata you want to add to the folder.

The script has several possible outputs:

  • “Upload is still in progress, try again later!”: The upload has not finished. Run this script again later to check if the upload has finished.
  • “Upload completed”: The upload completed. If any files failed to upload, the URLs are listed.
  • “Upload failed”: The entire upload failed, and not just individual files. Ensure your JSON file is formatted correctly.
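A sketch of such a script follows. It assumes the SDK's Storage folder methods create_storage_folder, add_private_data_to_folder_start, and add_private_data_to_folder_get_result, and SSH-key authentication.

# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

# Authenticate with Encord using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Find the GCP integration by its title
integrations = user_client.get_cloud_integrations()
integration = next(i for i in integrations if i.title == "<integration_title>")

# Create a new folder in Files
storage_folder = user_client.create_storage_folder(
    "<folder_name>",
    description="A folder to store my files",
    client_metadata={"my": "folder_metadata"},
)

# Start the upload from your cloud storage using the JSON specification
upload_job_id = storage_folder.add_private_data_to_folder_start(
    integration_id=integration.id,
    private_files="path/to/json/file.json",
    ignore_errors=True,
)

# Wait briefly for a result
res = storage_folder.add_private_data_to_folder_get_result(
    upload_job_id, timeout_seconds=5
)

if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
    print(f"upload_job_id={upload_job_id}")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed")
    if res.data_unit_errors:
        print("The following URLs failed to upload:")
        for e in res.data_unit_errors:
            print(e.object_urls)
else:
    print(f"Upload failed: {res.errors}")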

Step 6: Check data upload

If Step 5 returns "Upload is still in progress, try again later!", run the following code to query the Encord server again. Ensure that you replace <upload_job_id> with the job ID output by the script in Step 5 (for example, upload_job_id=c4026edb-4fw2-40a0-8f05-a1af7f465727).

The script has the same possible outputs as the script in Step 5.

# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import LongPollingStatus

upload_job_id = "<upload_job_id>"

# Authenticate with Encord using the path to your private key.
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Retrieve the Storage folder that the upload was started in
folders = list(user_client.find_storage_folders(search="<folder_name>", page_size=1))
storage_folder = folders[0]

# Check upload status
res = storage_folder.add_private_data_to_folder_get_result(
    upload_job_id, timeout_seconds=5
)
print(f"Execution result: {res}")

if res.status == LongPollingStatus.PENDING:
    print("Upload is still in progress, try again later!")
elif res.status == LongPollingStatus.DONE:
    print("Upload completed")
    if res.data_unit_errors:
        print("The following URLs failed to upload:")
        for e in res.data_unit_errors:
            print(e.object_urls)
else:
    print(f"Upload failed: {res.errors}")
Omitting the timeout_seconds argument from the add_private_data_to_folder_get_result() method performs status checks until the upload has finished.

Step 7: Create a Dataset

Creating a Dataset and adding files to a Dataset are two distinct steps. Click here to learn how to add data to an existing Dataset.
Datasets cannot be deleted using the SDK or the API. Use the Encord platform to delete Datasets.

The following example creates a Dataset called “Houses” that expects data hosted on GCP.

  • Substitute <private_key_path> with the file path for your private key.
  • Replace “Houses” with the name you want your Dataset to have.

| Storage location | StorageLocation method argument | Represented by |
| --- | --- | --- |
| AWS S3 | AWS | 1 |
| GCP | GCP | 2 |
| Azure blob | AZURE | 3 |
| Open Telekom Cloud | OTC | 4 |
| Encord storage | CORD_STORAGE | 0 |
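A minimal sketch of creating the Dataset with the Python SDK follows, assuming SSH-key authentication.

# Import dependencies
from encord import EncordUserClient
from encord.orm.dataset import StorageLocation

# Authenticate with Encord using the path to your private key
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Create a Dataset that expects data hosted on GCP
dataset = user_client.create_dataset("Houses", StorageLocation.GCP)
print(dataset.dataset_hash)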

Step 8: Add your data to a Dataset

Now that you have uploaded your data and created a Dataset, it's time to add your files to the Dataset. The following script adds all files in a specified folder to a Dataset.

  • Replace <private_key_path> with the path to your private key.
  • Replace <folder_name> with the name of the Storage folder that contains your files.
  • Replace <dataset_hash> with the hash of the Dataset you want to add the data units to.
Files added to the folder at a later time will not be automatically added to the Dataset.
from encord import EncordUserClient

# Authentication
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Find the storage folder by name
folder_name = "<folder_name>"  # Replace with your folder's name
folders = list(user_client.find_storage_folders(search=folder_name, page_size=1))

dataset = user_client.get_dataset("<dataset_hash>")

# Ensure the folder was found
if folders:
    storage_folder = folders[0]

    # List all data units
    items = list(storage_folder.list_items())

    # Collect all item UUIDs
    item_uuids = [item.uuid for item in items]

    # Output the retrieved data units
    for item in items:
        print(f"UUID: {item.uuid}, Name: {item.name}, Type: {item.item_type}")

    # Link all items at once if there are any
    if item_uuids:
        dataset.link_items(item_uuids)
else:
    print("Folder not found.")

Step 9: Verify your files are in the Dataset

After adding your files to the Dataset, verify that all the files you expect to be there made it into the Dataset.

The following script prints the URLs of all the files in a Dataset. Ensure that you:

  • Replace <private_key_path> with the path to your private key.
  • Replace <dataset_hash> with the hash of your Dataset.
Sample Code
# Import dependencies
from encord import EncordUserClient, Dataset

# Initiate client
user_client = EncordUserClient.create_with_ssh_private_key(
    ssh_private_key_path="<private_key_path>"
)

# Collect the file links for all files in the Dataset
dataset_level_file_links = []
dataset: Dataset = user_client.get_dataset("<dataset_hash>")
for data in dataset.list_data_rows():
    dataset_level_file_links.append(data.file_link)
print(dataset_level_file_links)

Step 10: Prepare your data for label/annotation import


Step 11: Import labels/annotations