How to use dataset resources
Use the resources listed below to wrap data from any data source. This enables you to register your dataset's columnar data and metadata to Vectice.
Vectice stores the metadata of your datasets, not your actual datasets.
| Resource | Description |
|---|---|
| Resource | Wrap your dataset's columnar data and metadata from your storage location. It can be extended for any data source (for example, Redshift, RDS, etc.). |
| FileResource | Wrap your dataset's columnar data and metadata from a local file. |
| GCSResource | Wrap your dataset's columnar data and metadata from your Google Cloud Storage (GCS) source. |
| S3Resource | Wrap your dataset's columnar data and metadata from your AWS S3 source. |
| BigQueryResource | Wrap your dataset's columnar data and metadata from your BigQuery source. |
| DatabricksTableResource | Wrap your dataset's columnar data and metadata from your Databricks source. |
Resource Usage Examples
Below we highlight how you can use the available resources to wrap your dataset's columnar data and metadata so you can later register your dataset to Vectice.
A Custom Data Source
To wrap data from a custom data source, create a custom resource class that inherits from the base Resource class and implement your own _build_metadata() and _fetch_data() methods.
View our guide How to add a custom data source for more information and examples.
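The pattern can be sketched as follows. This is a simplified, self-contained illustration: the stand-in base class below mimics how a resource calls the two methods, and the `InMemoryResource` name and its metadata fields are hypothetical. In practice you would subclass `vectice.Resource` itself; check the Vectice API reference for the exact base-class contract.

```python
import pandas as pd


class Resource:
    """Simplified stand-in for the vectice Resource base class, so this
    sketch runs standalone. Subclasses must implement the two methods."""

    def __init__(self):
        self.metadata = self._build_metadata()
        self.data = self._fetch_data()

    def _build_metadata(self):
        raise NotImplementedError

    def _fetch_data(self):
        raise NotImplementedError


class InMemoryResource(Resource):
    """Hypothetical custom resource wrapping an in-memory DataFrame."""

    def __init__(self, name: str, dataframe: pd.DataFrame):
        self._name = name
        self._dataframe = dataframe
        super().__init__()

    def _build_metadata(self) -> dict:
        # Describe the dataset: the metadata Vectice records.
        return {
            "name": self._name,
            "rows": len(self._dataframe),
            "columns": list(self._dataframe.columns),
        }

    def _fetch_data(self) -> pd.DataFrame:
        # Return the columnar data itself.
        return self._dataframe


df = pd.DataFrame({"id": [1, 2, 3], "amount": [10.0, 20.5, 7.25]})
resource = InMemoryResource("payments", df)
print(resource.metadata["rows"])  # 3
```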
Local Data Source
Use FileResource() to wrap columnar data that you have stored in a local file.
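A minimal usage sketch, assuming the vectice package is installed; the file path is a placeholder:

```python
from vectice import FileResource

# Wrap a local CSV file; paths accepts a single path or a list of paths.
resource = FileResource(paths="train.csv")
```

The resulting resource can then be passed to a Vectice dataset when you register it.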
Google Cloud Storage Data Source
Use GCSResource() to wrap columnar data that you have stored in Google Cloud Storage (GCS).
You can optionally retrieve the file size, creation date, and last-updated date (used for auto-versioning) for up to 5000 files.
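A minimal sketch, assuming the vectice package is installed and your Google Cloud credentials are configured; the bucket URI is hypothetical:

```python
from vectice import GCSResource

# The gs:// URI is a placeholder; the resource uses your Google Cloud
# credentials to collect file metadata from the bucket.
resource = GCSResource(uris="gs://my-bucket/datasets/train.csv")
```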
AWS S3 Data Source
Use S3Resource() to wrap data that you have stored in AWS S3.
You can optionally retrieve the file size, creation date, and last-updated date (used for auto-versioning) for up to 5000 files.
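A minimal sketch, assuming the vectice and boto3 packages are installed and AWS credentials are configured; the bucket URI is hypothetical, and passing the client explicitly via `s3_client` is an assumption to verify against the API reference:

```python
import boto3

from vectice import S3Resource

# The boto3 client carries your AWS credentials; the s3:// URI is a placeholder.
s3_client = boto3.client("s3")
resource = S3Resource(uris="s3://my-bucket/datasets/train.csv", s3_client=s3_client)
```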
BigQuery Data Source
Use BigQueryResource() to wrap data that you have stored in Google's BigQuery.
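A minimal sketch, assuming the vectice package is installed and your Google Cloud credentials are configured; the project, dataset, and table names are placeholders:

```python
from vectice import BigQueryResource

# paths can point to a table ("project.dataset.table"); the value here
# is illustrative only.
resource = BigQueryResource(paths="my-project.sales.transactions")
```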
Databricks Data Source
Use DatabricksTableResource() to wrap data that you have stored in Databricks.
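A minimal sketch, intended to run inside a Databricks notebook where `spark` is the active SparkSession; the table name is a placeholder, and the parameter names shown are assumptions to verify against the Vectice API reference:

```python
from vectice import DatabricksTableResource

# `spark` is the SparkSession provided by the Databricks runtime;
# the table name is illustrative only.
resource = DatabricksTableResource(spark_client=spark, paths="sales.transactions")
```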