Log dataset structure and statistics

Data scientists can document dataset structure and statistics in Vectice by passing pandas or Spark (version 3.0 and newer) dataframes when logging datasets.

Note that most common statistics are currently not captured for Spark dataframes.

Statistics are captured for the first 100 columns of your dataframe. To keep the data anonymous, statistics are not captured if the number of rows is below 100. The Org Admin can adjust this threshold in the organization settings.
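As an illustration of these thresholds, here is a minimal pandas sketch. The helper names and constants below are hypothetical, not part of the Vectice API; only the 100-column cap and the 100-row anonymity floor come from the text above:

```python
import pandas as pd

COLUMN_CAP = 100  # stats are captured for the first 100 columns only
ROW_FLOOR = 100   # default anonymity threshold (adjustable by the Org Admin)

def stats_would_be_captured(df: pd.DataFrame) -> bool:
    """Hypothetical check mirroring the documented row threshold."""
    return len(df) >= ROW_FLOOR

def columns_with_stats(df: pd.DataFrame) -> list:
    """Hypothetical helper: only the first 100 columns are profiled."""
    return list(df.columns[:COLUMN_CAP])

small_df = pd.DataFrame({"a": range(50)})                    # below the row floor
large_df = pd.DataFrame({"a": range(500), "b": range(500)})  # above the row floor

print(stats_would_be_captured(small_df))  # False
print(stats_would_be_captured(large_df))  # True
```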

Here are the automatically captured statistics based on column types in your dataframes.

  • If no dataframe is provided, it will retrieve schema columns and rows from the resource (if available).

  • If dataframes are provided, it will infer schema columns and rows based on the dataframe.
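Concretely, the "schema columns and rows" inferred from a dataframe are its column names, column types, and row count. A minimal pandas sketch (the sample dataframe is illustrative):

```python
import pandas as pd

# Example dataframe; when provided, its schema and row count are inferred
df = pd.DataFrame({"city": ["Paris", "Lyon"], "price": [10.0, 20.0]})

schema_columns = list(df.columns)               # column names
column_types = df.dtypes.astype(str).to_dict()  # column types
row_count = len(df)                             # number of rows

print(schema_columns)  # ['city', 'price']
print(row_count)       # 2
```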

| Stats | Dataframe Column Type | Description |
| --- | --- | --- |
| Unique | Text | The count of unique values in the dataframe. |
| Most Common | Text | The most recurrent value in the dataframe and the percentage of occurrences. |
| Mean | Numeric, Date | The average value of the data points in the dataframe. |
| Median | Numeric, Date | The middle value in the dataframe. |
| Variance | Numeric | The measure of the distribution of data points in the dataframe from the mean value. |
| St. Deviation | Numeric | The square root of the variance; a commonly used measure of the data distribution. |
| Minimum | Numeric, Date | The smallest data point in the dataframe. |
| Maximum | Numeric, Date | The largest data point in the dataframe. |
| Quantiles | Numeric | The values at the 25%, 50%, and 75% percentiles, with their min and max. |
| Missing | Text, Numeric, Date | The percentage of missing values in the data column. |
| True | Boolean | The count of true values with the percentage of occurrence in the column. |
| False | Boolean | The count of false values with the percentage of occurrence in the column. |
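The statistics in the table map onto standard dataframe operations. The sketch below shows how equivalent figures could be computed with pandas; it is illustrative only, not Vectice's implementation, and the sample dataframe is hypothetical:

```python
import pandas as pd

df = pd.DataFrame({
    "city": ["Paris", "Lyon", "Paris", None],  # Text column
    "price": [10.0, 20.0, 30.0, 40.0],         # Numeric column
    "sold": [True, False, True, True],         # Boolean column
})

# Text: unique count, most common value with its share, missing %
unique_count = df["city"].nunique()
most_common = df["city"].mode()[0]
most_common_pct = df["city"].value_counts(normalize=True)[most_common] * 100
missing_pct = df["city"].isna().mean() * 100

# Numeric: mean, median, variance, st. deviation, min, max, quantiles
numeric_profile = {
    "mean": df["price"].mean(),
    "median": df["price"].median(),
    "variance": df["price"].var(),
    "st_deviation": df["price"].std(),
    "minimum": df["price"].min(),
    "maximum": df["price"].max(),
    "quantiles": df["price"].quantile([0.25, 0.5, 0.75]).to_dict(),
}

# Boolean: true/false counts with percentage of occurrence
true_count = int(df["sold"].sum())
true_pct = df["sold"].mean() * 100
false_count = int((~df["sold"]).sum())
```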

Capture schema without statistics

By default, both schema and column statistics are captured. Setting capture_schema_only to True captures only schema information, excluding column stats.

resource = FileResource(paths, dataframes, capture_schema_only=True)

Column stats computation can impact processing time, so it is recommended to set capture_schema_only to True if performance is a concern or detailed stats are not needed.

Log origin and cleaned dataset statistics

To log your origin and cleaned resource's structure and statistics for each column type, pass a dataframe to your Resource class using the dataframes parameter:

df = dataframe...  # load your pandas or spark dataframe

gcs_resource = GCSResource(
    uris="gs://...",  # path to your dataset in GCS
    dataframes=df,
)

# Log the origin or cleaned dataset and its structure and statistics to Vectice
dataset = Dataset.clean(gcs_resource, name="Clean_Dataset")

Log modeling dataset statistics

To log the structure and statistics of your modeling resources, you need to specify which resource statistics you want. You can collect statistics for all modeling resources (training, testing, and validation) or choose specific resources for statistics.

For instance, if you only want statistics for your testing resource, pass the testing dataframe through the dataframes parameter of the Resource class when creating testing_resource.

Pandas dataframe statistics logging example

# Import pandas and the Vectice classes used in this example
import pandas as pd
from vectice import Dataset, FileResource

# Create your pandas (or Spark) dataframes for your modeling datasets
# (pd.read_csv already returns a DataFrame)
training_dataframe = pd.read_csv("train_clean.csv")
testing_dataframe = pd.read_csv("train_reduced.csv")
validation_dataframe = pd.read_csv("validation.csv")

# Create your dataset resources to wrap your datasets and collect statistics
training_resource = FileResource(paths="train_clean.csv", dataframes=training_dataframe)
testing_resource = FileResource(paths="train_reduced.csv", dataframes=testing_dataframe)

# Data resource without collecting statistics
validation_resource = FileResource(paths="validation.csv")

# Log all modeling datasets and their structure and statistics to Vectice
modeling_dataset = Dataset.modeling(
    name="Modeling",
    training_resource=training_resource,
    testing_resource=testing_resource,
    validation_resource=validation_resource,
)