Overview
Run Cache is a pip-installable plugin for dbt Core™ that reduces warehouse compute time without any migration. It works out of the box and runs wherever dbt Core does: locally, in GitHub Actions, or in an orchestrator (Dagster, Airflow, etc.). Run Cache automatically tracks state, letting you skip models with no fresh data and automatically defer to and clone from production, all without any setup (no more manifest.json).
Quickstart
Welcome to Run Cache!
This quickstart guide will help you get started with our dbt Core™ plugin. Within minutes, you can start reducing warehouse costs and developer overhead.
Prerequisites
- Python versions: 3.10-3.13
- dbt versions: 1.7 - 1.11 (latest)
- Warehouses: Snowflake, Databricks, BigQuery
- pip
- Terminal
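If you are unsure which interpreter your shell resolves to, a quick one-off check (a convenience snippet, not part of Run Cache) confirms it falls in the supported range:

```shell
# Convenience check, not part of Run Cache: verify the active Python
# interpreter is within the supported 3.10-3.13 range.
python3 - <<'EOF'
import sys
ok = (3, 10) <= sys.version_info[:2] <= (3, 13)
print(f"Python {sys.version.split()[0]} is {'supported' if ok else 'unsupported'}")
EOF
```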
Step 1: Install Run Cache
Use one of the following methods:
- pip
- UV
- requirements.txt
Pip
cd to your project folder from your terminal of choice. We recommend using a virtual environment to preserve your global Python environment.
cd to/your/project
python3 -m venv .venv
source .venv/bin/activate
pip install run-cache
UV
cd to your project folder from your terminal of choice. uv requires a pyproject.toml; if you do not already have one, create it with the uv init command. Install dbt-core and the adapter for your warehouse, then add Run Cache:
cd to/your/project
uv add run-cache
uv sync
source .venv/bin/activate
requirements.txt
Open your requirements file and append the following:
run-cache
From your project root, activate your environment and install the requirements:
cd to/your/project
source .venv/bin/activate
pip install -r requirements.txt
or
uv sync
Step 2: Configure prod (Optional)
By default, Run Cache looks for a target named prod to defer from. However, your production target may have a different name. If so, you will need to add the run_cache_defer_to flag to your dbt_project.yml or profiles.yml (see the reference).
dbt_project.yml
flags:
  run_cache_defer_to: production
profiles.yml
my_project:
  target: dev
  outputs:
    dev:
      type: bigquery
      project: my-project
      dataset: my_dataset
      run_cache_defer_to: production
Step 3: Login
Execute the dbt run command after installation. It will prompt you to log in to Run Cache.
Once authenticated, your dbt run will execute with Run Cache enabled.
Step 4: Experience faster execution and less warehouse compute
Run Cache requires nothing else. It runs in the background during any dbt run command and saves you compute without any change to your behavior or workflow.
If you no longer want to use it, simply uninstall it:
pip uninstall run-cache
Pro-tips
Freshness Tolerance
One of the best ways to save compute is to explicitly set a freshness tolerance. By default, this is set to 45 minutes, assuming models rarely need to be updated more than once an hour. But being so conservative is not always necessary, especially when developing locally.
You can set this globally by giving run_cache_freshness_tolerance a different value in dbt_project.yml:
flags:
  run_cache_freshness_tolerance: "{{ '12 hours' if target.name == 'prod' else '7 days' }}"
You can also set it at the model level, either inline:
{{
  config(
    materialized='table',
    run_cache_freshness_tolerance='7d'
  )
}}
Or in dbt_project.yml:
models:
  your_project_name:
    my_model_name:
      +run_cache_freshness_tolerance: 7d
See more details in the config reference.
Simplify Prod Runs
Enabling Run Cache in production for scheduled runs has many benefits. Production runs will skip the execution of views and seeds that have not changed, as well as models and tests that have not changed and have no new upstream data. With Run Cache, you can run dbt more frequently (for example, every hour), and only the models that need to update will run. If upstream data lands more often than you need your data refreshed, freshness tolerances let you control the cadence of each model. This gives your teams faster data updates without additional cost or complexity.
Production runs also make development faster: because production populates the cache, developers can clone production data automatically without recomputing it.
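As a sketch of what a scheduled production run could look like, here is a hypothetical GitHub Actions workflow. The workflow name, cron cadence, adapter choice, and secret handling are all illustrative assumptions, not part of Run Cache:

```yaml
# Hypothetical workflow: hourly prod run with Run Cache installed.
# Names, cadence, and adapter are assumptions for illustration only;
# warehouse credentials / profiles.yml setup is omitted.
name: hourly-prod-run
on:
  schedule:
    - cron: "0 * * * *"  # every hour; Run Cache skips models with no new data
jobs:
  dbt:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install dbt-bigquery run-cache
      - run: dbt run --target prod
```

Because Run Cache tracks state itself, the workflow needs no manifest.json artifact passing between runs.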
Disable on run
Prepend the RUN_CACHE_DISABLED environment variable to your command:
RUN_CACHE_DISABLED=1 dbt run --target dev --select "customers"
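To disable Run Cache for an entire shell session rather than a single command, you can export the same variable once (a minimal sketch using the RUN_CACHE_DISABLED variable shown above):

```shell
# Disable Run Cache for every dbt invocation in this shell session
export RUN_CACHE_DISABLED=1

# Subsequent commands inherit it, e.g.:
# dbt run --target dev --select "customers"
echo "$RUN_CACHE_DISABLED"
```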