A metadata representation of a database table within the Astro ecosystem that will not be persisted after the corresponding operations are complete.


Last Updated: Dec. 16, 2021

Access Instructions

Install the Astro provider package into your Airflow environment.

Import the module into your DAG file and instantiate it with your desired parameters.
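For example, the library can be installed with pip. Note that the package name `astro-projects` is an assumption based on early releases of the Astro library and may differ for your version:

```shell
# Install the Astro library into the same environment as Airflow
# (package name assumed from early releases of the library)
pip install astro-projects
```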


conn_id (str): The ID of the configured Airflow Connection to use to connect to a database.
database (Optional[str]): The name of the database to use when executing the SQL.
warehouse (Optional[str]): The name of the warehouse to use when executing the SQL.


Following the traditional DevOps concept of "pets vs. cattle", you can decide whether the result of a function is a "pet" (e.g. a named table that you want to reference later) or "cattle" that can be deleted at any time. If you want to ensure that the output of your task is later garbage collected, declaring it as a nameless TempTable will place it into the astro_tmp schema, which can later be bulk deleted. All @transform-decorated functions output to a TempTable by default unless a Table object is passed as the output_table argument.

from datetime import datetime

from airflow.models import DAG
from astro.sql import transform
from astro.sql.table import Table, TempTable

@transform
def my_first_sql_transformation(input_table: Table):
    return "SELECT * FROM {input_table}"

@transform
def my_second_sql_transformation(input_table_2: Table):
    return "SELECT * FROM {input_table_2}"

dag = DAG(dag_id="example_temp_table_dag", start_date=datetime(2021, 12, 1))

with dag:
    # output_table is a nameless TempTable, so the result lands in astro_tmp
    my_table = my_first_sql_transformation(
        input_table=Table(table_name="foo", database="bar", conn_id="postgres_conn"),
        output_table=TempTable(database="bar", conn_id="postgres_conn"),
    )
    my_second_sql_transformation(input_table_2=my_table)