Release: 1.0.12 | Release Date: February 15, 2016

SQLAlchemy 1.0 Documentation

Describing Databases with MetaData

This section discusses the fundamental Table, Column and MetaData objects.

A collection of metadata entities is stored in an object aptly named MetaData:

from sqlalchemy import *

metadata = MetaData()

MetaData is a container object that keeps together many different features of a database (or multiple databases) being described.

To represent a table, use the Table class. Its two primary arguments are the table name, then the MetaData object which it will be associated with. The remaining positional arguments are mostly Column objects describing each column:

user = Table('user', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('email_address', String(60)),
    Column('password', String(20), nullable=False)
)

Above, a table called user is described, which contains four columns. The primary key of the table consists of the user_id column. Multiple columns may be assigned the primary_key=True flag which denotes a multi-column primary key, known as a composite primary key.

Note also that each column describes its datatype using objects corresponding to genericized types, such as Integer and String. SQLAlchemy features dozens of types of varying levels of specificity as well as the ability to create custom types. Documentation on the type system can be found at Column and Data Types.
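
Purely for illustration (this table is not part of the examples that follow), a sketch showing a few more of the generic types alongside Integer and String:

from sqlalchemy import Table, Column, MetaData
from sqlalchemy import Integer, String, Numeric, DateTime, Boolean

metadata = MetaData()

# a hypothetical table using several generic types
order = Table('order', metadata,
    Column('order_id', Integer, primary_key=True),
    Column('amount', Numeric(10, 2)),      # fixed-precision numeric
    Column('placed_at', DateTime),         # date/time value
    Column('is_paid', Boolean)             # boolean value
)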

Accessing Tables and Columns

The MetaData object contains all of the schema constructs we’ve associated with it. It supports a few methods of accessing these table objects, such as the sorted_tables accessor which returns a list of each Table object in order of foreign key dependency (that is, each table is preceded by all tables which it references):

>>> for t in metadata.sorted_tables:
...    print(t.name)
user
user_preference
invoice
invoice_item

In most cases, individual Table objects have been explicitly declared, and these objects are typically accessed directly as module-level variables in an application. Once a Table has been defined, it has a full set of accessors which allow inspection of its properties. Given the following Table definition:

employees = Table('employees', metadata,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60), nullable=False),
    Column('employee_dept', Integer, ForeignKey("departments.department_id"))
)

Note the ForeignKey object used in this table - this construct defines a reference to a remote table, and is fully described in Defining Foreign Keys. Methods of accessing information about this table include:

# access the column "employee_id":
employees.columns.employee_id

# or just
employees.c.employee_id

# via string
employees.c['employee_id']

# iterate through all columns
for c in employees.c:
    print(c)

# get the table's primary key columns
for primary_key in employees.primary_key:
    print(primary_key)

# get the table's foreign key objects:
for fkey in employees.foreign_keys:
    print(fkey)

# access the table's MetaData:
employees.metadata

# access the table's bound Engine or Connection, if its MetaData is bound:
employees.bind

# access a column's name, type, nullable, primary key, foreign key
employees.c.employee_id.name
employees.c.employee_id.type
employees.c.employee_id.nullable
employees.c.employee_id.primary_key
employees.c.employee_dept.foreign_keys

# get the "key" of a column, which defaults to its name, but can
# be any user-defined string:
employees.c.employee_name.key

# access a column's table:
employees.c.employee_id.table is employees

# get the table related by a foreign key
list(employees.c.employee_dept.foreign_keys)[0].column.table
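
These accessors are most often used to build SQL expressions; a minimal sketch, assuming the employees table defined above:

# construct a SELECT statement using table-bound columns
from sqlalchemy import select

stmt = select([employees.c.employee_name]).where(employees.c.employee_id == 5)
print(stmt)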

Creating and Dropping Database Tables

Once you’ve defined some Table objects, assuming you’re working with a brand new database, one thing you might want to do is issue CREATE statements for those tables and their related constructs (as an aside, it’s also quite possible that you don’t want to do this, if you already have some preferred methodology such as tools included with your database or an existing scripting system - if that’s the case, feel free to skip this section - SQLAlchemy has no requirement that it be used to create your tables).

The usual way to issue CREATE is to use create_all() on the MetaData object. This method will issue queries that first check for the existence of each individual table, and if not found will issue the CREATE statements:

engine = create_engine('sqlite:///:memory:')

metadata = MetaData()

user = Table('user', metadata,
    Column('user_id', Integer, primary_key=True),
    Column('user_name', String(16), nullable=False),
    Column('email_address', String(60), key='email'),
    Column('password', String(20), nullable=False)
)

user_prefs = Table('user_prefs', metadata,
    Column('pref_id', Integer, primary_key=True),
    Column('user_id', Integer, ForeignKey("user.user_id"), nullable=False),
    Column('pref_name', String(40), nullable=False),
    Column('pref_value', String(100))
)

metadata.create_all(engine)

create_all() creates foreign key constraints between tables usually inline with the table definition itself, and for this reason it also generates the tables in order of their dependency. There are options to change this behavior such that ALTER TABLE is used instead.
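
One such option is the use_alter flag of ForeignKey (and ForeignKeyConstraint), which requests that the constraint be emitted via ALTER TABLE after both tables exist. A minimal sketch, assuming two hypothetical, mutually-dependent tables:

from sqlalchemy import Table, Column, Integer, ForeignKey, MetaData

metadata = MetaData()

# "widget" and "entry" are illustrative names; each table references the
# other, so one constraint is deferred to an ALTER TABLE via use_alter
widget = Table('widget', metadata,
    Column('id', Integer, primary_key=True),
    Column('favorite_entry_id', Integer,
           ForeignKey('entry.id', use_alter=True, name='fk_widget_favorite_entry'))
)

entry = Table('entry', metadata,
    Column('id', Integer, primary_key=True),
    Column('widget_id', Integer, ForeignKey('widget.id'))
)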

Dropping all tables is similarly achieved using the drop_all() method. This method does the exact opposite of create_all() - the presence of each table is checked first, and tables are dropped in reverse order of dependency.

Creating and dropping individual tables can be done via the create() and drop() methods of Table. These methods by default issue the CREATE or DROP regardless of the table being present:

engine = create_engine('sqlite:///:memory:')

meta = MetaData()

employees = Table('employees', meta,
    Column('employee_id', Integer, primary_key=True),
    Column('employee_name', String(60), nullable=False, key='name'),
    Column('employee_dept', Integer, ForeignKey("departments.department_id"))
)
employees.create(engine)

The drop() method:

employees.drop(engine)

To enable the “check first for the table existing” logic, add the checkfirst=True argument to create() or drop():

employees.create(engine, checkfirst=True)
employees.drop(engine, checkfirst=True)

Altering Schemas through Migrations

While SQLAlchemy directly supports emitting CREATE and DROP statements for schema constructs, the ability to alter those constructs, usually via the ALTER statement as well as other database-specific constructs, is outside of the scope of SQLAlchemy itself. While it’s easy enough to emit ALTER statements and similar by hand, such as by passing a string to Connection.execute() or by using the DDL construct, it’s a common practice to automate the maintenance of database schemas in relation to application code using schema migration tools.

There are two major migration tools available for SQLAlchemy:

  • Alembic - Written by the author of SQLAlchemy, Alembic features a highly customizable environment and a minimalistic usage pattern, supporting such features as transactional DDL, automatic generation of “candidate” migrations, an “offline” mode which generates SQL scripts, and support for branch resolution.
  • SQLAlchemy-Migrate - The original migration tool for SQLAlchemy, SQLAlchemy-Migrate is widely used and continues under active development. SQLAlchemy-Migrate includes features such as SQL script generation, ORM class generation, ORM model comparison, and extensive support for SQLite migrations.

Specifying the Schema Name

Some databases support the concept of multiple schemas. A Table can reference this by specifying the schema keyword argument:

financial_info = Table('financial_info', meta,
    Column('id', Integer, primary_key=True),
    Column('value', String(100), nullable=False),
    schema='remote_banks'
)

Within the MetaData collection, this table will be identified by the combination of financial_info and remote_banks. If another table called financial_info is referenced without the remote_banks schema, it will refer to a different Table. ForeignKey objects can specify references to columns in this table using the form remote_banks.financial_info.id.
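
As an illustrative sketch (the account table below is hypothetical), a ForeignKey in another table targets the column using the schema-qualified form:

account = Table('account', meta,
    Column('id', Integer, primary_key=True),
    Column('financial_info_id', Integer,
           ForeignKey('remote_banks.financial_info.id'))
)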

The schema argument should be used for any name qualifiers required, including Oracle’s “owner” attribute and similar. It also can accommodate a dotted name for longer schemes:

schema="dbo.scott"

Backend-Specific Options

Table supports database-specific options. For example, MySQL has different table backend types, including “MyISAM” and “InnoDB”. This can be expressed with Table using mysql_engine:

addresses = Table('engine_email_addresses', meta,
    Column('address_id', Integer, primary_key=True),
    Column('remote_user_id', Integer, ForeignKey(users.c.user_id)),
    Column('email_address', String(20)),
    mysql_engine='InnoDB'
)

Other backends may support table-level options as well - these would be described in the individual documentation sections for each dialect.
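
As a sketch only, additional options follow the same <dialectname>_<argname> pattern; the mysql_charset and sqlite_autoincrement keywords below are taken from the MySQL and SQLite dialect documentation, and the table itself is hypothetical:

log_entries = Table('log_entries', meta,
    Column('id', Integer, primary_key=True),
    Column('message', String(200)),
    mysql_engine='InnoDB',          # MySQL storage engine
    mysql_charset='utf8',           # MySQL character set
    sqlite_autoincrement=True       # SQLite AUTOINCREMENT behavior
)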

Column, Table, MetaData API

class sqlalchemy.schema.Column(*args, **kwargs)

Bases: sqlalchemy.schema.SchemaItem, sqlalchemy.sql.expression.ColumnClause

Represents a column in a database table.

__eq__(other)
inherited from the __eq__() method of ColumnOperators

Implement the == operator.

In a column context, produces the clause a = b. If the target is None, produces a IS NULL.

__init__(*args, **kwargs)

Construct a new Column object.

Parameters:
  • name

    The name of this column as represented in the database. This argument may be the first positional argument, or specified via keyword.

    Names which contain no upper case characters will be treated as case insensitive names, and will not be quoted unless they are a reserved word. Names with any number of upper case characters will be quoted and sent exactly. Note that this behavior applies even for databases which standardize upper case names as case insensitive such as Oracle.

    The name field may be omitted at construction time and applied later, at any time before the Column is associated with a Table. This is to support convenient usage within the declarative extension.

  • type_

    The column’s type, indicated using an instance which subclasses TypeEngine. If no arguments are required for the type, the class of the type can be sent as well, e.g.:

    # use a type with arguments
    Column('data', String(50))
    
    # use no arguments
    Column('level', Integer)

    The type argument may be the second positional argument or specified by keyword.

    If the type is None or is omitted, it will first default to the special type NullType. If and when this Column is made to refer to another column using ForeignKey and/or ForeignKeyConstraint, the type of the remote-referenced column will be copied to this column as well, at the moment that the foreign key is resolved against that remote Column object.

    Changed in version 0.9.0: Support for propagation of type to a Column from its ForeignKey object has been improved and should be more reliable and timely.

  • *args – Additional positional arguments include various SchemaItem derived constructs which will be applied as options to the column. These include instances of Constraint, ForeignKey, ColumnDefault, and Sequence. In some cases an equivalent keyword argument is available such as server_default, default and unique.
  • autoincrement

    This flag may be set to False to indicate an integer primary key column that should not be considered to be the “autoincrement” column, that is the integer primary key column which generates values implicitly upon INSERT and whose value is usually returned via the DBAPI cursor.lastrowid attribute. It defaults to True to satisfy the common use case of a table with a single integer primary key column. If the table has a composite primary key consisting of more than one integer column, set this flag to True only on the column that should be considered “autoincrement”.

    The setting only has an effect for columns which are:

    • Integer derived (i.e. INT, SMALLINT, BIGINT).
    • Part of the primary key
    • Not referring to another column via ForeignKey, unless the value is specified as 'ignore_fk':
      # turn on autoincrement for this column despite
      # the ForeignKey()
      Column('id', ForeignKey('other.id'),
                  primary_key=True, autoincrement='ignore_fk')

      It is typically not desirable to have “autoincrement” enabled on such a column as its value intends to mirror that of a primary key column elsewhere.

    • Have no server side or client side defaults (with the exception of Postgresql SERIAL).

    The setting has these two effects on columns that meet the above criteria:

    • DDL issued for the column will include database-specific keywords intended to signify this column as an “autoincrement” column, such as AUTO_INCREMENT on MySQL, SERIAL on Postgresql, and IDENTITY on MS-SQL. It does not issue AUTOINCREMENT for SQLite since this is a special SQLite flag that is not required for autoincrementing behavior.
    • The column will be considered to be available as cursor.lastrowid or equivalent, for those dialects which “post fetch” newly inserted identifiers after a row has been inserted (SQLite, MySQL, MS-SQL). It does not have any effect in this regard for databases that use sequences to generate primary key identifiers (i.e. Firebird, Postgresql, Oracle).

    Changed in version 0.7.4: autoincrement accepts a special value 'ignore_fk' to indicate that the autoincrementing status should apply regardless of foreign key references. This applies to certain composite foreign key setups, such as the one demonstrated in the ORM documentation at Rows that point to themselves / Mutually Dependent Rows.

  • default

    A scalar, Python callable, or ColumnElement expression representing the default value for this column, which will be invoked upon insert if this column is otherwise not specified in the VALUES clause of the insert. This is a shortcut to using ColumnDefault as a positional argument; see that class for full detail on the structure of the argument.

    Contrast this argument to Column.server_default which creates a default generator on the database side.
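
    A combined sketch using default together with onupdate, server_default, key, index and unique appears after this parameter list.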

  • doc – optional String that can be used by the ORM or similar to document attributes. This attribute does not render SQL comments (a future attribute ‘comment’ will achieve that).
  • key – An optional string identifier which will identify this Column object on the Table. When a key is provided, this is the only identifier referencing the Column within the application, including ORM attribute mapping; the name field is used only when rendering SQL.
  • index – When True, indicates that the column is indexed. This is a shortcut for using an Index construct on the table. To specify indexes with explicit names or indexes that contain multiple columns, use the Index construct instead.
  • info – Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.
  • nullable – If set to the default of True, indicates the column will be rendered as allowing NULL, else it’s rendered as NOT NULL. This parameter is only used when issuing CREATE TABLE statements.
  • onupdate – A scalar, Python callable, or ClauseElement representing a default value to be applied to the column within UPDATE statements, which will be invoked upon update if this column is not present in the SET clause of the update. This is a shortcut to using ColumnDefault as a positional argument with for_update=True.
  • primary_key – If True, marks this column as a primary key column. Multiple columns can have this flag set to specify composite primary keys. As an alternative, the primary key of a Table can be specified via an explicit PrimaryKeyConstraint object.
  • server_default

    A FetchedValue instance, str, Unicode or text() construct representing the DDL DEFAULT value for the column.

    String types will be emitted as-is, surrounded by single quotes:

    Column('x', Text, server_default="val")
    
    x TEXT DEFAULT 'val'

    A text() expression will be rendered as-is, without quotes:

    Column('y', DateTime, server_default=text('NOW()'))
    
    y DATETIME DEFAULT NOW()

    Strings and text() will be converted into a DefaultClause object upon initialization.

    Use FetchedValue to indicate that an already-existing column will generate a default value on the database side which will be available to SQLAlchemy for post-fetch after inserts. This construct does not specify any DDL and the implementation is left to the database, such as via a trigger.

  • server_onupdate – A FetchedValue instance representing a database-side default generation function. This indicates to SQLAlchemy that a newly generated value will be available after updates. This construct does not specify any DDL and the implementation is left to the database, such as via a trigger.
  • quote – Force quoting of this column’s name on or off, corresponding to True or False. When left at its default of None, the column identifier will be quoted according to whether the name is case sensitive (identifiers with at least one upper case character are treated as case sensitive), or if it’s a reserved word. This flag is only needed to force quoting of a reserved word which is not known by the SQLAlchemy dialect.
  • unique – When True, indicates that this column contains a unique constraint, or if index is True as well, indicates that the Index should be created with the unique flag. To specify multiple columns in the constraint/index or to specify an explicit name, use the UniqueConstraint or Index constructs explicitly.
  • system

    When True, indicates this is a “system” column, that is a column which is automatically made available by the database, and should not be included in the columns list for a CREATE TABLE statement.

    For more elaborate scenarios where columns should be conditionally rendered differently on different backends, consider custom compilation rules for CreateColumn.

    New in version 0.8.3: Added the system=True parameter to Column.
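
As a combined, illustrative sketch of several parameters described above (default, onupdate, server_default, key, index, unique); the article table and its columns are hypothetical:

import datetime

from sqlalchemy import Table, Column, Integer, String, DateTime, MetaData

metadata = MetaData()

article = Table('article', metadata,
    Column('id', Integer, primary_key=True),
    # "default" fires on INSERT when the column is absent from VALUES;
    # "onupdate" fires on UPDATE when the column is absent from SET
    Column('created_at', DateTime, default=datetime.datetime.utcnow),
    Column('updated_at', DateTime, onupdate=datetime.datetime.utcnow),
    # server_default renders a DDL-level DEFAULT 'draft'
    Column('status', String(20), server_default="draft", nullable=False),
    # "key" names the Column on the Python side; "name" is still used in SQL
    Column('article_title', String(200), key='title', index=True),
    Column('slug', String(200), unique=True)
)

# the column defined with key='title' is addressed via that key in Python
print(article.c.title.name)   # prints "article_title"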

__le__(other)
inherited from the __le__() method of ColumnOperators

Implement the <= operator.

In a column context, produces the clause a <= b.

__lt__(other)
inherited from the __lt__() method of ColumnOperators

Implement the < operator.

In a column context, produces the clause a < b.

__ne__(other)
inherited from the __ne__() method of ColumnOperators

Implement the != operator.

In a column context, produces the clause a != b. If the target is None, produces a IS NOT NULL.

anon_label
inherited from the anon_label attribute of ColumnElement

provides a constant ‘anonymous label’ for this ColumnElement.

This is a label() expression which will be named at compile time. The same label() is returned each time anon_label is called so that expressions can reference anon_label multiple times, producing the same label name at compile time.

the compiler uses this function automatically at compile time for expressions that are known to be ‘unnamed’ like binary expressions and function calls.

asc()
inherited from the asc() method of ColumnOperators

Produce an asc() clause against the parent object.

between(cleft, cright, symmetric=False)
inherited from the between() method of ColumnOperators

Produce a between() clause against the parent object, given the lower and upper range.

cast(type_)
inherited from the cast() method of ColumnElement

Produce a type cast, i.e. CAST(<expression> AS <type>).

This is a shortcut to the cast() function.

New in version 1.0.7.

collate(collation)
inherited from the collate() method of ColumnOperators

Produce a collate() clause against the parent object, given the collation string.

compare(other, use_proxies=False, equivalents=None, **kw)
inherited from the compare() method of ColumnElement

Compare this ColumnElement to another.

Special arguments understood:

Parameters:
  • use_proxies – when True, consider two columns that share a common base column as equivalent (i.e. shares_lineage())
  • equivalents – a dictionary of columns as keys mapped to sets of columns. If the given “other” column is present in this dictionary, if any of the columns in the corresponding set() pass the comparison test, the result is True. This is used to expand the comparison to other columns that may be known to be equivalent to this one via foreign key or other criterion.
compile(bind=None, dialect=None, **kw)
inherited from the compile() method of ClauseElement

Compile this SQL expression.

The return value is a Compiled object. Calling str() or unicode() on the returned value will yield a string representation of the result. The Compiled object also can return a dictionary of bind parameter names and values using the params accessor.

Parameters:
  • bind – An Engine or Connection from which a Compiled will be acquired. This argument takes precedence over this ClauseElement‘s bound engine, if any.
  • column_keys – Used for INSERT and UPDATE statements, a list of column names which should be present in the VALUES clause of the compiled statement. If None, all columns from the target table object are rendered.
  • dialect – A Dialect instance from which a Compiled will be acquired. This argument takes precedence over the bind argument as well as this ClauseElement‘s bound engine, if any.
  • inline – Used for INSERT statements, for a dialect which does not support inline retrieval of newly generated primary key columns, will force the expression used to create the new primary key value to be rendered inline within the INSERT statement’s VALUES clause. This typically refers to Sequence execution but may also refer to any server-side default generation function associated with a primary key Column.
  • compile_kwargs

    optional dictionary of additional parameters that will be passed through to the compiler within all “visit” methods. This allows any custom flag to be passed through to a custom compilation construct, for example. It is also used for the case of passing the literal_binds flag through:

    from sqlalchemy.sql import table, column, select
    
    t = table('t', column('x'))
    
    s = select([t]).where(t.c.x == 5)
    
    print(s.compile(compile_kwargs={"literal_binds": True}))

    New in version 0.9.0.

concat(other)
inherited from the concat() method of ColumnOperators

Implement the ‘concat’ operator.

In a column context, produces the clause a || b, or uses the concat() operator on MySQL.

contains(other, **kwargs)
inherited from the contains() method of ColumnOperators

Implement the ‘contains’ operator.

In a column context, produces the clause LIKE '%<other>%'

copy(**kw)

Create a copy of this Column, uninitialized.

This is used in Table.tometadata.

desc()
inherited from the desc() method of ColumnOperators

Produce a desc() clause against the parent object.

distinct()
inherited from the distinct() method of ColumnOperators

Produce a distinct() clause against the parent object.

endswith(other, **kwargs)
inherited from the endswith() method of ColumnOperators

Implement the ‘endswith’ operator.

In a column context, produces the clause LIKE '%<other>'

expression
inherited from the expression attribute of ColumnElement

Return a column expression.

Part of the inspection interface; returns self.

ilike(other, escape=None)
inherited from the ilike() method of ColumnOperators

Implement the ilike operator.

In a column context, produces the clause a ILIKE other.

E.g.:

select([sometable]).where(sometable.c.column.ilike("%foobar%"))
Parameters:
  • other – expression to be compared
  • escape

    optional escape character, renders the ESCAPE keyword, e.g.:

    somecolumn.ilike("foo/%bar", escape="/")
in_(other)
inherited from the in_() method of ColumnOperators

Implement the in operator.

In a column context, produces the clause a IN other. “other” may be a tuple/list of column expressions, or a select() construct.

info
inherited from the info attribute of SchemaItem

Info dictionary associated with the object, allowing user-defined data to be associated with this SchemaItem.

The dictionary is automatically generated when first accessed. It can also be specified in the constructor of some objects, such as Table and Column.

is_(other)
inherited from the is_() method of ColumnOperators

Implement the IS operator.

Normally, IS is generated automatically when comparing to a value of None, which resolves to NULL. However, explicit usage of IS may be desirable if comparing to boolean values on certain platforms.

New in version 0.7.9.

isnot(other)
inherited from the isnot() method of ColumnOperators

Implement the IS NOT operator.

Normally, IS NOT is generated automatically when comparing to a value of None, which resolves to NULL. However, explicit usage of IS NOT may be desirable if comparing to boolean values on certain platforms.

New in version 0.7.9.

label(name)
inherited from the label() method of ColumnElement

Produce a column label, i.e. <columnname> AS <name>.

This is a shortcut to the label() function.

if ‘name’ is None, an anonymous label name will be generated.

like(other, escape=None)
inherited from the like() method of ColumnOperators

Implement the like operator.

In a column context, produces the clause a LIKE other.

E.g.:

select([sometable]).where(sometable.c.column.like("%foobar%"))
Parameters:
  • other – expression to be compared
  • escape

    optional escape character, renders the ESCAPE keyword, e.g.:

    somecolumn.like("foo/%bar", escape="/")
match(other, **kwargs)
inherited from the match() method of ColumnOperators

Implements a database-specific ‘match’ operator.

match() attempts to resolve to a MATCH-like function or operator provided by the backend. Examples include:

  • Postgresql - renders x @@ to_tsquery(y)
  • MySQL - renders MATCH (x) AGAINST (y IN BOOLEAN MODE)
  • Oracle - renders CONTAINS(x, y)
  • other backends may provide special implementations.
  • Backends without any special implementation will emit the operator as “MATCH”. This is compatible with SQLite, for example.
notilike(other, escape=None)
inherited from the notilike() method of ColumnOperators

implement the NOT ILIKE operator.

This is equivalent to using negation with ColumnOperators.ilike(), i.e. ~x.ilike(y).

New in version 0.8.

notin_(other)
inherited from the notin_() method of ColumnOperators

implement the NOT IN operator.

This is equivalent to using negation with ColumnOperators.in_(), i.e. ~x.in_(y).

New in version 0.8.

notlike(other, escape=None)
inherited from the notlike() method of ColumnOperators

implement the NOT LIKE operator.

This is equivalent to using negation with ColumnOperators.like(), i.e. ~x.like(y).

New in version 0.8.

nullsfirst()
inherited from the nullsfirst() method of ColumnOperators

Produce a nullsfirst() clause against the parent object.

nullslast()
inherited from the nullslast() method of ColumnOperators

Produce a nullslast() clause against the parent object.

op(opstring, precedence=0, is_comparison=False)
inherited from the op() method of Operators

produce a generic operator function.

e.g.:

somecolumn.op("*")(5)

produces:

somecolumn * 5

This function can also be used to make bitwise operators explicit. For example:

somecolumn.op('&')(0xff)

is a bitwise AND of the value in somecolumn.

Parameters:
  • opstring – a string which will be output as the infix operator between this element and the expression passed to the generated function.
  • precedence

    precedence to apply to the operator, when parenthesizing expressions. A lower number will cause the expression to be parenthesized when applied against another operator with higher precedence. The default value of 0 is lower than all operators except for the comma (,) and AS operators. A value of 100 will be higher or equal to all operators, and -100 will be lower than or equal to all operators.

    New in version 0.8: - added the ‘precedence’ argument.

  • is_comparison

    if True, the operator will be considered as a “comparison” operator, that is which evaluates to a boolean true/false value, like ==, >, etc. This flag should be set so that ORM relationships can establish that the operator is a comparison operator when used in a custom join condition.

    New in version 0.9.2: - added the Operators.op.is_comparison flag.

quote
inherited from the quote attribute of SchemaItem

Return the value of the quote flag passed to this schema object, for those schema items which have a name field.

Deprecated since version 0.9: Use <obj>.name.quote

references(column)

Return True if this Column references the given column via foreign key.

shares_lineage(othercolumn)
inherited from the shares_lineage() method of ColumnElement

Return True if the given ColumnElement has a common ancestor to this ColumnElement.

startswith(other, **kwargs)
inherited from the startswith() method of ColumnOperators

Implement the startswith operator.

In a column context, produces the clause LIKE '<other>%'

class sqlalchemy.schema.MetaData(bind=None, reflect=False, schema=None, quote_schema=None, naming_convention=immutabledict({'ix': 'ix_%(column_0_label)s'}), info=None)

Bases: sqlalchemy.schema.SchemaItem

A collection of Table objects and their associated schema constructs.

Holds a collection of Table objects as well as an optional binding to an Engine or Connection. If bound, the Table objects in the collection and their columns may participate in implicit SQL execution.

The Table objects themselves are stored in the MetaData.tables dictionary.

MetaData is a thread-safe object for read operations. Construction of new tables within a single MetaData object, either explicitly or via reflection, may not be completely thread-safe.

See also

Describing Databases with MetaData - Introduction to database metadata

__init__(bind=None, reflect=False, schema=None, quote_schema=None, naming_convention=immutabledict({'ix': 'ix_%(column_0_label)s'}), info=None)

Create a new MetaData object.

Parameters:
  • bind – An Engine or Connection to bind to. May also be a string or URL instance, these are passed to create_engine() and this MetaData will be bound to the resulting engine.
  • reflect

    Optional, automatically load all tables from the bound database. Defaults to False. bind is required when this option is set.

    Deprecated since version 0.8: Please use the MetaData.reflect() method.

  • schema – The default schema to use for the Table, Sequence, and other objects associated with this MetaData. Defaults to None.
  • quote_schema – Sets the quote_schema flag for those Table, Sequence, and other objects which make usage of the local schema name.
  • info

    Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.

    New in version 1.0.0.

  • naming_convention

    a dictionary referring to values which will establish default naming conventions for Constraint and Index objects, for those objects which are not given a name explicitly.

    The keys of this dictionary may be:

    • a constraint or Index class, e.g. the UniqueConstraint, ForeignKeyConstraint class, the Index class
    • a string mnemonic for one of the known constraint classes; "fk", "pk", "ix", "ck", "uq" for foreign key, primary key, index, check, and unique constraint, respectively.
    • the string name of a user-defined “token” that can be used to define new naming tokens.

    The values associated with each “constraint class” or “constraint mnemonic” key are string naming templates, such as "uq_%(table_name)s_%(column_0_name)s", which describe how the name should be composed. The values associated with user-defined “token” keys should be callables of the form fn(constraint, table), which accepts the constraint/index object and Table as arguments, returning a string result.

    The built-in names are as follows, some of which may only be available for certain types of constraint:

    • %(table_name)s - the name of the Table object associated with the constraint.
    • %(referred_table_name)s - the name of the Table object associated with the referencing target of a ForeignKeyConstraint.
    • %(column_0_name)s - the name of the Column at index position “0” within the constraint.
    • %(column_0_label)s - the label of the Column at index position “0”, e.g. Column.label
    • %(column_0_key)s - the key of the Column at index position “0”, e.g. Column.key
    • %(referred_column_0_name)s - the name of a Column at index position “0” referenced by a ForeignKeyConstraint.
    • %(constraint_name)s - a special key that refers to the existing name given to the constraint. When this key is present, the Constraint object’s existing name will be replaced with one that is composed from the template string that uses this token. When this token is present, it is required that the Constraint is given an explicit name ahead of time.
    • user-defined: any additional token may be implemented by passing it along with a fn(constraint, table) callable to the naming_convention dictionary.

    New in version 0.9.2.

    See also

    Configuring Constraint Naming Conventions - for detailed usage examples.
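
    As an illustrative sketch, a convention dictionary keyed by the mnemonics above; the templates shown are examples rather than requirements:

    convention = {
        "ix": "ix_%(column_0_label)s",
        "uq": "uq_%(table_name)s_%(column_0_name)s",
        "ck": "ck_%(table_name)s_%(constraint_name)s",
        "fk": "fk_%(table_name)s_%(column_0_name)s_%(referred_table_name)s",
        "pk": "pk_%(table_name)s"
    }

    metadata = MetaData(naming_convention=convention)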

append_ddl_listener(event_name, listener)

Append a DDL event listener to this MetaData.

Deprecated since version 0.7: See DDLEvents.

bind

An Engine or Connection to which this MetaData is bound.

Typically, an Engine is assigned to this attribute so that “implicit execution” may be used, or alternatively as a means of providing engine binding information to an ORM Session object:

engine = create_engine("someurl://")
metadata.bind = engine

See also

Connectionless Execution, Implicit Execution - background on “bound metadata”

clear()

Clear all Table objects from this MetaData.

create_all(bind=None, tables=None, checkfirst=True)

Create all tables stored in this metadata.

Conditional by default, will not attempt to recreate tables already present in the target database.

Parameters:
  • bind – A Connectable used to access the database; if None, uses the existing bind on this MetaData, if any.
  • tables – Optional list of Table objects, which is a subset of the total tables in the MetaData (others are ignored).
  • checkfirst – Defaults to True, don’t issue CREATEs for tables already present in the target database.
drop_all(bind=None, tables=None, checkfirst=True)

Drop all tables stored in this metadata.

Conditional by default, will not attempt to drop tables not present in the target database.

Parameters:
  • bind – A Connectable used to access the database; if None, uses the existing bind on this MetaData, if any.
  • tables – Optional list of Table objects, which is a subset of the total tables in the MetaData (others are ignored).
  • checkfirst – Defaults to True, only issue DROPs for tables confirmed to be present in the target database.
is_bound()

True if this MetaData is bound to an Engine or Connection.

reflect(bind=None, schema=None, views=False, only=None, extend_existing=False, autoload_replace=True, **dialect_kwargs)

Load all available table definitions from the database.

Automatically creates Table entries in this MetaData for any table available in the database but not yet present in the MetaData. May be called multiple times to pick up tables recently added to the database, however no special action is taken if a table in this MetaData no longer exists in the database.

Parameters:
  • bind – A Connectable used to access the database; if None, uses the existing bind on this MetaData, if any.
  • schema – Optional, query and reflect tables from an alternate schema. If None, the schema associated with this MetaData is used, if any.
  • views – If True, also reflect views.
  • only

    Optional. Load only a sub-set of available named tables. May be specified as a sequence of names or a callable.

    If a sequence of names is provided, only those tables will be reflected. An error is raised if a table is requested but not available. Named tables already present in this MetaData are ignored.

    If a callable is provided, it will be used as a boolean predicate to filter the list of potential table names. The callable is called with a table name and this MetaData instance as positional arguments and should return a true value for any table to reflect.

  • extend_existing

    Passed along to each Table as Table.extend_existing.

    New in version 0.9.1.

  • autoload_replace

    Passed along to each Table as Table.autoload_replace.

    New in version 0.9.1.

  • **dialect_kwargs

    Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.

    New in version 0.9.2: - Added MetaData.reflect.**dialect_kwargs to support dialect-level reflection options for all Table objects reflected.
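
As an illustrative sketch, reflecting a filtered subset of tables using a callable predicate for only; the engine URL and table names are hypothetical:

from sqlalchemy import create_engine, MetaData

engine = create_engine("sqlite:///existing.db")   # hypothetical existing database
meta = MetaData()

# reflect everything except tables whose names start with "tmp_"
meta.reflect(bind=engine, only=lambda name, m: not name.startswith("tmp_"))

# reflected tables are now available in the .tables dictionary
user_table = meta.tables.get("user")   # None if no such table was reflected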

remove(table)

Remove the given Table object from this MetaData.

sorted_tables

Returns a list of Table objects sorted in order of foreign key dependency.

The sorting will place Table objects that are depended upon first, before the tables which depend on them, representing the order in which they can be created. To get the order in which the tables would be dropped, use the reversed() Python built-in.

Warning

The sorted_tables accessor cannot by itself accommodate automatic resolution of dependency cycles between tables, which are usually caused by mutually dependent foreign key constraints. To resolve these cycles, either the ForeignKeyConstraint.use_alter parameter may be applied to those constraints, or the schema.sort_tables_and_constraints() function may be used, which will break out foreign key constraints involved in cycles separately.

tables = None

A dictionary of Table objects keyed to their name or “table key”.

The exact key is that determined by the Table.key attribute; for a table with no Table.schema attribute, this is the same as Table.name. For a table with a schema, it is typically of the form schemaname.tablename.

class sqlalchemy.schema.SchemaItem

Bases: sqlalchemy.sql.expression.SchemaEventTarget, sqlalchemy.sql.visitors.Visitable

Base class for items that define a database schema.

get_children(**kwargs)

used to allow SchemaVisitor access

info

Info dictionary associated with the object, allowing user-defined data to be associated with this SchemaItem.

The dictionary is automatically generated when first accessed. It can also be specified in the constructor of some objects, such as Table and Column.

quote

Return the value of the quote flag passed to this schema object, for those schema items which have a name field.

Deprecated since version 0.9: Use <obj>.name.quote

class sqlalchemy.schema.Table(*args, **kw)

Bases: sqlalchemy.sql.base.DialectKWArgs, sqlalchemy.schema.SchemaItem, sqlalchemy.sql.expression.TableClause

Represent a table in a database.

e.g.:

mytable = Table("mytable", metadata,
                Column('mytable_id', Integer, primary_key=True),
                Column('value', String(50))
           )

The Table object constructs a unique instance of itself based on its name and optional schema name within the given MetaData object. Calling the Table constructor with the same name and same MetaData argument a second time will return the same Table object - in this way the Table constructor acts as a registry function.

See also

Describing Databases with MetaData - Introduction to database metadata

Constructor arguments are as follows:

Parameters:
  • name

    The name of this table as represented in the database.

    The table name, along with the value of the schema parameter, forms a key which uniquely identifies this Table within the owning MetaData collection. Additional calls to Table with the same name, metadata, and schema name will return the same Table object.

    Names which contain no upper case characters will be treated as case insensitive names, and will not be quoted unless they are a reserved word or contain special characters. A name with any number of upper case characters is considered to be case sensitive, and will be sent as quoted.

    To enable unconditional quoting for the table name, specify the flag quote=True to the constructor, or use the quoted_name construct to specify the name.

  • metadata – a MetaData object which will contain this table. The metadata is used as a point of association of this table with other tables which are referenced via foreign key. It also may be used to associate this table with a particular Connectable.
  • *args – Additional positional arguments are used primarily to add the list of Column objects contained within this table. Similar to the style of a CREATE TABLE statement, other SchemaItem constructs may be added here, including PrimaryKeyConstraint, and ForeignKeyConstraint.
  • autoload

    Defaults to False, unless Table.autoload_with is set in which case it defaults to True; Column objects for this table should be reflected from the database, possibly augmenting or replacing existing Column objects that were explicitly specified.

    Changed in version 1.0.0: setting the Table.autoload_with parameter implies that Table.autoload will default to True.
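
    A reflection sketch using autoload together with autoload_with appears after this parameter list.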

  • autoload_replace

    Defaults to True; when using Table.autoload in conjunction with Table.extend_existing, indicates that Column objects present in the already-existing Table object should be replaced with columns of the same name retrieved from the autoload process. When False, columns already present under existing names will be omitted from the reflection process.

    Note that this setting does not impact Column objects specified programmatically within the call to Table that also is autoloading; those Column objects will always replace existing columns of the same name when Table.extend_existing is True.

    New in version 0.7.5.

  • autoload_with

    An Engine or Connection object with which this Table object will be reflected; when set to a non-None value, it implies that Table.autoload is True. If left unset, but Table.autoload is explicitly set to True, an autoload operation will attempt to proceed by locating an Engine or Connection bound to the underlying MetaData object.

    See also

    Table.autoload

  • extend_existing

    When True, indicates that if this Table is already present in the given MetaData, apply further arguments within the constructor to the existing Table.

    If Table.extend_existing or Table.keep_existing are not set, and the given name of the new Table refers to a Table that is already present in the target MetaData collection, and this Table specifies additional columns or other constructs or flags that modify the table’s state, an error is raised. The purpose of these two mutually-exclusive flags is to specify what action should be taken when a Table is specified that matches an existing Table, yet specifies additional constructs.

    Table.extend_existing will also work in conjunction with Table.autoload to run a new reflection operation against the database, even if a Table of the same name is already present in the target MetaData; newly reflected Column objects and other options will be added into the state of the Table, potentially overwriting existing columns and options of the same name.

    Changed in version 0.7.4: Table.extend_existing will invoke a new reflection operation when combined with Table.autoload set to True.

    As is always the case with Table.autoload, Column objects can be specified in the same Table constructor, which will take precedence. Below, the existing table mytable will be augmented with Column objects both reflected from the database, as well as the given Column named “y”:

    Table("mytable", metadata,
                Column('y', Integer),
                extend_existing=True,
                autoload=True,
                autoload_with=engine
            )
  • implicit_returning – True by default - indicates that RETURNING can be used by default to fetch newly inserted primary key values, for backends which support this. Note that create_engine() also provides an implicit_returning flag.
  • include_columns – A list of strings indicating a subset of columns to be loaded via the autoload operation; table columns that aren’t present in this list will not be represented on the resulting Table object. Defaults to None which indicates all columns should be reflected.
  • info – Optional data dictionary which will be populated into the SchemaItem.info attribute of this object.
  • keep_existing

    When True, indicates that if this Table is already present in the given MetaData, ignore further arguments within the constructor to the existing Table, and return the Table object as originally created. This is to allow a function that wishes to define a new Table on first call, but on subsequent calls will return the same Table, without any of the declarations (particularly constraints) being applied a second time.

    If Table.extend_existing or Table.keep_existing are not set, and the given name of the new Table refers to a Table that is already present in the target MetaData collection, and this Table specifies additional columns or other constructs or flags that modify the table’s state, an error is raised. The purpose of these two mutually-exclusive flags is to specify what action should be taken when a Table is specified that matches an existing Table, yet specifies additional constructs.

  • listeners

    A list of tuples of the form (<eventname>, <fn>) which will be passed to event.listen() upon construction. This alternate hook to event.listen() allows the establishment of a listener function specific to this Table before the “autoload” process begins. Particularly useful for the DDLEvents.column_reflect() event:

    def listen_for_reflect(table, column_info):
        "handle the column reflection event"
        # ...
    
    t = Table(
        'sometable',
        metadata,
        autoload=True,
        listeners=[
            ('column_reflect', listen_for_reflect)
        ])
  • mustexist – When True, indicates that this Table must already be present in the given MetaData collection, else an exception is raised.
  • prefixes – A list of strings to insert after CREATE in the CREATE TABLE statement. They will be separated by spaces.
  • quote – Force quoting of this table’s name on or off, corresponding to True or False. When left at its default of None, the column identifier will be quoted according to whether the name is case sensitive (identifiers with at least one upper case character are treated as case sensitive), or if it’s a reserved word. This flag is only needed to force quoting of a reserved word which is not known by the SQLAlchemy dialect.
  • quote_schema – same as ‘quote’ but applies to the schema identifier.
  • schema

    The schema name for this table, which is required if the table resides in a schema other than the default selected schema for the engine’s database connection. Defaults to None.

    The quoting rules for the schema name are the same as those for the name parameter, in that quoting is applied for reserved words or case-sensitive names; to enable unconditional quoting for the schema name, specify the flag quote_schema=True to the constructor, or use the quoted_name construct to specify the name.

  • useexisting – Deprecated. Use Table.extend_existing.
  • **kw – Additional keyword arguments not mentioned above are dialect specific, and passed in the form <dialectname>_<argname>. See the documentation regarding an individual dialect at Dialects for detail on documented arguments.
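
As an illustrative sketch of the reflection-oriented parameters above (autoload, autoload_with, include_columns), assuming a database that already contains a messages table:

from sqlalchemy import create_engine, MetaData, Table

engine = create_engine("sqlite:///existing.db")   # hypothetical existing database
meta = MetaData()

# build the Table from database reflection, keeping only two of its columns
messages = Table('messages', meta,
                 autoload=True,
                 autoload_with=engine,
                 include_columns=['id', 'body'])
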
__init__(*args, **kw)

Constructor for Table.

This method is a no-op. See the top-level documentation for Table for constructor arguments.

add_is_dependent_on(table)

Add a ‘dependency’ for this Table.

This is another Table object which must be created first, before this one can be created, and which should be dropped after this one is dropped.

Usually, dependencies between tables are determined via ForeignKey objects. However, for other situations that create dependencies outside of foreign keys (rules, inheriting), this method can manually establish such a link.

alias(name=None, flat=False)
inherited from the alias() method of FromClause

return an alias of this FromClause.

This is shorthand for calling:

from sqlalchemy import alias
a = alias(self, name=name)

See alias() for details.

append_column(column)

Append a Column to this Table.

The “key” of the newly added Column, i.e. the value of its .key attribute, will then be available in the .c collection of this Table, and the column definition will be included in any CREATE TABLE, SELECT, UPDATE, etc. statements generated from this Table construct.

Note that this does not change the definition of the table as it exists within any underlying database, assuming that table has already been created in the database. Relational databases support the addition of columns to existing tables using the SQL ALTER command, which would need to be emitted for an already-existing table that doesn’t contain the newly added column.
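
A brief sketch, assuming the employees table defined earlier in this section; the start_date column is hypothetical:

from sqlalchemy import Column, DateTime

# appends to the in-Python definition only; an existing database table
# would still require an ALTER TABLE to gain this column
employees.append_column(Column('start_date', DateTime))
print('start_date' in employees.c)   # True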

append_constraint(constraint)

Append a Constraint to this Table.

This has the effect of the constraint being included in any future CREATE TABLE statement, assuming specific DDL creation events have not been associated with the given Constraint object.

Note that this does not produce the constraint within the relational database automatically, for a table that already exists in the database. To add a constraint to an existing relational database table, the SQL ALTER command must be used. SQLAlchemy also provides the AddConstraint construct which can produce this SQL when invoked as an executable clause.
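
A brief, illustrative sketch; the accounts table and the constraint name are hypothetical:

from sqlalchemy import Table, Column, Integer, String, MetaData, UniqueConstraint
from sqlalchemy.schema import AddConstraint

meta = MetaData()
accounts = Table('accounts', meta,
    Column('id', Integer, primary_key=True),
    Column('login', String(50), nullable=False)
)

uq = UniqueConstraint('login', name='uq_accounts_login')
accounts.append_constraint(uq)

# for a table that already exists in the database, the ALTER can be
# emitted explicitly, e.g.:
# engine.execute(AddConstraint(uq))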

append_ddl_listener(event_name, listener)

Append a DDL event listener to this Table.

Deprecated since version 0.7: See DDLEvents.

argument_for(dialect_name, argument_name, default)
inherited from the argument_for() method of DialectKWArgs

Add a new kind of dialect-specific keyword argument for this class.

E.g.:

Index.argument_for("mydialect", "length", None)

some_index = Index('a', 'b', mydialect_length=5)

The DialectKWArgs.argument_for() method is a per-argument way of adding extra arguments to the DefaultDialect.construct_arguments dictionary. This dictionary provides a list of argument names accepted by various schema-level constructs on behalf of a dialect.

New dialects should typically specify this dictionary all at once as a data member of the dialect class. The use case for ad-hoc addition of argument names is typically for end-user code that is also using a custom compilation scheme which consumes the additional arguments.

Parameters:
  • dialect_name – name of a dialect. The dialect must be locatable, else a NoSuchModuleError is raised. The dialect must also include an existing DefaultDialect.construct_arguments collection, indicating that it participates in the keyword-argument validation and default system, else ArgumentError is raised. If the dialect does not include this collection, then any keyword argument can be specified on behalf of this dialect already. All dialects packaged within SQLAlchemy include this collection, however for third party dialects, support may vary.
  • argument_name – name of the parameter.
  • default – default value of the parameter.

New in version 0.9.4.

bind

Return the connectable associated with this Table.

c
inherited from the c attribute of FromClause

An alias for the columns attribute.

columns
inherited from the columns attribute of FromClause

A name-based collection of ColumnElement objects maintained by this FromClause.

The columns, or c collection, is the gateway to the construction of SQL expressions using table-bound or other selectable-bound columns:

select([mytable]).where(mytable.c.somecolumn == 5)
compare(other, **kw)
inherited from the compare() method of ClauseElement

Compare this ClauseElement to the given ClauseElement.

Subclasses should override the default behavior, which is a straight identity comparison.

**kw are arguments consumed by subclass compare() methods and may be used to modify the criteria for comparison. (see ColumnElement)

compile(bind=None, dialect=None, **kw)
inherited from the compile() method of ClauseElement

Compile this SQL expression.

The return value is a Compiled object. Calling str() or unicode() on the returned value will yield a string representation of the result. The Compiled object also can return a dictionary of bind parameter names and values using the params accessor.

Parameters:
  • bind – An Engine or Connection from which a Compiled will be acquired. This argument takes precedence over this ClauseElement‘s bound engine, if any.
  • column_keys – Used for INSERT and UPDATE statements, a list of column names which should be present in the VALUES clause of the compiled statement. If None, all columns from the target table object are rendered.
  • dialect – A Dialect instance from which a Compiled will be acquired. This argument takes precedence over the bind argument as well as this ClauseElement‘s bound engine, if any.
  • inline – Used for INSERT statements, for a dialect which does not support inline retrieval of newly generated primary key columns, will force the expression used to create the new primary key value to be rendered inline within the INSERT statement’s VALUES clause. This typically refers to Sequence execution but may also refer to any server-side default generation function associated with a primary key Column.
  • compile_kwargs

    optional dictionary of additional parameters that will be passed through to the compiler within all “visit” methods. This allows any custom flag to be passed through to a custom compilation construct, for example. It is also used for the case of passing the literal_binds flag through:

    from sqlalchemy.sql import table, column, select
    
    t = table('t', column('x'))
    
    s = select([t]).where(t.c.x == 5)
    
    print(s.compile(compile_kwargs={"literal_binds": True}))

    New in version 0.9.0.

correspond_on_equivalents(column, equivalents)
inherited from the correspond_on_equivalents() method of FromClause

Return corresponding_column for the given column, or if None search for a match in the given dictionary.

corresponding_column(column, require_embedded=False)
inherited from the corresponding_column() method of FromClause

Given a ColumnElement, return the exported ColumnElement object from this Selectable which corresponds to that original Column via a common ancestor column.

Parameters:
  • column – the target ColumnElement to be matched
  • require_embedded – only return corresponding columns for the given ColumnElement, if the given ColumnElement is actually present within a sub-element of this FromClause. Normally the column will match if it merely shares a common ancestor with one of the exported columns of this FromClause.
count(whereclause=None, **params)
inherited from the count() method of TableClause

return a SELECT COUNT generated against this TableClause.

create(bind=None, checkfirst=False)

Issue a CREATE statement for this Table, using the given Connectable for connectivity.

delete(whereclause=None, **kwargs)
inherited from the delete() method of TableClause

Generate a delete() construct against this TableClause.

E.g.:

table.delete().where(table.c.id==7)

See delete() for argument and usage information.

dialect_kwargs
inherited from the dialect_kwargs attribute of DialectKWArgs

A collection of keyword arguments specified as dialect-specific options to this construct.

The arguments are present here in their original <dialect>_<kwarg> format. Only arguments that were actually passed are included; unlike the DialectKWArgs.dialect_options collection, which contains all options known by this dialect including defaults.

The collection is also writable; keys are accepted of the form <dialect>_<kwarg> where the value will be assembled into the list of options.

New in version 0.9.2.

Changed in version 0.9.4: The DialectKWArgs.dialect_kwargs collection is now writable.

See also

DialectKWArgs.dialect_options - nested dictionary form

dialect_options
inherited from the dialect_options attribute of DialectKWArgs

A collection of keyword arguments specified as dialect-specific options to this construct.

This is a two-level nested registry, keyed to <dialect_name> and <argument_name>. For example, the postgresql_where argument would be locatable as:

arg = my_object.dialect_options['postgresql']['where']

New in version 0.9.2.

See also

DialectKWArgs.dialect_kwargs - flat dictionary form

drop(bind=None, checkfirst=False)

Issue a DROP statement for this Table, using the given Connectable for connectivity.
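
Continuing the hedged example from create() above:

# emits DROP TABLE; with checkfirst=True the statement is skipped
# if the table is not present
employees.drop(engine, checkfirst=True)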

exists(bind=None)

Return True if this table exists.
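
A short sketch, reusing the hypothetical engine from the create() example:

if not employees.exists(bind=engine):
    employees.create(bind=engine)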

foreign_key_constraints

ForeignKeyConstraint objects referred to by this Table.

This list is produced from the collection of ForeignKey objects currently associated with this Table.
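
As a brief illustration against the employees table shown earlier, whose single ForeignKey yields one constraint:

for constraint in employees.foreign_key_constraints:
    print constraint.column_keys   # e.g. ['employee_dept']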

New in version 1.0.0.

foreign_keys
inherited from the foreign_keys attribute of FromClause

Return the collection of ForeignKey objects which this FromClause references.

info
inherited from the info attribute of SchemaItem

Info dictionary associated with the object, allowing user-defined data to be associated with this SchemaItem.

The dictionary is automatically generated when first accessed. It can also be specified in the constructor of some objects, such as Table and Column.
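
A short sketch; the 'owner' key below is purely a hypothetical application-level value:

employees.info['owner'] = 'hr'

# the dictionary may also be supplied up front in the constructor
audit = Table('audit', metadata,
    Column('id', Integer, primary_key=True),
    info={'owner': 'hr'})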

insert(values=None, inline=False, **kwargs)
inherited from the insert() method of TableClause

Generate an insert() construct against this TableClause.

E.g.:

table.insert().values(name='foo')

See insert() for argument and usage information.

is_derived_from(fromclause)
inherited from the is_derived_from() method of FromClause

Return True if this FromClause is ‘derived’ from the given FromClause.

For example, an Alias of a Table is derived from that Table.
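
A minimal sketch using an alias of the employees table shown earlier:

emp_alias = employees.alias()

emp_alias.is_derived_from(employees)   # True
employees.is_derived_from(emp_alias)   # False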

join(right, onclause=None, isouter=False)
inherited from the join() method of FromClause

Return a Join from this FromClause to another FromClause.

E.g.:

from sqlalchemy import join

j = user_table.join(address_table,
                user_table.c.id == address_table.c.user_id)
stmt = select([user_table]).select_from(j)

would emit SQL along the lines of:

SELECT user.id, user.name FROM user
JOIN address ON user.id = address.user_id
Parameters:
  • right – the right side of the join; this is any FromClause object such as a Table object, and may also be a selectable-compatible object such as an ORM-mapped class.
  • onclause – a SQL expression representing the ON clause of the join. If left at None, FromClause.join() will attempt to join the two tables based on a foreign key relationship.
  • isouter – if True, render a LEFT OUTER JOIN, instead of JOIN.

See also

join() - standalone function

Join - the type of object produced

key

Return the ‘key’ for this Table.

This value is used as the dictionary key within the MetaData.tables collection. It is typically the same as that of Table.name for a table with no Table.schema set; otherwise it is typically of the form schemaname.tablename.
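
A small sketch; 'analytics' is a hypothetical schema name:

m = MetaData()

t = Table('user', m,
    Column('id', Integer, primary_key=True),
    schema='analytics')

t.key            # 'analytics.user'
m.tables[t.key]  # returns the same Table object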

kwargs
inherited from the kwargs attribute of DialectKWArgs

A synonym for DialectKWArgs.dialect_kwargs.

outerjoin(right, onclause=None)
inherited from the outerjoin() method of FromClause

Return a Join from this FromClause to another FromClause, with the “isouter” flag set to True.

E.g.:

from sqlalchemy import outerjoin

j = user_table.outerjoin(address_table,
                user_table.c.id == address_table.c.user_id)

The above is equivalent to:

j = user_table.join(
    address_table,
    user_table.c.id == address_table.c.user_id,
    isouter=True)
Parameters:
  • right – the right side of the join; this is any FromClause object such as a Table object, and may also be a selectable-compatible object such as an ORM-mapped class.
  • onclause – a SQL expression representing the ON clause of the join. If left at None, FromClause.join() will attempt to join the two tables based on a foreign key relationship.
primary_key
inherited from the primary_key attribute of FromClause

Return the collection of Column objects which comprise the primary key of this FromClause.
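
For example, with the employees table shown earlier:

[col.name for col in employees.primary_key]   # ['employee_id']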

quote
inherited from the quote attribute of SchemaItem

Return the value of the quote flag passed to this schema object, for those schema items which have a name field.

Deprecated since version 0.9: Use <obj>.name.quote

quote_schema

Return the value of the quote_schema flag passed to this Table.

Deprecated since version 0.9: Use table.schema.quote

replace_selectable(old, alias)
inherited from the replace_selectable() method of FromClause

Replace all occurrences of FromClause ‘old’ with the given Alias object, returning a copy of this FromClause.

select(whereclause=None, **params)
inherited from the select() method of FromClause

Return a SELECT of this FromClause.

See also

select() - general purpose method which allows for arbitrary column lists.
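
A hedged example against the employees table shown earlier; 'dilbert' is a placeholder value:

stmt = employees.select(employees.c.employee_name == 'dilbert')
result = conn.execute(stmt)   # assuming an existing Connection named conn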

self_group(against=None)
inherited from the self_group() method of ClauseElement

Apply a ‘grouping’ to this ClauseElement.

This method is overridden by subclasses to return a “grouping” construct, i.e. parentheses. In particular it’s used by “binary” expressions to provide a grouping around themselves when placed into a larger expression, as well as by select() constructs when placed into the FROM clause of another select(). (Note that subqueries should normally be created using the Select.alias() method, as many platforms require nested SELECT statements to be named).

As expressions are composed together, the application of self_group() is automatic - end-user code should never need to use this method directly. Note that SQLAlchemy’s clause constructs take operator precedence into account - so parentheses might not be needed, for example, in an expression like x OR (y AND z) - AND takes precedence over OR.

The base self_group() method of ClauseElement just returns self.

tometadata(metadata, schema=symbol('retain_schema'), referred_schema_fn=None, name=None)

Return a copy of this Table associated with a different MetaData.

E.g.:

m1 = MetaData()

user = Table('user', m1, Column('id', Integer, primary_key=True))

m2 = MetaData()
user_copy = user.tometadata(m2)
Parameters:
  • metadata – Target MetaData object, into which the new Table object will be created.
  • schema

    optional string name indicating the target schema. Defaults to the special symbol RETAIN_SCHEMA which indicates that no change to the schema name should be made in the new Table. If set to a string name, the new Table will have this new name as the .schema. If set to None, the schema will be set to that of the schema set on the target MetaData, which is typically None as well, unless set explicitly:

    m2 = MetaData(schema='newschema')
    
    # user_copy_one will have "newschema" as the schema name
    user_copy_one = user.tometadata(m2, schema=None)
    
    m3 = MetaData()  # schema defaults to None
    
    # user_copy_two will have None as the schema name
    user_copy_two = user.tometadata(m3, schema=None)
  • referred_schema_fn

    optional callable which can be supplied in order to provide for the schema name that should be assigned to the referenced table of a ForeignKeyConstraint. The callable accepts this parent Table, the target schema that we are changing to, the ForeignKeyConstraint object, and the existing “target schema” of that constraint. The function should return the string schema name that should be applied. E.g.:

    def referred_schema_fn(table, to_schema,
                                    constraint, referred_schema):
        if referred_schema == 'base_tables':
            return referred_schema
        else:
            return to_schema
    
    new_table = table.tometadata(m2, schema="alt_schema",
                            referred_schema_fn=referred_schema_fn)

    New in version 0.9.2.

  • name

    optional string name indicating the target table name. If not specified or None, the table name is retained. This allows a Table to be copied to the same MetaData target with a new name.

    New in version 1.0.0.

update(whereclause=None, values=None, inline=False, **kwargs)
inherited from the update() method of TableClause

Generate an update() construct against this TableClause.

E.g.:

table.update().where(table.c.id==7).values(name='foo')

See update() for argument and usage information.

class sqlalchemy.schema.ThreadLocalMetaData

Bases: sqlalchemy.schema.MetaData

A MetaData variant that presents a different bind in every thread.

Makes the bind property of the MetaData a thread-local value, allowing this collection of tables to be bound to different Engine implementations or connections in each thread.

The ThreadLocalMetaData starts off bound to None in each thread. Binds must be made explicitly by assigning to the bind property or using connect(). You can also re-bind dynamically multiple times per thread, just like a regular MetaData.

__init__()

Construct a ThreadLocalMetaData.

bind

The bound Engine or Connection for this thread.

This property may be assigned an Engine or Connection, or assigned a string or URL to automatically create a basic Engine for this bind with create_engine().

dispose()

Dispose all bound engines, in all thread contexts.

is_bound()

True if there is a bind for this thread.
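
A minimal, hedged sketch showing the per-thread nature of the bind; the URL is a placeholder:

import threading

from sqlalchemy.schema import ThreadLocalMetaData

meta = ThreadLocalMetaData()

def bind_for_worker():
    # a string URL is turned into an Engine bound to this thread only
    meta.bind = 'sqlite://'
    assert meta.is_bound()

t = threading.Thread(target=bind_for_worker)
t.start()
t.join()

meta.is_bound()   # False - the main thread has not assigned a bind
meta.dispose()    # disposes engines created in all threads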