Release: 1.0.12 | Release Date: February 15, 2016

SQLAlchemy 1.0 Documentation

PostgreSQL

Support for the PostgreSQL database.

DBAPI Support

The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.

  • psycopg2
  • pg8000
  • psycopg2cffi
  • py-postgresql
  • zxjdbc

Sequences/SERIAL

PostgreSQL supports sequences, and SQLAlchemy uses these as the default means of creating new primary key values for integer-based primary key columns. When creating tables, SQLAlchemy will issue the SERIAL datatype for integer-based primary key columns, which generates a sequence and server side default corresponding to the column.

To specify a specific named sequence to be used for primary key generation, use the Sequence() construct:

Table('sometable', metadata,
        Column('id', Integer, Sequence('some_id_seq'), primary_key=True)
    )

When SQLAlchemy issues a single INSERT statement, to fulfill the contract of having the “last insert identifier” available, a RETURNING clause is added to the INSERT statement which specifies that the primary key columns should be returned after the statement completes. The RETURNING functionality only takes place if Postgresql 8.2 or later is in use. As a fallback approach, the sequence, whether specified explicitly or implicitly via SERIAL, is executed independently beforehand, and the returned value is used in the subsequent INSERT. Note that when an insert() construct is executed using “executemany” semantics, the “last inserted identifier” functionality does not apply; no RETURNING clause is emitted nor is the sequence pre-executed in this case.

To disable the usage of RETURNING by default, specify the flag implicit_returning=False to create_engine().
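
For example, a minimal sketch disabling implicit RETURNING engine-wide (the connection URL shown is illustrative):

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    implicit_returning=False
)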

Transaction Isolation Level

All Postgresql dialects support setting of transaction isolation level both via the dialect-specific create_engine.isolation_level parameter accepted by create_engine(), and via the isolation_level argument passed to Connection.execution_options(). When using a non-psycopg2 dialect, this feature works by issuing the command SET SESSION CHARACTERISTICS AS TRANSACTION ISOLATION LEVEL <level> for each new connection.

To set isolation level using create_engine():

engine = create_engine(
    "postgresql+pg8000://scott:tiger@localhost/test",
    isolation_level="READ UNCOMMITTED"
)

To set using per-connection execution options:

connection = engine.connect()
connection = connection.execution_options(
    isolation_level="READ COMMITTED"
)

Valid values for isolation_level include:

  • READ COMMITTED
  • READ UNCOMMITTED
  • REPEATABLE READ
  • SERIALIZABLE

The psycopg2 and pg8000 dialects also offer the special level AUTOCOMMIT.
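
For example, a brief sketch of requesting AUTOCOMMIT behavior for an entire engine (the connection URL shown is illustrative):

autocommit_engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    isolation_level="AUTOCOMMIT"
)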

Remote-Schema Table Introspection and Postgresql search_path

The Postgresql dialect can reflect tables from any schema. The Table.schema argument, or alternatively the MetaData.reflect.schema argument determines which schema will be searched for the table or tables. The reflected Table objects will in all cases retain this .schema attribute as was specified. However, with regards to tables which these Table objects refer to via foreign key constraint, a decision must be made as to how the .schema is represented in those remote tables, in the case where that remote schema name is also a member of the current Postgresql search path.

By default, the Postgresql dialect mimics the behavior encouraged by Postgresql’s own pg_get_constraintdef() builtin procedure. This function returns a sample definition for a particular foreign key constraint, omitting the referenced schema name from that definition when the name is also in the Postgresql schema search path. The interaction below illustrates this behavior:

test=> CREATE TABLE test_schema.referred(id INTEGER PRIMARY KEY);
CREATE TABLE
test=> CREATE TABLE referring(
test(>         id INTEGER PRIMARY KEY,
test(>         referred_id INTEGER REFERENCES test_schema.referred(id));
CREATE TABLE
test=> SET search_path TO public, test_schema;
test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r  ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f'
test-> ;
               pg_get_constraintdef
---------------------------------------------------
 FOREIGN KEY (referred_id) REFERENCES referred(id)
(1 row)

Above, we created a table referred as a member of the remote schema test_schema, however when we added test_schema to the PG search_path and then asked pg_get_constraintdef() for the FOREIGN KEY syntax, test_schema was not included in the output of the function.

On the other hand, if we set the search path back to the typical default of public:

test=> SET search_path TO public;
SET

The same query against pg_get_constraintdef() now returns the fully schema-qualified name for us:

test=> SELECT pg_catalog.pg_get_constraintdef(r.oid, true) FROM
test-> pg_catalog.pg_class c JOIN pg_catalog.pg_namespace n
test-> ON n.oid = c.relnamespace
test-> JOIN pg_catalog.pg_constraint r  ON c.oid = r.conrelid
test-> WHERE c.relname='referring' AND r.contype = 'f';
                     pg_get_constraintdef
---------------------------------------------------------------
 FOREIGN KEY (referred_id) REFERENCES test_schema.referred(id)
(1 row)

SQLAlchemy will by default use the return value of pg_get_constraintdef() in order to determine the remote schema name. That is, if our search_path were set to include test_schema, and we invoked a table reflection process as follows:

>>> from sqlalchemy import Table, MetaData, create_engine
>>> engine = create_engine("postgresql://scott:tiger@localhost/test")
>>> with engine.connect() as conn:
...     conn.execute("SET search_path TO test_schema, public")
...     meta = MetaData()
...     referring = Table('referring', meta,
...                       autoload=True, autoload_with=conn)
...
<sqlalchemy.engine.result.ResultProxy object at 0x101612ed0>

The above process would deliver to the MetaData.tables collection the referred table named without the schema:

>>> meta.tables['referred'].schema is None
True

To alter the behavior of reflection such that the referred schema is maintained regardless of the search_path setting, use the postgresql_ignore_search_path option, which can be specified as a dialect-specific argument to both Table as well as MetaData.reflect():

>>> with engine.connect() as conn:
...     conn.execute("SET search_path TO test_schema, public")
...     meta = MetaData()
...     referring = Table('referring', meta, autoload=True,
...                       autoload_with=conn,
...                       postgresql_ignore_search_path=True)
...
<sqlalchemy.engine.result.ResultProxy object at 0x1016126d0>

We will now have test_schema.referred stored as schema-qualified:

>>> meta.tables['test_schema.referred'].schema
'test_schema'

Note that in all cases, the “default” schema is always reflected as None. The “default” schema on Postgresql is that which is returned by the Postgresql current_schema() function. On a typical Postgresql installation, this is the name public. So a table that refers to another which is in the public (i.e. default) schema will always have the .schema attribute set to None.

New in version 0.9.2: Added the postgresql_ignore_search_path dialect-level option accepted by Table and MetaData.reflect().

See also

The Schema Search Path - on the Postgresql website.

INSERT/UPDATE...RETURNING

The dialect supports PG 8.2’s INSERT..RETURNING, UPDATE..RETURNING and DELETE..RETURNING syntaxes. INSERT..RETURNING is used by default for single-row INSERT statements in order to fetch newly generated primary key identifiers. To specify an explicit RETURNING clause, use the UpdateBase.returning() method on a per-statement basis:

# INSERT..RETURNING
result = table.insert().returning(table.c.col1, table.c.col2).\
    values(name='foo')
print result.fetchall()

# UPDATE..RETURNING
result = table.update().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo').values(name='bar')
print result.fetchall()

# DELETE..RETURNING
result = table.delete().returning(table.c.col1, table.c.col2).\
    where(table.c.name=='foo')
print result.fetchall()

FROM ONLY ...

The dialect supports PostgreSQL’s ONLY keyword for targeting only a particular table in an inheritance hierarchy. This can be used to produce the SELECT ... FROM ONLY, UPDATE ONLY ..., and DELETE FROM ONLY ... syntaxes. It uses SQLAlchemy’s hints mechanism:

# SELECT ... FROM ONLY ...
result = table.select().with_hint(table, 'ONLY', 'postgresql')
print result.fetchall()

# UPDATE ONLY ...
table.update(values=dict(foo='bar')).with_hint('ONLY',
                                               dialect_name='postgresql')

# DELETE FROM ONLY ...
table.delete().with_hint('ONLY', dialect_name='postgresql')

Postgresql-Specific Index Options

Several extensions to the Index construct are available, specific to the PostgreSQL dialect.

Partial Indexes

Partial indexes add criterion to the index definition so that the index is applied to a subset of rows. These can be specified on Index using the postgresql_where keyword argument:

Index('my_index', my_table.c.id, postgresql_where=tbl.c.value > 10)

Operator Classes

PostgreSQL allows the specification of an operator class for each column of an index (see http://www.postgresql.org/docs/8.3/interactive/indexes-opclass.html). The Index construct allows these to be specified via the postgresql_ops keyword argument:

Index('my_index', my_table.c.id, my_table.c.data,
                        postgresql_ops={
                            'data': 'text_pattern_ops',
                            'id': 'int4_ops'
                        })

New in version 0.7.2: postgresql_ops keyword argument to Index construct.

Note that the keys in the postgresql_ops dictionary are the “key” name of the Column, i.e. the name used to access it from the .c collection of Table, which can be configured to be different than the actual name of the column as expressed in the database.
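
As an illustrative sketch (the table and column names here are hypothetical), a column whose database name differs from its key is referenced by that key within postgresql_ops:

my_table = Table('some_table', metadata,
        Column('id', Integer, primary_key=True),
        Column('Text Data', String, key='text_data')
    )

Index('my_text_index', my_table.c.text_data,
        postgresql_ops={'text_data': 'text_pattern_ops'})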

Index Types

PostgreSQL provides several index types: B-Tree, Hash, GiST, and GIN, as well as the ability for users to create their own (see http://www.postgresql.org/docs/8.3/static/indexes-types.html). These can be specified on Index using the postgresql_using keyword argument:

Index('my_index', my_table.c.data, postgresql_using='gin')

The value passed to the keyword argument will be simply passed through to the underlying CREATE INDEX command, so it must be a valid index type for your version of PostgreSQL.

Index Storage Parameters

PostgreSQL allows storage parameters to be set on indexes. The storage parameters available depend on the index method used by the index. Storage parameters can be specified on Index using the postgresql_with keyword argument:

Index('my_index', my_table.c.data, postgresql_with={"fillfactor": 50})

New in version 1.0.6.

Indexes with CONCURRENTLY

The Postgresql index option CONCURRENTLY is supported by passing the flag postgresql_concurrently to the Index construct:

tbl = Table('testtbl', m, Column('data', Integer))

idx1 = Index('test_idx1', tbl.c.data, postgresql_concurrently=True)

The above index construct will render SQL as:

CREATE INDEX CONCURRENTLY test_idx1 ON testtbl (data)

New in version 0.9.9.

Postgresql Index Reflection

The Postgresql database creates a UNIQUE INDEX implicitly whenever the UNIQUE CONSTRAINT construct is used. When inspecting a table using Inspector, the Inspector.get_indexes() and the Inspector.get_unique_constraints() will report on these two constructs distinctly; in the case of the index, the key duplicates_constraint will be present in the index entry if it is detected as mirroring a constraint. When performing reflection using Table(..., autoload=True), the UNIQUE INDEX is not returned in Table.indexes when it is detected as mirroring a UniqueConstraint in the Table.constraints collection.
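
As a brief sketch, assuming a table named 'example_table' that carries a UNIQUE CONSTRAINT (the table name is hypothetical), the two inspection methods can be compared side by side:

from sqlalchemy import create_engine, inspect

engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test")
insp = inspect(engine)

for index in insp.get_indexes('example_table'):
    # an index mirroring a unique constraint includes 'duplicates_constraint'
    print(index['name'], index.get('duplicates_constraint'))

for uq in insp.get_unique_constraints('example_table'):
    print(uq['name'], uq['column_names'])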

Changed in version 1.0.0: - Table reflection now includes UniqueConstraint objects present in the Table.constraints collection; the Postgresql backend will no longer include a “mirrored” Index construct in Table.indexes if it is detected as corresponding to a unique constraint.

Special Reflection Options

The Inspector used for the Postgresql backend is an instance of PGInspector, which offers additional methods:

from sqlalchemy import create_engine, inspect

engine = create_engine("postgresql+psycopg2://localhost/test")
insp = inspect(engine)  # will be a PGInspector

print(insp.get_enums())
class sqlalchemy.dialects.postgresql.base.PGInspector(conn)

Bases: sqlalchemy.engine.reflection.Inspector

get_enums(schema=None)

Return a list of ENUM objects.

Each member is a dictionary containing these fields:

  • name - name of the enum
  • schema - the schema name for the enum.
  • visible - boolean, whether or not this enum is visible in the default search path.
  • labels - a list of string labels that apply to the enum.
Parameters:schema – schema name. If None, the default schema (typically ‘public’) is used. May also be set to ‘*’ to indicate load enums for all schemas.

New in version 1.0.0.

get_foreign_table_names(schema=None)

Return a list of FOREIGN TABLE names.

Behavior is similar to that of Inspector.get_table_names(), except that the list is limited to those tables that report a relkind value of f.

New in version 1.0.0.

get_table_oid(table_name, schema=None)

Return the OID for the given table name.
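
Continuing the inspection example above, a brief sketch of these additional methods (the table and schema names are hypothetical):

print(insp.get_foreign_table_names(schema='public'))
print(insp.get_table_oid('referring', schema='public'))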

PostgreSQL Table Options

Several options for CREATE TABLE are supported directly by the PostgreSQL dialect in conjunction with the Table construct:

  • TABLESPACE:

    Table("some_table", metadata, ..., postgresql_tablespace='some_tablespace')
  • ON COMMIT:

    Table("some_table", metadata, ..., postgresql_on_commit='PRESERVE ROWS')
  • WITH OIDS:

    Table("some_table", metadata, ..., postgresql_with_oids=True)
  • WITHOUT OIDS:

    Table("some_table", metadata, ..., postgresql_with_oids=False)
  • INHERITS:

    Table("some_table", metadata, ..., postgresql_inherits="some_supertable")
    
    Table("some_table", metadata, ..., postgresql_inherits=("t1", "t2", ...))

New in version 1.0.0.

ENUM Types

Postgresql has an independently creatable TYPE structure which is used to implement an enumerated type. This approach introduces significant complexity on the SQLAlchemy side in terms of when this type should be CREATED and DROPPED. The type object is also an independently reflectable entity. The postgresql.ENUM class documentation below, as well as the following section, Using ENUM with ARRAY, should be consulted.

Using ENUM with ARRAY

The combination of ENUM and ARRAY is not directly supported by backend DBAPIs at this time. In order to send and receive an ARRAY of ENUM, use the following workaround type:

import re

import sqlalchemy as sa
from sqlalchemy.dialects.postgresql import ARRAY

class ArrayOfEnum(ARRAY):

    def bind_expression(self, bindvalue):
        return sa.cast(bindvalue, self)

    def result_processor(self, dialect, coltype):
        super_rp = super(ArrayOfEnum, self).result_processor(
            dialect, coltype)

        def handle_raw_string(value):
            inner = re.match(r"^{(.*)}$", value).group(1)
            return inner.split(",") if inner else []

        def process(value):
            if value is None:
                return None
            return super_rp(handle_raw_string(value))
        return process

E.g.:

Table(
    'mydata', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', ArrayOfEnum(ENUM('a', 'b', 'c', name='myenum')))
)

This type is not included as a built-in type as it would be incompatible with a DBAPI that suddenly decides to support ARRAY of ENUM directly in a new version.

PostgreSQL Data Types

As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with Postgresql are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:

from sqlalchemy.dialects.postgresql import \
    ARRAY, BIGINT, BIT, BOOLEAN, BYTEA, CHAR, CIDR, DATE, \
    DOUBLE_PRECISION, ENUM, FLOAT, HSTORE, INET, INTEGER, \
    INTERVAL, JSON, JSONB, MACADDR, NUMERIC, OID, REAL, SMALLINT, TEXT, \
    TIME, TIMESTAMP, UUID, VARCHAR, INT4RANGE, INT8RANGE, NUMRANGE, \
    DATERANGE, TSRANGE, TSTZRANGE, TSVECTOR

Types which are specific to PostgreSQL, or have PostgreSQL-specific construction arguments, are as follows:

class sqlalchemy.dialects.postgresql.array(clauses, **kw)

Bases: sqlalchemy.sql.expression.Tuple

A Postgresql ARRAY literal.

This is used to produce ARRAY literals in SQL expressions, e.g.:

from sqlalchemy.dialects.postgresql import array
from sqlalchemy.dialects import postgresql
from sqlalchemy import select, func

stmt = select([
                array([1,2]) + array([3,4,5])
            ])

print stmt.compile(dialect=postgresql.dialect())

Produces the SQL:

SELECT ARRAY[%(param_1)s, %(param_2)s] ||
    ARRAY[%(param_3)s, %(param_4)s, %(param_5)s] AS anon_1

An instance of array will always have the datatype ARRAY. The “inner” type of the array is inferred from the values present, unless the type_ keyword argument is passed:

array(['foo', 'bar'], type_=CHAR)

New in version 0.8: Added the array literal type.

See also:

postgresql.ARRAY

class sqlalchemy.dialects.postgresql.ARRAY(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

Bases: sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine

Postgresql ARRAY type.

Represents values as Python lists.

An ARRAY type is constructed given the “type” of element:

mytable = Table("mytable", metadata,
        Column("data", ARRAY(Integer))
    )

The above type represents an N-dimensional array, meaning Postgresql will interpret values with any number of dimensions automatically. To produce an INSERT construct that passes in a 1-dimensional array of integers:

connection.execute(
        mytable.insert(),
        data=[1,2,3]
)

The ARRAY type can be constructed given a fixed number of dimensions:

mytable = Table("mytable", metadata,
        Column("data", ARRAY(Integer, dimensions=2))
    )

This has the effect of the ARRAY type specifying that number of bracketed blocks when a Table is used in a CREATE TABLE statement, or when the type is used within an expression.cast() construct; it also causes the bind parameter and result set processing of the type to optimize itself to expect exactly that number of dimensions. Note that Postgresql itself still allows N dimensions with such a type.

SQL expressions of type ARRAY have support for “index” and “slice” behavior. The Python [] operator works normally here, given integer indexes or slices. Note that Postgresql arrays default to 1-based indexing. The operator produces binary expression constructs which will produce the appropriate SQL, both for SELECT statements:

select([mytable.c.data[5], mytable.c.data[2:7]])

as well as UPDATE statements when the Update.values() method is used:

mytable.update().values({
    mytable.c.data[5]: 7,
    mytable.c.data[2:7]: [1, 2, 3]
})

Note

Multi-dimensional support for the [] operator is not supported in SQLAlchemy 1.0. Please use the type_coerce() function to cast an intermediary expression to ARRAY again as a workaround:

expr = type_coerce(my_array_column[5], ARRAY(Integer))[6]

Multi-dimensional support will be provided in a future release.

ARRAY provides special methods for containment operations, e.g.:

mytable.c.data.contains([1, 2])

For a full list of special methods see ARRAY.Comparator.

New in version 0.8: Added support for index and slice operations to the ARRAY type, including support for UPDATE statements, and special array containment operations.

The ARRAY type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000.

Additionally, the ARRAY type does not work directly in conjunction with the ENUM type. For a workaround, see the special type at Using ENUM with ARRAY.

See also:

postgresql.array - produce a literal array value.

class Comparator(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for ARRAY.

all(other, operator=<built-in function eq>)

Return other operator ALL (array) clause.

Argument places are switched, because ALL requires array expression to be on the right hand-side.

E.g.:

from sqlalchemy.sql import operators

conn.execute(
    select([table.c.data]).where(
            table.c.data.all(7, operator=operators.lt)
        )
)
Parameters:
  • other – expression to be compared
  • operator – an operator object from the sqlalchemy.sql.operators package, defaults to operators.eq().
any(other, operator=<built-in function eq>)

Return other operator ANY (array) clause.

Argument places are switched, because ANY requires array expression to be on the right hand-side.

E.g.:

from sqlalchemy.sql import operators

conn.execute(
    select([table.c.data]).where(
            table.c.data.any(7, operator=operators.lt)
        )
)
Parameters:
  • other – expression to be compared
  • operator – an operator object from the sqlalchemy.sql.operators package, defaults to operators.eq().
contained_by(other)

Boolean expression. Test if elements are a proper subset of the elements of the argument array expression.

contains(other, **kwargs)

Boolean expression. Test if elements are a superset of the elements of the argument array expression.

overlap(other)

Boolean expression. Test if array has elements in common with an argument array expression.
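
A brief sketch of these containment operators, assuming the mytable illustrated earlier with a column data of type ARRAY(Integer):

select([mytable.c.data]).where(mytable.c.data.contains([1, 2]))

select([mytable.c.data]).where(mytable.c.data.contained_by([1, 2, 3, 4]))

select([mytable.c.data]).where(mytable.c.data.overlap([1, 7]))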

ARRAY.__init__(item_type, as_tuple=False, dimensions=None, zero_indexes=False)

Construct an ARRAY.

E.g.:

Column('myarray', ARRAY(Integer))

Arguments are:

Parameters:
  • item_type – The data type of items of this array. Note that dimensionality is irrelevant here, so multi-dimensional arrays like INTEGER[][], are constructed as ARRAY(Integer), not as ARRAY(ARRAY(Integer)) or such.
  • as_tuple=False – Specify whether return results should be converted to tuples from lists. DBAPIs such as psycopg2 return lists by default. When tuples are returned, the results are hashable.
  • dimensions – if non-None, the ARRAY will assume a fixed number of dimensions. This will cause the DDL emitted for this ARRAY to include the exact number of bracket clauses [], and will also optimize the performance of the type overall. Note that PG arrays are always implicitly “non-dimensioned”, meaning they can store any number of dimensions no matter how they were declared.
  • zero_indexes=False

    when True, index values will be converted between Python zero-based and Postgresql one-based indexes, e.g. a value of one will be added to all index values before passing to the database.

    New in version 0.9.5.

class sqlalchemy.dialects.postgresql.Any(left, right, operator=<built-in function eq>)

Bases: sqlalchemy.sql.expression.ColumnElement

Represent the clause left operator ANY (right). right must be an array expression.

See also

postgresql.ARRAY

postgresql.ARRAY.Comparator.any() - ARRAY-bound method

class sqlalchemy.dialects.postgresql.All(left, right, operator=<built-in function eq>)

Bases: sqlalchemy.sql.expression.ColumnElement

Represent the clause left operator ALL (right). right must be an array expression.

See also

postgresql.ARRAY

postgresql.ARRAY.Comparator.all() - ARRAY-bound method

class sqlalchemy.dialects.postgresql.BIT(length=None, varying=False)

Bases: sqlalchemy.types.TypeEngine
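
A brief sketch of the two constructor arguments (the column names are hypothetical):

Column('flags', BIT(8))                     # renders BIT(8)
Column('var_flags', BIT(16, varying=True))  # renders BIT VARYING(16)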

class sqlalchemy.dialects.postgresql.BYTEA(length=None)

Bases: sqlalchemy.types.LargeBinary

__init__(length=None)
inherited from the __init__() method of LargeBinary

Construct a LargeBinary type.

Parameters:length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.
class sqlalchemy.dialects.postgresql.CIDR

Bases: sqlalchemy.types.TypeEngine

class sqlalchemy.dialects.postgresql.DOUBLE_PRECISION(precision=None, asdecimal=False, decimal_return_scale=None, **kwargs)

Bases: sqlalchemy.types.Float

__init__(precision=None, asdecimal=False, decimal_return_scale=None, **kwargs)
inherited from the __init__() method of Float

Construct a Float.

Parameters:
  • precision – the numeric precision for use in DDL CREATE TABLE.
  • asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
  • decimal_return_scale

    Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include “scale”, will use “scale” as the default for decimal_return_scale, if not otherwise specified.

    New in version 0.9.0.

  • **kwargs – deprecated. Additional arguments here are ignored by the default Float type. For database specific floats that support additional arguments, see that dialect’s documentation for details, such as sqlalchemy.dialects.mysql.FLOAT.
class sqlalchemy.dialects.postgresql.ENUM(*enums, **kw)

Bases: sqlalchemy.types.Enum

Postgresql ENUM type.

This is a subclass of types.Enum which includes support for PG’s CREATE TYPE and DROP TYPE.

When the builtin type types.Enum is used and the Enum.native_enum flag is left at its default of True, the Postgresql backend will use a postgresql.ENUM type as the implementation, so the special create/drop rules will be used.

The create/drop behavior of ENUM is necessarily intricate, due to the awkward relationship the ENUM type has to its parent table, in that it may be “owned” by just a single table, or may be shared among many tables.

When using types.Enum or postgresql.ENUM in an “inline” fashion, the CREATE TYPE and DROP TYPE is emitted corresponding to when the Table.create() and Table.drop() methods are called:

table = Table('sometable', metadata,
    Column('some_enum', ENUM('a', 'b', 'c', name='myenum'))
)

table.create(engine)  # will emit CREATE ENUM and CREATE TABLE
table.drop(engine)  # will emit DROP TABLE and DROP ENUM

To use a common enumerated type between multiple tables, the best practice is to declare the types.Enum or postgresql.ENUM independently, and associate it with the MetaData object itself:

my_enum = ENUM('a', 'b', 'c', name='myenum', metadata=metadata)

t1 = Table('sometable_one', metadata,
    Column('some_enum', my_enum)
)

t2 = Table('sometable_two', metadata,
    Column('some_enum', my_enum)
)

When this pattern is used, care must still be taken at the level of individual table creates. Emitting CREATE TABLE without also specifying checkfirst=True will still cause issues:

t1.create(engine) # will fail: no such type 'myenum'

If we specify checkfirst=True, the individual table-level create operation will check for the ENUM and create if not exists:

# will check if enum exists, and emit CREATE TYPE if not
t1.create(engine, checkfirst=True)

When using a metadata-level ENUM type, the type will always be created and dropped if either the metadata-wide create/drop is called:

metadata.create_all(engine)  # will emit CREATE TYPE
metadata.drop_all(engine)  # will emit DROP TYPE

The type can also be created and dropped directly:

my_enum.create(engine)
my_enum.drop(engine)

Changed in version 1.0.0: The Postgresql postgresql.ENUM type now behaves more strictly with regards to CREATE/DROP. A metadata-level ENUM type will only be created and dropped at the metadata level, not the table level, with the exception of table.create(checkfirst=True). The table.drop() call will now emit a DROP TYPE for a table-level enumerated type.

__init__(*enums, **kw)

Construct an ENUM.

Arguments are the same as that of types.Enum, but also including the following parameters.

Parameters:create_type

Defaults to True. Indicates that CREATE TYPE should be emitted, after optionally checking for the presence of the type, when the parent table is being created; and additionally that DROP TYPE is called when the table is dropped. When False, no check will be performed and no CREATE TYPE or DROP TYPE is emitted, unless create() or drop() are called directly. Setting to False is helpful when invoking a creation scheme to a SQL file without access to the actual database - the create() and drop() methods can be used to emit SQL to a target bind.

New in version 0.7.4.
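
A brief sketch of create_type=False, for example when the DDL for the type is to be emitted separately from the table DDL:

my_enum = ENUM('a', 'b', 'c', name='myenum', create_type=False)

# CREATE TYPE / DROP TYPE are then emitted only via explicit calls
my_enum.create(engine, checkfirst=True)
my_enum.drop(engine, checkfirst=True)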

create(bind=None, checkfirst=True)

Emit CREATE TYPE for this ENUM.

If the underlying dialect does not support Postgresql CREATE TYPE, no action is taken.

Parameters:
  • bind – a connectable Engine, Connection, or similar object to emit SQL.
  • checkfirst – if True, a query against the PG catalog will first be performed to see whether the type already exists before creating it.
drop(bind=None, checkfirst=True)

Emit DROP TYPE for this ENUM.

If the underlying dialect does not support Postgresql DROP TYPE, no action is taken.

Parameters:
  • bind – a connectable Engine, Connection, or similar object to emit SQL.
  • checkfirst – if True, a query against the PG catalog will be first performed to see if the type actually exists before dropping.
class sqlalchemy.dialects.postgresql.HSTORE

Bases: sqlalchemy.types.Concatenable, sqlalchemy.types.TypeEngine

Represent the Postgresql HSTORE type.

The HSTORE type stores dictionaries containing strings, e.g.:

data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', HSTORE)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data = {"key1": "value1", "key2": "value2"}
    )

HSTORE provides for a wide range of operations, including:

  • Index operations:

    data_table.c.data['some key'] == 'some value'
  • Containment operations:

    data_table.c.data.has_key('some key')
    
    data_table.c.data.has_all(['one', 'two', 'three'])
  • Concatenation:

    data_table.c.data + {"k1": "v1"}

For a full list of special methods see HSTORE.comparator_factory.

For usage with the SQLAlchemy ORM, it may be desirable to combine the usage of HSTORE with the MutableDict dictionary provided by the sqlalchemy.ext.mutable extension. This extension will allow “in-place” changes to the dictionary, e.g. addition of new keys or replacement/removal of existing keys to/from the current dictionary, to produce events which will be detected by the unit of work:

from sqlalchemy.ext.mutable import MutableDict

class MyClass(Base):
    __tablename__ = 'data_table'

    id = Column(Integer, primary_key=True)
    data = Column(MutableDict.as_mutable(HSTORE))

my_object = session.query(MyClass).one()

# in-place mutation, requires Mutable extension
# in order for the ORM to detect
my_object.data['some_key'] = 'some value'

session.commit()

When the sqlalchemy.ext.mutable extension is not used, the ORM will not be alerted to any changes to the contents of an existing dictionary, unless that dictionary value is re-assigned to the HSTORE-attribute itself, thus generating a change event.

New in version 0.8.

See also

hstore - render the Postgresql hstore() function.

class comparator_factory(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for HSTORE.

array()

Text array expression. Returns array of alternating keys and values.

contained_by(other)

Boolean expression. Test if keys are a proper subset of the keys of the argument hstore expression.

contains(other, **kwargs)

Boolean expression. Test if keys are a superset of the keys of the argument hstore expression.

defined(key)

Boolean expression. Test for presence of a non-NULL value for the key. Note that the key may be a SQLA expression.

delete(key)

HStore expression. Returns the contents of this hstore with the given key deleted. Note that the key may be a SQLA expression.

has_all(other)

Boolean expression. Test for presence of all keys in the PG array.

has_any(other)

Boolean expression. Test for presence of any key in the PG array.

has_key(other)

Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.

keys()

Text array expression. Returns array of keys.

matrix()

Text array expression. Returns array of [key, value] pairs.

slice(array)

HStore expression. Returns a subset of an hstore defined by array of keys.

vals()

Text array expression. Returns array of values.
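
A brief sketch of a few of these methods, assuming the data_table from the example above:

select([data_table.c.data.keys()]).where(
    data_table.c.data.defined('key1')
)

select([data_table.c.data.delete('key2')]).where(
    data_table.c.data.has_all(['key1', 'key2'])
)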

class sqlalchemy.dialects.postgresql.hstore(*args, **kwargs)

Bases: sqlalchemy.sql.functions.GenericFunction

Construct an hstore value within a SQL expression using the Postgresql hstore() function.

The hstore function accepts one or two arguments as described in the Postgresql documentation.

E.g.:

from sqlalchemy.dialects.postgresql import array, hstore

select([hstore('key1', 'value1')])

select([
        hstore(
            array(['key1', 'key2', 'key3']),
            array(['value1', 'value2', 'value3'])
        )
    ])

New in version 0.8.

See also

HSTORE - the Postgresql HSTORE datatype.

type

alias of HSTORE

class sqlalchemy.dialects.postgresql.INET

Bases: sqlalchemy.types.TypeEngine

__init__
inherited from the __init__ attribute of object

x.__init__(...) initializes x; see help(type(x)) for signature

class sqlalchemy.dialects.postgresql.INTERVAL(precision=None)

Bases: sqlalchemy.types.TypeEngine

Postgresql INTERVAL type.

The INTERVAL type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000 or zxjdbc.

class sqlalchemy.dialects.postgresql.JSON(none_as_null=False)

Bases: sqlalchemy.types.TypeEngine

Represent the Postgresql JSON type.

The JSON type stores arbitrary JSON format data, e.g.:

data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', JSON)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data = {"key1": "value1", "key2": "value2"}
    )

JSON provides several operations:

  • Index operations:

    data_table.c.data['some key']
  • Index operations returning text (required for text comparison):

    data_table.c.data['some key'].astext == 'some value'
  • Index operations with a built-in CAST call:

    data_table.c.data['some key'].cast(Integer) == 5
  • Path index operations:

    data_table.c.data[('key_1', 'key_2', ..., 'key_n')]
  • Path index operations returning text (required for text comparison):

    data_table.c.data[('key_1', 'key_2', ..., 'key_n')].astext == \
        'some value'

Index operations return an instance of JSONElement, which represents an expression such as column -> index. This element then defines methods such as JSONElement.astext and JSONElement.cast() for setting up type behavior.

The JSON type, when used with the SQLAlchemy ORM, does not detect in-place mutations to the structure. In order to detect these, the sqlalchemy.ext.mutable extension must be used. This extension will allow “in-place” changes to the datastructure to produce events which will be detected by the unit of work. See the example at HSTORE for a simple example involving a dictionary.

Custom serializers and deserializers are specified at the dialect level, that is using create_engine(). The reason for this is that when using psycopg2, the DBAPI only allows serializers at the per-cursor or per-connection level. E.g.:

engine = create_engine("postgresql://scott:tiger@localhost/test",
                        json_serializer=my_serialize_fn,
                        json_deserializer=my_deserialize_fn
                )

When using the psycopg2 dialect, the json_deserializer is registered against the database using psycopg2.extras.register_default_json.

New in version 0.9.

__init__(none_as_null=False)

Construct a JSON type.

Parameters:none_as_null

if True, persist the value None as a SQL NULL value, not the JSON encoding of null. Note that when this flag is False, the null() construct can still be used to persist a NULL value:

from sqlalchemy import null
conn.execute(table.insert(), data=null())

Changed in version 0.9.8: - Added none_as_null, and null() is now supported in order to persist a NULL value.

class comparator_factory(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for JSON.

class sqlalchemy.dialects.postgresql.JSONB(none_as_null=False)

Bases: sqlalchemy.dialects.postgresql.json.JSON

Represent the Postgresql JSONB type.

The JSONB type stores arbitrary JSONB format data, e.g.:

data_table = Table('data_table', metadata,
    Column('id', Integer, primary_key=True),
    Column('data', JSONB)
)

with engine.connect() as conn:
    conn.execute(
        data_table.insert(),
        data = {"key1": "value1", "key2": "value2"}
    )

JSONB provides several operations:

  • Index operations:

    data_table.c.data['some key']
  • Index operations returning text (required for text comparison):

    data_table.c.data['some key'].astext == 'some value'
  • Index operations with a built-in CAST call:

    data_table.c.data['some key'].cast(Integer) == 5
  • Path index operations:

    data_table.c.data[('key_1', 'key_2', ..., 'key_n')]
  • Path index operations returning text (required for text comparison):

    data_table.c.data[('key_1', 'key_2', ..., 'key_n')].astext == \
        'some value'

Index operations return an instance of JSONElement, which represents an expression such as column -> index. This element then defines methods such as JSONElement.astext and JSONElement.cast() for setting up type behavior.

The JSON type, when used with the SQLAlchemy ORM, does not detect in-place mutations to the structure. In order to detect these, the sqlalchemy.ext.mutable extension must be used. This extension will allow “in-place” changes to the datastructure to produce events which will be detected by the unit of work. See the example at HSTORE for a simple example involving a dictionary.

Custom serializers and deserializers are specified at the dialect level, that is using create_engine(). The reason for this is that when using psycopg2, the DBAPI only allows serializers at the per-cursor or per-connection level. E.g.:

engine = create_engine("postgresql://scott:tiger@localhost/test",
                        json_serializer=my_serialize_fn,
                        json_deserializer=my_deserialize_fn
                )

When using the psycopg2 dialect, the json_deserializer is registered against the database using psycopg2.extras.register_default_json.

New in version 0.9.7.

class comparator_factory(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for JSON.

contained_by(other)

Boolean expression. Test if keys are a proper subset of the keys of the argument jsonb expression.

contains(other, **kwargs)

Boolean expression. Test if keys (or array elements) are a superset of / contain the keys of the argument jsonb expression.

has_all(other)

Boolean expression. Test for presence of all keys in jsonb

has_any(other)

Boolean expression. Test for presence of any key in jsonb

has_key(other)

Boolean expression. Test for presence of a key. Note that the key may be a SQLA expression.
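
A brief sketch of these operators, assuming the data_table from the example above with a JSONB data column:

select([data_table.c.data]).where(
    data_table.c.data.contains({"key1": "value1"})
)

select([data_table.c.data]).where(
    data_table.c.data.has_key('key2')
)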

class sqlalchemy.dialects.postgresql.JSONElement(left, right, astext=False, opstring=None, result_type=None)

Bases: sqlalchemy.sql.expression.BinaryExpression

Represents accessing an element of a JSON value.

The JSONElement is produced whenever using the Python index operator on an expression that has the type JSON:

expr = mytable.c.json_data['some_key']

The expression typically compiles to a JSON access such as col -> key. Modifiers are then available for typing behavior, including JSONElement.cast() and JSONElement.astext.

astext

Convert this JSONElement to use the ‘astext’ operator when evaluated.

E.g.:

select([data_table.c.data['some key'].astext])
cast(type_)

Convert this JSONElement to apply both the ‘astext’ operator as well as an explicit type cast when evaluated.

E.g.:

select([data_table.c.data['some key'].cast(Integer)])
class sqlalchemy.dialects.postgresql.MACADDR

Bases: sqlalchemy.types.TypeEngine

__init__
inherited from the __init__ attribute of object

x.__init__(...) initializes x; see help(type(x)) for signature

class sqlalchemy.dialects.postgresql.OID

Bases: sqlalchemy.types.TypeEngine

Provide the Postgresql OID type.

New in version 0.9.5.

__init__
inherited from the __init__ attribute of object

x.__init__(...) initializes x; see help(type(x)) for signature

class sqlalchemy.dialects.postgresql.REAL(precision=None, asdecimal=False, decimal_return_scale=None, **kwargs)

Bases: sqlalchemy.types.Float

The SQL REAL type.

__init__(precision=None, asdecimal=False, decimal_return_scale=None, **kwargs)
inherited from the __init__() method of Float

Construct a Float.

Parameters:
  • precision – the numeric precision for use in DDL CREATE TABLE.
  • asdecimal – the same flag as that of Numeric, but defaults to False. Note that setting this flag to True results in floating point conversion.
  • decimal_return_scale

    Default scale to use when converting from floats to Python decimals. Floating point values will typically be much longer due to decimal inaccuracy, and most floating point database types don’t have a notion of “scale”, so by default the float type looks for the first ten decimal places when converting. Specifying this value will override that length. Note that the MySQL float types, which do include “scale”, will use “scale” as the default for decimal_return_scale, if not otherwise specified.

    New in version 0.9.0.

  • **kwargs – deprecated. Additional arguments here are ignored by the default Float type. For database specific floats that support additional arguments, see that dialect’s documentation for details, such as sqlalchemy.dialects.mysql.FLOAT.
class sqlalchemy.dialects.postgresql.TSVECTOR

Bases: sqlalchemy.types.TypeEngine

The postgresql.TSVECTOR type implements the Postgresql text search type TSVECTOR.

It can be used to do full text queries on natural language documents.

New in version 0.9.0.

See also

Full Text Search

__init__
inherited from the __init__ attribute of object

x.__init__(...) initializes x; see help(type(x)) for signature

class sqlalchemy.dialects.postgresql.UUID(as_uuid=False)

Bases: sqlalchemy.types.TypeEngine

Postgresql UUID type.

Represents the UUID column type, interpreting data either as natively returned by the DBAPI or as Python uuid objects.

The UUID type may not be supported on all DBAPIs. It is known to work on psycopg2 and not pg8000.

__init__(as_uuid=False)

Construct a UUID type.

Parameters:as_uuid=False – if True, values will be interpreted as Python uuid objects, converting to/from string via the DBAPI.
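
A brief sketch of as_uuid=True, using the standard library uuid module to generate default values:

import uuid

Column('guid', UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)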

Range Types

The new range column types found in PostgreSQL 9.2 onwards are catered for by the following types:

class sqlalchemy.dialects.postgresql.INT4RANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the Postgresql INT4RANGE type.

New in version 0.8.2.

class sqlalchemy.dialects.postgresql.INT8RANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the Postgresql INT8RANGE type.

New in version 0.8.2.

class sqlalchemy.dialects.postgresql.NUMRANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the Postgresql NUMRANGE type.

New in version 0.8.2.

class sqlalchemy.dialects.postgresql.DATERANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the Postgresql DATERANGE type.

New in version 0.8.2.

class sqlalchemy.dialects.postgresql.TSRANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the Postgresql TSRANGE type.

New in version 0.8.2.

class sqlalchemy.dialects.postgresql.TSTZRANGE

Bases: sqlalchemy.dialects.postgresql.ranges.RangeOperators, sqlalchemy.types.TypeEngine

Represent the Postgresql TSTZRANGE type.

New in version 0.8.2.

The types above get most of their functionality from the following mixin:

class sqlalchemy.dialects.postgresql.ranges.RangeOperators

This mixin provides functionality for the Range Operators listed in Table 9-44 of the postgres documentation for Range Functions and Operators. It is used by all the range types provided in the postgres dialect and can likely be used for any range types you create yourself.

No extra support is provided for the Range Functions listed in Table 9-45 of the postgres documentation. For these, the normal func() object should be used.

New in version 0.8.2: Support for Postgresql RANGE operations.

class comparator_factory(expr)

Bases: sqlalchemy.types.Comparator

Define comparison operations for range types.

__ne__(other)

Boolean expression. Returns true if two ranges are not equal

adjacent_to(other)

Boolean expression. Returns true if the range in the column is adjacent to the range in the operand.

contained_by(other)

Boolean expression. Returns true if the column is contained within the right hand operand.

contains(other, **kw)

Boolean expression. Returns true if the right hand operand, which can be an element or a range, is contained within the column.

not_extend_left_of(other)

Boolean expression. Returns true if the range in the column does not extend left of the range in the operand.

not_extend_right_of(other)

Boolean expression. Returns true if the range in the column does not extend right of the range in the operand.

overlaps(other)

Boolean expression. Returns true if the column overlaps (has points in common with) the right hand operand.

strictly_left_of(other)

Boolean expression. Returns true if the column is strictly left of the right hand operand.

strictly_right_of(other)

Boolean expression. Returns true if the column is strictly right of the right hand operand.
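
A brief sketch of a few of these operators, assuming a hypothetical table bookings with an INT4RANGE column named period (psycopg2 2.5 or later provides NumericRange for the bound value, as discussed in the warning below):

from psycopg2.extras import NumericRange

select([bookings.c.period]).where(bookings.c.period.contains(5))

select([bookings.c.period]).where(
    bookings.c.period.overlaps(NumericRange(1, 10))
)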

Warning

The range type DDL support should work with any Postgres DBAPI driver, however the data types returned may vary. If you are using psycopg2, it’s recommended to upgrade to version 2.5 or later before using these column types.

When instantiating models that use these column types, you should pass whatever data type is expected by the DBAPI driver you’re using for the column type. For psycopg2 these are NumericRange, DateRange, DateTimeRange and DateTimeTZRange or the class you’ve registered with register_range().

For example:

from datetime import datetime

from psycopg2.extras import DateTimeRange

from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import TSRANGE
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()  # declarative base assumed by this example

class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

booking = RoomBooking(
    room=101,
    during=DateTimeRange(datetime(2013, 3, 23), None)
)

PostgreSQL Constraint Types

SQLAlchemy supports Postgresql EXCLUDE constraints via the ExcludeConstraint class:

class sqlalchemy.dialects.postgresql.ExcludeConstraint(*elements, **kw)

Bases: sqlalchemy.schema.ColumnCollectionConstraint

A table-level EXCLUDE constraint.

Defines an EXCLUDE constraint as described in the postgres documentation.

__init__(*elements, **kw)
Parameters:
  • *elements – A sequence of two-tuples of the form (column, operator) where column must be a column name or Column object and operator must be a string containing the operator to use.
  • name – Optional, the in-database name of this constraint.
  • deferrable – Optional bool. If set, emit DEFERRABLE or NOT DEFERRABLE when issuing DDL for this constraint.
  • initially – Optional string. If set, emit INITIALLY <value> when issuing DDL for this constraint.
  • using – Optional string. If set, emit USING <index_method> when issuing DDL for this constraint. Defaults to ‘gist’.
  • where – Optional string. If set, emit WHERE <predicate> when issuing DDL for this constraint.

For example:

from sqlalchemy import Column, Integer
from sqlalchemy.dialects.postgresql import ExcludeConstraint, TSRANGE
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()  # declarative base assumed by this example

class RoomBooking(Base):

    __tablename__ = 'room_booking'

    room = Column(Integer(), primary_key=True)
    during = Column(TSRANGE())

    __table_args__ = (
        ExcludeConstraint(('room', '='), ('during', '&&')),
    )

psycopg2

Support for the PostgreSQL database via the psycopg2 driver.

DBAPI

Documentation and download information (if applicable) for psycopg2 is available at: http://pypi.python.org/pypi/psycopg2/

Connecting

Connect String:

postgresql+psycopg2://user:password@host:port/dbname[?key=value&key=value...]

psycopg2 Connect Arguments

psycopg2-specific keyword arguments which are accepted by create_engine() are as follows (a combined sketch appears after this list):

  • server_side_cursors: Enable the usage of “server side cursors” for SQL statements which support this feature. What this essentially means from a psycopg2 point of view is that the cursor is created using a name, e.g. connection.cursor('some name'), which has the effect that result rows are not immediately pre-fetched and buffered after statement execution, but are instead left on the server and only retrieved as needed. SQLAlchemy’s ResultProxy uses special row-buffering behavior when this feature is enabled, such that groups of 100 rows at a time are fetched over the wire to reduce conversational overhead. Note that the stream_results=True execution option is a more targeted way of enabling this mode on a per-execution basis.

  • use_native_unicode: Enable the usage of Psycopg2 “native unicode” mode per connection. True by default.

  • isolation_level: This option, available for all PostgreSQL dialects, includes the AUTOCOMMIT isolation level when using the psycopg2 dialect.

  • client_encoding: sets the client encoding in a libpq-agnostic way, using psycopg2’s set_client_encoding() method.
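
The sketch below combines these arguments in a single create_engine() call (the connection URL shown is illustrative):

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    server_side_cursors=True,
    use_native_unicode=True,
    isolation_level="READ COMMITTED",
    client_encoding="utf8"
)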

Unix Domain Connections

psycopg2 supports connecting via Unix domain connections. When the host portion of the URL is omitted, SQLAlchemy passes None to psycopg2, which specifies Unix-domain communication rather than TCP/IP communication:

create_engine("postgresql+psycopg2://user:password@/dbname")

By default, the socket file used connects to a Unix-domain socket in /tmp, or whatever socket directory was specified when PostgreSQL was built. This value can be overridden by passing a pathname to psycopg2, using host as an additional keyword argument:

create_engine("postgresql+psycopg2://user:password@/dbname?host=/var/lib/postgresql")

See also:

PQconnectdbParams

Per-Statement/Connection Execution Options

The following DBAPI-specific options are respected when used with Connection.execution_options(), Executable.execution_options(), and Query.execution_options(), in addition to those not specific to DBAPIs; a brief sketch follows the list:

  • isolation_level - Set the transaction isolation level for the lifespan of a Connection (can only be set on a connection, not a statement or query). See Psycopg2 Transaction Isolation Level.

  • stream_results - Enable or disable usage of psycopg2 server side cursors - this feature makes use of “named” cursors in combination with special result handling methods so that result rows are not fully buffered. If None or not set, the server_side_cursors option of the Engine is used.

  • max_row_buffer - when using stream_results, an integer value that specifies the maximum number of rows to buffer at a time. This is interpreted by the BufferedRowResultProxy, and if omitted the buffer will grow to ultimately store 1000 rows at a time.

    New in version 1.0.6.
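
A brief sketch of these options applied to a single connection (the table name is hypothetical):

with engine.connect() as conn:
    result = conn.execution_options(
        stream_results=True, max_row_buffer=100
    ).execute("SELECT * FROM big_table")
    for row in result:
        print(row)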

Unicode with Psycopg2

By default, the psycopg2 driver uses the psycopg2.extensions.UNICODE extension, such that the DBAPI receives and returns all strings as Python Unicode objects directly - SQLAlchemy passes these values through without change. Psycopg2 here will encode/decode string values based on the current “client encoding” setting; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf8, as a more useful default:

# postgresql.conf file

# client_encoding = sql_ascii # actually, defaults to database
                             # encoding
client_encoding = utf8

A second way to affect the client encoding is to set it within Psycopg2 locally. SQLAlchemy will call psycopg2’s connection.set_client_encoding() method on all new connections based on the value passed to create_engine() using the client_encoding parameter:

# set_client_encoding() setting;
# works for *all* Postgresql versions
engine = create_engine("postgresql://user:pass@host/dbname",
                       client_encoding='utf8')

This overrides the encoding specified in the Postgresql client configuration. When using the parameter in this way, the psycopg2 driver emits SET client_encoding TO 'utf8' on the connection explicitly, and works in all Postgresql versions.

Note that the client_encoding setting as passed to create_engine() is not the same as the more recently added client_encoding parameter now supported by libpq directly. This is enabled when client_encoding is passed directly to psycopg2.connect(), and from SQLAlchemy is passed using the create_engine.connect_args parameter:

# libpq direct parameter setting;
# only works for Postgresql **9.1 and above**
engine = create_engine("postgresql://user:pass@host/dbname",
                       connect_args={'client_encoding': 'utf8'})

# using the query string is equivalent
engine = create_engine("postgresql://user:pass@host/dbname?client_encoding=utf8")

The above parameter was only added to libpq as of version 9.1 of Postgresql, so using the previous method is better for cross-version support.

Disabling Native Unicode

SQLAlchemy can also be instructed to skip the usage of the psycopg2 UNICODE extension and to instead utilize its own unicode encode/decode services, which are normally reserved only for those DBAPIs that don’t fully support unicode directly. Passing use_native_unicode=False to create_engine() will disable usage of psycopg2.extensions.UNICODE. SQLAlchemy will instead encode data itself into Python bytestrings on the way in and coerce from bytes on the way back, using the value of the create_engine() encoding parameter, which defaults to utf-8. SQLAlchemy’s own unicode encode/decode functionality is steadily becoming obsolete as most DBAPIs now support unicode fully.
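
A brief sketch of disabling the extension (the connection URL shown is illustrative):

engine = create_engine(
    "postgresql+psycopg2://scott:tiger@localhost/test",
    use_native_unicode=False,
    encoding="utf-8"
)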

Bound Parameter Styles

The default parameter style for the psycopg2 dialect is “pyformat”, where SQL is rendered using %(paramname)s style. This format has the limitation that it does not accommodate the unusual case of parameter names that actually contain percent or parenthesis symbols; as SQLAlchemy in many cases generates bound parameter names based on the name of a column, the presence of these characters in a column name can lead to problems.

There are two solutions to the issue of a schema.Column that contains one of these characters in its name. One is to specify the schema.Column.key for columns that have such names:

measurement = Table('measurement', metadata,
    Column('Size (meters)', Integer, key='size_meters')
)

Above, an INSERT statement such as measurement.insert() will use size_meters as the parameter name, and a SQL expression such as measurement.c.size_meters > 10 will derive the bound parameter name from the size_meters key as well.

Changed in version 1.0.0: - SQL expressions will use Column.key as the source of naming when anonymous bound parameters are created in SQL expressions; previously, this behavior only applied to Table.insert() and Table.update() parameter names.

The other solution is to use a positional format; psycopg2 allows use of the “format” paramstyle, which can be passed to create_engine.paramstyle:

engine = create_engine(
    'postgresql://scott:tiger@localhost:5432/test', paramstyle='format')

With the above engine, instead of a statement like:

INSERT INTO measurement ("Size (meters)") VALUES (%(Size (meters))s)
{'Size (meters)': 1}

we instead see:

INSERT INTO measurement ("Size (meters)") VALUES (%s)
(1, )

Where above, the dictionary style is converted into a tuple with positional style.

Transactions

The psycopg2 dialect fully supports SAVEPOINT and two-phase commit operations.

Psycopg2 Transaction Isolation Level

As discussed in Transaction Isolation Level, all Postgresql dialects support setting of transaction isolation level both via the isolation_level parameter passed to create_engine(), as well as the isolation_level argument used by Connection.execution_options(). When using the psycopg2 dialect, these options make use of psycopg2’s set_isolation_level() connection method, rather than emitting a Postgresql directive; this is because psycopg2’s API-level setting is always emitted at the start of each transaction in any case.

The psycopg2 dialect supports these constants for isolation level:

  • READ COMMITTED
  • READ UNCOMMITTED
  • REPEATABLE READ
  • SERIALIZABLE
  • AUTOCOMMIT

New in version 0.8.2: support for AUTOCOMMIT isolation level when using psycopg2.

NOTICE logging

The psycopg2 dialect will log Postgresql NOTICE messages via the sqlalchemy.dialects.postgresql logger:

import logging
logging.getLogger('sqlalchemy.dialects.postgresql').setLevel(logging.INFO)

HSTORE type

The psycopg2 DBAPI includes an extension to natively handle marshalling of the HSTORE type. The SQLAlchemy psycopg2 dialect will enable this extension by default when psycopg2 version 2.4 or greater is used, and it is detected that the target database has the HSTORE type set up for use. In other words, when the dialect makes the first connection, a sequence like the following is performed:

  1. Request the available HSTORE oids using psycopg2.extras.HstoreAdapter.get_oids(). If this function returns a list of HSTORE identifiers, we then determine that the HSTORE extension is present. This function is skipped if the version of psycopg2 installed is less than version 2.4.
  2. If the use_native_hstore flag is at its default of True, and we’ve detected that HSTORE oids are available, the psycopg2.extensions.register_hstore() extension is invoked for all connections.

The register_hstore() extension has the effect of all Python dictionaries being accepted as parameters regardless of the type of target column in SQL. The dictionaries are converted by this extension into a textual HSTORE expression. If this behavior is not desired, disable the use of the hstore extension by setting use_native_hstore to False as follows:

engine = create_engine("postgresql+psycopg2://scott:tiger@localhost/test",
            use_native_hstore=False)

The HSTORE type is still supported when the psycopg2.extensions.register_hstore() extension is not used. It merely means that the coercion between Python dictionaries and the HSTORE string format, on both the parameter side and the result side, will take place within SQLAlchemy’s own marshalling logic, and not that of psycopg2 which may be more performant.

pg8000

Support for the PostgreSQL database via the pg8000 driver.

DBAPI

Documentation and download information (if applicable) for pg8000 is available at: https://pythonhosted.org/pg8000/

Connecting

Connect String:

postgresql+pg8000://user:password@host:port/dbname[?key=value&key=value...]

Unicode

pg8000 will encode / decode string values between it and the server using the PostgreSQL client_encoding parameter; by default this is the value in the postgresql.conf file, which often defaults to SQL_ASCII. Typically, this can be changed to utf-8, as a more useful default:

#client_encoding = sql_ascii # actually, defaults to database
                             # encoding
client_encoding = utf8

The client_encoding can be overridden for a session by executing the SQL:

SET CLIENT_ENCODING TO 'utf8';

SQLAlchemy will execute this SQL on all new connections based on the value passed to create_engine() using the client_encoding parameter:

engine = create_engine(
    "postgresql+pg8000://user:pass@host/dbname", client_encoding='utf8')

pg8000 Transaction Isolation Level

The pg8000 dialect offers the same isolation level settings as that of the psycopg2 dialect:

  • READ COMMITTED
  • READ UNCOMMITTED
  • REPEATABLE READ
  • SERIALIZABLE
  • AUTOCOMMIT

New in version 0.9.5: support for AUTOCOMMIT isolation level when using pg8000.

psycopg2cffi

Support for the PostgreSQL database via the psycopg2cffi driver.

DBAPI

Documentation and download information (if applicable) for psycopg2cffi is available at: http://pypi.python.org/pypi/psycopg2cffi/

Connecting

Connect String:

postgresql+psycopg2cffi://user:password@host:port/dbname[?key=value&key=value...]

psycopg2cffi is an adaptation of psycopg2, using CFFI for the C layer. This makes it suitable for use in e.g. PyPy. Documentation is as per psycopg2.

New in version 1.0.0.

py-postgresql

Support for the PostgreSQL database via the py-postgresql driver.

DBAPI

Documentation and download information (if applicable) for py-postgresql is available at: http://python.projects.pgfoundry.org/

Connecting

Connect String:

postgresql+pypostgresql://user:password@host:port/dbname[?key=value&key=value...]

zxjdbc

Support for the PostgreSQL database via the zxJDBC for Jython driver.

DBAPI

Drivers for this database are available at: http://jdbc.postgresql.org/

Connecting

Connect String:

postgresql+zxjdbc://scott:tiger@localhost/db