Microsoft SQL Server¶
Support for the Microsoft SQL Server database.
DBAPI Support¶
The following dialect/DBAPI options are available. Please refer to individual DBAPI sections for connect information.
Auto Increment Behavior¶
SQL Server provides so-called “auto incrementing” behavior using the IDENTITY construct, which can be placed on an integer primary key. SQLAlchemy considers IDENTITY within its default “autoincrement” behavior, described at Column.autoincrement; this means that by default, the first integer primary key column in a Table will be considered to be the identity column and will generate DDL as such:
from sqlalchemy import Table, MetaData, Column, Integer

m = MetaData()
t = Table('t', m,
          Column('id', Integer, primary_key=True),
          Column('x', Integer))
m.create_all(engine)
The above example will generate DDL as:
CREATE TABLE t (
    id INTEGER NOT NULL IDENTITY(1,1),
    x INTEGER NULL,
    PRIMARY KEY (id)
)
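The DDL above can be inspected without a live connection by compiling against the MSSQL dialect directly. This is a sketch for illustration; the exact rendering of the IDENTITY clause can vary slightly between SQLAlchemy versions:

```python
from sqlalchemy import Table, MetaData, Column, Integer
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import mssql

m = MetaData()
t = Table('t', m,
          Column('id', Integer, primary_key=True),
          Column('x', Integer))

# Compile the CREATE TABLE construct against the MSSQL dialect;
# no database connection is required for this step.
ddl = str(CreateTable(t).compile(dialect=mssql.dialect()))
print(ddl)
```

Compiling DDL this way is a convenient check that the first integer primary key column was indeed picked up as the identity column.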
For the case where this default generation of IDENTITY is not desired, specify autoincrement=False on all integer primary key columns:
m = MetaData()
t = Table('t', m,
          Column('id', Integer, primary_key=True, autoincrement=False),
          Column('x', Integer))
m.create_all(engine)
Note

An INSERT statement which refers to an explicit value for such a column is prohibited by SQL Server; however, SQLAlchemy will detect this and modify the IDENTITY_INSERT flag accordingly at statement execution time. As this is not a high performing process, care should be taken to set the autoincrement flag appropriately for columns that will not actually require IDENTITY behavior.
Controlling “Start” and “Increment”¶
Specific control over the parameters of the IDENTITY value is supported using the schema.Sequence object. While this object normally represents an explicit “sequence” for supporting backends, on SQL Server it is re-purposed to specify behavior regarding the identity column, including support of the “start” and “increment” values:
from sqlalchemy import Table, Integer, String, Sequence, Column

Table('test', metadata,
      Column('id', Integer,
             Sequence('blah', start=100, increment=10),
             primary_key=True),
      Column('name', String(20))
      ).create(some_engine)
would yield:
CREATE TABLE test (
    id INTEGER NOT NULL IDENTITY(100,10) PRIMARY KEY,
    name VARCHAR(20) NULL
)
Note that the start and increment values for sequences are optional and will default to 1,1.
INSERT behavior¶
Handling of the IDENTITY column at INSERT time involves two key techniques. The most common is being able to fetch the “last inserted value” for a given IDENTITY column, a process which SQLAlchemy performs implicitly in many cases, most importantly within the ORM.

The process for fetching this value has several variants:

In the vast majority of cases, RETURNING is used in conjunction with INSERT statements on SQL Server in order to get newly generated primary key values:
INSERT INTO t (x) OUTPUT inserted.id VALUES (?)
When RETURNING is not available or has been disabled via implicit_returning=False, either the scope_identity() function or the @@identity variable is used; behavior varies by backend:

When using PyODBC, the phrase "; select scope_identity()" will be appended to the end of the INSERT statement; a second result set will be fetched in order to receive the value. Given a table as:

t = Table('t', m, Column('id', Integer, primary_key=True),
          Column('x', Integer),
          implicit_returning=False)

an INSERT will look like:

INSERT INTO t (x) VALUES (?); select scope_identity()

Other dialects such as pymssql will call upon SELECT scope_identity() AS lastrowid subsequent to an INSERT statement. If the flag use_scope_identity=False is passed to create_engine(), the statement SELECT @@identity AS lastrowid is used instead.
A table that contains an IDENTITY column will prohibit an INSERT statement that refers to the identity column explicitly. The SQLAlchemy dialect will detect when an INSERT construct, created using a core insert() construct (not a plain string SQL), refers to the identity column, and in this case will emit SET IDENTITY_INSERT ON prior to the insert statement proceeding, and SET IDENTITY_INSERT OFF subsequent to the execution. Given this example:
m = MetaData()
t = Table('t', m, Column('id', Integer, primary_key=True),
          Column('x', Integer))
m.create_all(engine)

engine.execute(t.insert(), {'id': 1, 'x': 1}, {'id': 2, 'x': 2})
The above column will be created with IDENTITY, however the INSERT statement we emit is specifying explicit values. In the echo output we can see how SQLAlchemy handles this:
CREATE TABLE t (
    id INTEGER NOT NULL IDENTITY(1,1),
    x INTEGER NULL,
    PRIMARY KEY (id)
)
COMMIT
SET IDENTITY_INSERT t ON
INSERT INTO t (id, x) VALUES (?, ?)
((1, 1), (2, 2))
SET IDENTITY_INSERT t OFF
COMMIT
This is an auxiliary use case suitable for testing and bulk insert scenarios.
Rendering of SQL statements that include schema qualifiers¶
When using Table metadata that includes a “schema” qualifier, such as:
account_table = Table(
    'account', metadata,
    Column('id', Integer, primary_key=True),
    Column('info', String(100)),
    schema="customer_schema"
)
The SQL Server dialect has a long-standing behavior that it will attempt to turn a schema-qualified table name into an alias, such as:
>>> eng = create_engine("mssql+pymssql://mydsn")
>>> print(account_table.select().compile(eng))
SELECT account_1.id, account_1.info
FROM customer_schema.account AS account_1
This behavior is legacy, does not function correctly for many forms of SQL statements, and will be disabled by default in the 1.1 series of SQLAlchemy. As of 1.0.5, the above statement will produce the following warning:
SAWarning: legacy_schema_aliasing flag is defaulted to True;
some schema-qualified queries may not function correctly.
Consider setting this flag to False for modern SQL Server versions;
this flag will default to False in version 1.1
This warning encourages the Engine to be created as follows:
>>> eng = create_engine("mssql+pymssql://mydsn", legacy_schema_aliasing=False)
Where the above SELECT statement will produce:
>>> print(account_table.select().compile(eng))
SELECT customer_schema.account.id, customer_schema.account.info
FROM customer_schema.account
The warning will not emit if the legacy_schema_aliasing flag is set to either True or False.

New in version 1.0.5: Added the legacy_schema_aliasing flag to disable the SQL Server dialect’s legacy behavior with schema-qualified table names. This flag will default to False in version 1.1.
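The schema-qualified rendering can be examined without a live connection by compiling against the MSSQL dialect directly. This is a sketch; on SQLAlchemy versions where the legacy aliasing behavior is still the default, the FROM clause will carry an alias such as "account_1" instead of the plain qualified name:

```python
from sqlalchemy import Table, MetaData, Column, Integer, String
from sqlalchemy.dialects import mssql

metadata = MetaData()
account_table = Table(
    'account', metadata,
    Column('id', Integer, primary_key=True),
    Column('info', String(100)),
    schema="customer_schema"
)

# Compile a SELECT against the MSSQL dialect; no connection needed.
sql = str(account_table.select().compile(dialect=mssql.dialect()))
print(sql)
```

Either way, the statement references the table through its "customer_schema" qualifier, which is the behavior being controlled by the flag.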
Collation Support¶
Character collations are supported by the base string types, specified by the string argument “collation”:
from sqlalchemy import VARCHAR
Column('login', VARCHAR(32, collation='Latin1_General_CI_AS'))
When such a column is associated with a Table, the CREATE TABLE statement for this column will yield:
login VARCHAR(32) COLLATE Latin1_General_CI_AS NULL
New in version 0.8: Character collations are now part of the base string types.
LIMIT/OFFSET Support¶
MSSQL has no support for the LIMIT or OFFSET keywords. LIMIT is
supported directly through the TOP
Transact SQL keyword:
select.limit
will yield:
SELECT TOP n
If using SQL Server 2005 or above, LIMIT with OFFSET
support is available through the ROW_NUMBER OVER
construct.
For versions below 2005, LIMIT with OFFSET usage will fail.
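The TOP rendering can be seen by compiling a limited SELECT against the MSSQL dialect offline. A sketch; note that newer SQLAlchemy versions render the TOP value as a post-compile bound parameter placeholder rather than a literal number:

```python
from sqlalchemy import Table, MetaData, Column, Integer
from sqlalchemy.dialects import mssql

m = MetaData()
t = Table('t', m, Column('id', Integer, primary_key=True))

# .limit() renders as TOP on SQL Server, not LIMIT.
sql = str(t.select().limit(5).compile(dialect=mssql.dialect()))
print(sql)
```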
Nullability¶
MSSQL has support for three levels of column nullability. The default nullability allows nulls and is explicit in the CREATE TABLE construct:
name VARCHAR(20) NULL
If nullable=None is specified then no specification is made. In other words the database’s configured default is used. This will render:

name VARCHAR(20)

If nullable is True or False then the column will be NULL or NOT NULL respectively.
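The three levels can be compared by compiling DDL against the MSSQL dialect offline. This is a sketch; the table and column names are illustrative only:

```python
from sqlalchemy import Table, MetaData, Column, Integer, String
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import mssql

m = MetaData()
t = Table('nullability_demo', m,
          Column('id', Integer, primary_key=True),
          Column('a', String(20), nullable=True),    # renders as NULL
          Column('b', String(20), nullable=False))   # renders as NOT NULL

ddl = str(CreateTable(t).compile(dialect=mssql.dialect()))
print(ddl)
```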
Date / Time Handling¶
DATE and TIME are supported. Bind parameters are converted to datetime.datetime() objects as required by most MSSQL drivers, and results are processed from strings if needed. The DATE and TIME types are not available for MSSQL 2005 and previous - if a server version below 2008 is detected, DDL for these types will be issued as DATETIME.
Large Text/Binary Type Deprecation¶
Per SQL Server 2012/2014 Documentation, the NTEXT, TEXT and IMAGE datatypes are to be removed from SQL Server in a future release. SQLAlchemy normally relates these types to the UnicodeText, Text and LargeBinary datatypes.

In order to accommodate this change, a new flag deprecate_large_types is added to the dialect, which will be automatically set based on detection of the server version in use, if not otherwise set by the user. The behavior of this flag is as follows:
- When this flag is True, the UnicodeText, Text and LargeBinary datatypes, when used to render DDL, will render the types NVARCHAR(max), VARCHAR(max), and VARBINARY(max), respectively. This is a new behavior as of the addition of this flag.
- When this flag is False, the UnicodeText, Text and LargeBinary datatypes, when used to render DDL, will render the types NTEXT, TEXT, and IMAGE, respectively. This is the long-standing behavior of these types.
- The flag begins with the value None, before a database connection is established. If the dialect is used to render DDL without the flag being set, it is interpreted the same as False.
- On first connection, the dialect detects if SQL Server version 2012 or greater is in use; if the flag is still at None, it sets it to True or False based on whether 2012 or greater is detected.
- The flag can be set to either True or False when the dialect is created, typically via create_engine():

  eng = create_engine("mssql+pymssql://user:pass@host/db", deprecate_large_types=True)
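The effect of the flag on DDL can be observed offline by constructing the dialect with the flag set and compiling a CREATE TABLE. A sketch; the table name is illustrative only:

```python
from sqlalchemy import Table, MetaData, Column, Integer, Text
from sqlalchemy.schema import CreateTable
from sqlalchemy.dialects import mssql

m = MetaData()
t = Table('docs', m,
          Column('id', Integer, primary_key=True),
          Column('body', Text))

# With deprecate_large_types=True, Text renders as VARCHAR(max)
# instead of the deprecated TEXT datatype.
ddl = str(CreateTable(t).compile(
    dialect=mssql.dialect(deprecate_large_types=True)))
print(ddl)
```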
Complete control over whether the “old” or “new” types are rendered is available in all SQLAlchemy versions by using the UPPERCASE type objects instead: NVARCHAR, VARCHAR, types.VARBINARY, TEXT, mssql.NTEXT, mssql.IMAGE will always remain fixed and always output exactly that type.
New in version 1.0.0.
Clustered Index Support¶
The MSSQL dialect supports clustered indexes (and primary keys) via the mssql_clustered option. This option is available to Index, UniqueConstraint, and PrimaryKeyConstraint.
To generate a clustered index:
Index("my_index", table.c.x, mssql_clustered=True)
which renders the index as CREATE CLUSTERED INDEX my_index ON table (x).
New in version 0.8.
To generate a clustered primary key use:
Table('my_table', metadata,
      Column('x', ...),
      Column('y', ...),
      PrimaryKeyConstraint("x", "y", mssql_clustered=True))
which will render the table, for example, as:
CREATE TABLE my_table (x INTEGER NOT NULL, y INTEGER NOT NULL,
PRIMARY KEY CLUSTERED (x, y))
Similarly, we can generate a clustered unique constraint using:
Table('my_table', metadata,
      Column('x', ...),
      Column('y', ...),
      PrimaryKeyConstraint("x"),
      UniqueConstraint("y", mssql_clustered=True),
      )
New in version 0.9.2.
MSSQL-Specific Index Options¶
In addition to clustering, the MSSQL dialect supports other special options for Index.
INCLUDE¶
The mssql_include option renders INCLUDE(colname) for the given string names:

Index("my_index", table.c.x, mssql_include=['y'])

would render the index as CREATE INDEX my_index ON table (x) INCLUDE (y)
New in version 0.8.
Index ordering¶
Index ordering is available via functional expressions, such as:
Index("my_index", table.c.x.desc())
would render the index as CREATE INDEX my_index ON table (x DESC)
New in version 0.8.
Compatibility Levels¶
MSSQL supports the notion of setting compatibility levels at the database level. This allows, for instance, to run a database that is compatible with SQL2000 while running on a SQL2005 database server. server_version_info will always return the database server version information (in this case SQL2005) and not the compatibility level information. Because of this, if running under a backwards compatibility mode SQLAlchemy may attempt to use T-SQL statements that are unable to be parsed by the database server.
Triggers¶
SQLAlchemy by default uses OUTPUT INSERTED to get at newly generated primary key values via IDENTITY columns or other server side defaults. MS-SQL does not allow the usage of OUTPUT INSERTED on tables that have triggers. To disable the usage of OUTPUT INSERTED on a per-table basis, specify implicit_returning=False for each Table which has triggers:
Table('mytable', metadata,
      Column('id', Integer, primary_key=True),
      # ...,
      implicit_returning=False
      )
Declarative form:
class MyClass(Base):
    # ...
    __table_args__ = {'implicit_returning': False}
This option can also be specified engine-wide using the implicit_returning=False argument on create_engine().
Enabling Snapshot Isolation¶
Not necessarily specific to SQLAlchemy, SQL Server has a default transaction isolation mode that locks entire tables, and causes even mildly concurrent applications to have long held locks and frequent deadlocks. Enabling snapshot isolation for the database as a whole is recommended for modern levels of concurrency support. This is accomplished via the following ALTER DATABASE commands executed at the SQL prompt:
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
ALTER DATABASE MyDatabase SET READ_COMMITTED_SNAPSHOT ON
Background on SQL Server snapshot isolation is available at http://msdn.microsoft.com/en-us/library/ms175095.aspx.
Known Issues¶
- No support for more than one IDENTITY column per table
- Reflection of indexes does not work with versions older than SQL Server 2005
SQL Server Data Types¶
As with all SQLAlchemy dialects, all UPPERCASE types that are known to be valid with SQL Server are importable from the top level dialect, whether they originate from sqlalchemy.types or from the local dialect:
from sqlalchemy.dialects.mssql import \
    BIGINT, BINARY, BIT, CHAR, DATE, DATETIME, DATETIME2, \
    DATETIMEOFFSET, DECIMAL, FLOAT, IMAGE, INTEGER, MONEY, \
    NCHAR, NTEXT, NUMERIC, NVARCHAR, REAL, SMALLDATETIME, \
    SMALLINT, SMALLMONEY, SQL_VARIANT, TEXT, TIME, \
    TIMESTAMP, TINYINT, UNIQUEIDENTIFIER, VARBINARY, VARCHAR
Types which are specific to SQL Server, or have SQL Server-specific construction arguments, are as follows:
class sqlalchemy.dialects.mssql.BIT
    Bases: sqlalchemy.types.TypeEngine

    __init__
        inherited from the __init__ attribute of object

        x.__init__(...) initializes x; see help(type(x)) for signature
class sqlalchemy.dialects.mssql.CHAR(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)
    Bases: sqlalchemy.types.String

    The SQL CHAR type.

    __init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)
        inherited from the __init__() method of String

        Create a string-holding type.

        Parameters:
        - length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
        - collation – Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and Postgresql. E.g.:

              >>> from sqlalchemy import cast, select, String
              >>> print select([cast('some string', String(collation='utf8'))])
              SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

          New in version 0.8: Added support for COLLATE to all string types.
        - convert_unicode – When set to True, the String type will assume that input is to be passed as Python unicode objects, and results returned as Python unicode objects. If the DBAPI in use does not support Python unicode (which is fewer and fewer these days), SQLAlchemy will encode/decode the value, using the value of the encoding parameter passed to create_engine() as the encoding.

          When using a DBAPI that natively supports Python unicode objects, this flag generally does not need to be set. For columns that are explicitly intended to store non-ASCII data, the Unicode or UnicodeText types should be used regardless, which feature the same behavior of convert_unicode but also indicate an underlying column type that directly supports unicode, such as NVARCHAR.

          For the extremely rare case that Python unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python unicode, the value force can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.
        - unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that convert_unicode is set to force - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.
class sqlalchemy.dialects.mssql.DATETIME2(precision=None, **kw)
    Bases: sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime

class sqlalchemy.dialects.mssql.DATETIMEOFFSET(precision=None, **kwargs)
    Bases: sqlalchemy.types.TypeEngine
class sqlalchemy.dialects.mssql.IMAGE(length=None)
    Bases: sqlalchemy.types.LargeBinary

    __init__(length=None)
        inherited from the __init__() method of LargeBinary

        Construct a LargeBinary type.

        Parameters: length – optional, a length for the column for use in DDL statements, for those binary types that accept a length, such as the MySQL BLOB type.
class sqlalchemy.dialects.mssql.MONEY
    Bases: sqlalchemy.types.TypeEngine

    __init__
        inherited from the __init__ attribute of object

        x.__init__(...) initializes x; see help(type(x)) for signature
class sqlalchemy.dialects.mssql.NCHAR(length=None, **kwargs)
    Bases: sqlalchemy.types.Unicode

    The SQL NCHAR type.

    __init__(length=None, **kwargs)
        inherited from the __init__() method of Unicode

        Create a Unicode object.

        Parameters are the same as that of String, with the exception that convert_unicode defaults to True.
class sqlalchemy.dialects.mssql.NTEXT(length=None, **kwargs)
    Bases: sqlalchemy.types.UnicodeText

    MSSQL NTEXT type, for variable-length unicode text up to 2^30 characters.

    __init__(length=None, **kwargs)
        inherited from the __init__() method of UnicodeText

        Create a Unicode-converting Text type.

        Parameters are the same as that of Text, with the exception that convert_unicode defaults to True.
class sqlalchemy.dialects.mssql.NVARCHAR(length=None, **kwargs)
    Bases: sqlalchemy.types.Unicode

    The SQL NVARCHAR type.

    __init__(length=None, **kwargs)
        inherited from the __init__() method of Unicode

        Create a Unicode object.

        Parameters are the same as that of String, with the exception that convert_unicode defaults to True.
class sqlalchemy.dialects.mssql.REAL(**kw)
    Bases: sqlalchemy.types.REAL

class sqlalchemy.dialects.mssql.SMALLDATETIME(timezone=False)
    Bases: sqlalchemy.dialects.mssql.base._DateTimeBase, sqlalchemy.types.DateTime

    __init__(timezone=False)
        inherited from the __init__() method of DateTime

        Construct a new DateTime.

        Parameters: timezone – boolean. If True, and supported by the backend, will produce ‘TIMESTAMP WITH TIMEZONE’. For backends that don’t support timezone aware timestamps, has no effect.
class sqlalchemy.dialects.mssql.SMALLMONEY
    Bases: sqlalchemy.types.TypeEngine

    __init__
        inherited from the __init__ attribute of object

        x.__init__(...) initializes x; see help(type(x)) for signature

class sqlalchemy.dialects.mssql.SQL_VARIANT
    Bases: sqlalchemy.types.TypeEngine

    __init__
        inherited from the __init__ attribute of object

        x.__init__(...) initializes x; see help(type(x)) for signature
class sqlalchemy.dialects.mssql.TEXT(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)
    Bases: sqlalchemy.types.Text

    The SQL TEXT type.

    __init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)
        inherited from the __init__() method of String

        Create a string-holding type.

        Parameters:
        - length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
        - collation – Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and Postgresql. E.g.:

              >>> from sqlalchemy import cast, select, String
              >>> print select([cast('some string', String(collation='utf8'))])
              SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

          New in version 0.8: Added support for COLLATE to all string types.
        - convert_unicode – When set to True, the String type will assume that input is to be passed as Python unicode objects, and results returned as Python unicode objects. If the DBAPI in use does not support Python unicode (which is fewer and fewer these days), SQLAlchemy will encode/decode the value, using the value of the encoding parameter passed to create_engine() as the encoding.

          When using a DBAPI that natively supports Python unicode objects, this flag generally does not need to be set. For columns that are explicitly intended to store non-ASCII data, the Unicode or UnicodeText types should be used regardless, which feature the same behavior of convert_unicode but also indicate an underlying column type that directly supports unicode, such as NVARCHAR.

          For the extremely rare case that Python unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python unicode, the value force can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.
        - unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that convert_unicode is set to force - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.
class sqlalchemy.dialects.mssql.TIME(precision=None, **kwargs)
    Bases: sqlalchemy.types.TIME

class sqlalchemy.dialects.mssql.TINYINT
    Bases: sqlalchemy.types.Integer

    __init__
        inherited from the __init__ attribute of object

        x.__init__(...) initializes x; see help(type(x)) for signature

class sqlalchemy.dialects.mssql.UNIQUEIDENTIFIER
    Bases: sqlalchemy.types.TypeEngine

    __init__
        inherited from the __init__ attribute of object

        x.__init__(...) initializes x; see help(type(x)) for signature
class sqlalchemy.dialects.mssql.VARCHAR(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)
    Bases: sqlalchemy.types.String

    The SQL VARCHAR type.

    __init__(length=None, collation=None, convert_unicode=False, unicode_error=None, _warn_on_bytestring=False)
        inherited from the __init__() method of String

        Create a string-holding type.

        Parameters:
        - length – optional, a length for the column for use in DDL and CAST expressions. May be safely omitted if no CREATE TABLE will be issued. Certain databases may require a length for use in DDL, and will raise an exception when the CREATE TABLE DDL is issued if a VARCHAR with no length is included. Whether the value is interpreted as bytes or characters is database specific.
        - collation – Optional, a column-level collation for use in DDL and CAST expressions. Renders using the COLLATE keyword supported by SQLite, MySQL, and Postgresql. E.g.:

              >>> from sqlalchemy import cast, select, String
              >>> print select([cast('some string', String(collation='utf8'))])
              SELECT CAST(:param_1 AS VARCHAR COLLATE utf8) AS anon_1

          New in version 0.8: Added support for COLLATE to all string types.
        - convert_unicode – When set to True, the String type will assume that input is to be passed as Python unicode objects, and results returned as Python unicode objects. If the DBAPI in use does not support Python unicode (which is fewer and fewer these days), SQLAlchemy will encode/decode the value, using the value of the encoding parameter passed to create_engine() as the encoding.

          When using a DBAPI that natively supports Python unicode objects, this flag generally does not need to be set. For columns that are explicitly intended to store non-ASCII data, the Unicode or UnicodeText types should be used regardless, which feature the same behavior of convert_unicode but also indicate an underlying column type that directly supports unicode, such as NVARCHAR.

          For the extremely rare case that Python unicode is to be encoded/decoded by SQLAlchemy on a backend that does natively support Python unicode, the value force can be passed here which will cause SQLAlchemy’s encode/decode services to be used unconditionally.
        - unicode_error – Optional, a method to use to handle Unicode conversion errors. Behaves like the errors keyword argument to the standard library’s string.decode() functions. This flag requires that convert_unicode is set to force - otherwise, SQLAlchemy is not guaranteed to handle the task of unicode conversion. Note that this flag adds significant performance overhead to row-fetching operations for backends that already return unicode objects natively (which most DBAPIs do). This flag should only be used as a last resort for reading strings from a column with varied or corrupted encodings.
PyODBC¶
Support for the Microsoft SQL Server database via the PyODBC driver.
DBAPI¶
Documentation and download information (if applicable) for PyODBC is available at: http://pypi.python.org/pypi/pyodbc/
Connecting to PyODBC¶
The URL here is to be translated to PyODBC connection strings, as detailed in ConnectionStrings.
DSN Connections¶
A DSN-based connection is preferred overall when using ODBC. A basic DSN-based connection looks like:

engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")

The above will pass the following connection string to PyODBC:

dsn=some_dsn;UID=scott;PWD=tiger

If the username and password are omitted, the DSN form will also add the Trusted_Connection=yes directive to the ODBC string.
Hostname Connections¶
Hostname-based connections are not preferred, however are supported. The ODBC driver name must be explicitly specified:
engine = create_engine("mssql+pyodbc://scott:tiger@myhost:port/databasename?driver=SQL+Server+Native+Client+10.0")
Changed in version 1.0.0: Hostname-based PyODBC connections now require the SQL Server driver name specified explicitly. SQLAlchemy cannot choose an optimal default here as it varies based on platform and installed drivers.
Other keywords interpreted by the PyODBC dialect to be passed to pyodbc.connect() in both the DSN and hostname cases include: odbc_autotranslate, ansi, unicode_results, autocommit.
Pass through exact Pyodbc string¶
A PyODBC connection string can also be sent exactly as specified in ConnectionStrings into the driver using the parameter odbc_connect. The delimiters must be URL escaped, however, as illustrated below using urllib.quote_plus:
import urllib
params = urllib.quote_plus("DRIVER={SQL Server Native Client 10.0};SERVER=dagger;DATABASE=test;UID=user;PWD=password")
engine = create_engine("mssql+pyodbc:///?odbc_connect=%s" % params)
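The snippet above uses the Python 2 location of quote_plus; on Python 3 the same escaping is available from urllib.parse. A sketch, with the driver name and connection values illustrative only:

```python
from urllib.parse import quote_plus  # Python 3 location of quote_plus

# URL-escape the raw ODBC connection string, exactly as in the
# Python 2 example above.
params = quote_plus(
    "DRIVER={SQL Server Native Client 10.0};"
    "SERVER=dagger;DATABASE=test;UID=user;PWD=password"
)

# The escaped string is then embedded into the SQLAlchemy URL as before:
url = "mssql+pyodbc:///?odbc_connect=%s" % params
print(url)
```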
Unicode Binds¶
The current state of PyODBC on a unix backend with FreeTDS and/or EasySoft is poor regarding unicode; different OS platforms, and versions of UnixODBC versus IODBC versus FreeTDS/EasySoft versus PyODBC itself, dramatically alter how strings are received. The PyODBC dialect attempts to use all the information it knows to determine whether or not a Python unicode literal can be passed directly to the PyODBC driver. While SQLAlchemy can encode these to bytestrings first, some users have reported that PyODBC mis-handles bytestrings for certain encodings and requires a Python unicode object. Meanwhile, the author has observed widespread cases where a Python unicode is completely misinterpreted by PyODBC, particularly when dealing with the information schema tables used in table reflection, and the value must first be encoded to a bytestring.
It is for this reason that whether or not unicode literals for bound parameters be sent to PyODBC can be controlled using the supports_unicode_binds parameter to create_engine(). When left at its default of None, the PyODBC dialect will use its best guess as to whether or not the driver deals with unicode literals well. When False, unicode literals will be encoded first, and when True unicode literals will be passed straight through. This is an interim flag that hopefully should not be needed when the unicode situation stabilizes for unix + PyODBC.

New in version 0.7.7: supports_unicode_binds parameter to create_engine().
mxODBC¶
Support for the Microsoft SQL Server database via the mxODBC driver.
DBAPI¶
Documentation and download information (if applicable) for mxODBC is available at: http://www.egenix.com/
Execution Modes¶
mxODBC features two styles of statement execution, using the cursor.execute() and cursor.executedirect() methods (the second being an extension to the DBAPI specification). The former makes use of a particular API call specific to the SQL Server Native Client ODBC driver known as SQLDescribeParam, while the latter does not.
mxODBC apparently only makes repeated use of a single prepared statement when SQLDescribeParam is used. The advantage to prepared statement reuse is one of performance. The disadvantage is that SQLDescribeParam has a limited set of scenarios in which bind parameters are understood, including that they cannot be placed within the argument lists of function calls, anywhere outside the FROM, or even within subqueries within the FROM clause - making the usage of bind parameters within SELECT statements impossible for all but the most simplistic statements.
For this reason, the mxODBC dialect uses the “native” mode by default only for INSERT, UPDATE, and DELETE statements, and uses the escaped string mode for all other statements.
This behavior can be controlled via execution_options() using the native_odbc_execute flag with a value of True or False, where a value of True will unconditionally use native bind parameters and a value of False will unconditionally use string-escaped parameters.
pymssql¶
Support for the Microsoft SQL Server database via the pymssql driver.
DBAPI¶
Documentation and download information (if applicable) for pymssql is available at: http://pymssql.org/
pymssql is a Python module that provides a Python DBAPI interface around FreeTDS. Compatible builds are available for Linux, Mac OS X and Windows platforms.
zxjdbc¶
Support for the Microsoft SQL Server database via the zxJDBC for Jython driver.
Note
Jython is not supported by current versions of SQLAlchemy. The zxjdbc dialect should be considered as experimental.
DBAPI¶
Drivers for this database are available at: http://jtds.sourceforge.net/
AdoDBAPI¶
Support for the Microsoft SQL Server database via the adodbapi driver.
DBAPI¶
Documentation and download information (if applicable) for adodbapi is available at: http://adodbapi.sourceforge.net/
Note
The adodbapi dialect is not implemented in SQLAlchemy versions 0.6 and above at this time.