diff --git a/README.md b/README.md index 63f07543..5f518d1f 100644 --- a/README.md +++ b/README.md @@ -2,34 +2,39 @@ [![PyPI](https://img.shields.io/pypi/v/psqlpy?style=for-the-badge)](https://pypi.org/project/psqlpy/) [![PyPI - Downloads](https://img.shields.io/pypi/dm/psqlpy?style=for-the-badge)](https://pypistats.org/packages/psqlpy) - # PSQLPy - Async PostgreSQL driver for Python written in Rust. Driver for PostgreSQL written fully in Rust and exposed to Python. The project is under active development and _**we cannot confirm that it's ready for production**_. Still, we will be grateful for any bugs found and issues opened. Stay tuned. -*Normal documentation is in development.* +_Normal documentation is in development._ ## Installation You can install the package with `pip` or `poetry`. poetry: + ```bash > poetry add psqlpy ``` + pip: + ```bash > pip install psqlpy ``` Or you can build it yourself. To do so, install stable Rust and [maturin](/~https://github.com/PyO3/maturin). + ``` > maturin develop --release ``` ## Usage + Usage is as easy as possible. Create a new instance of PSQLPool, start it up and start querying. + ```python from typing import Any @@ -57,11 +62,14 @@ async def main() -> None: # rust does it instead. ``` + Please take into account that each new `execute` gets a new connection from the connection pool. ### DSN support + You can specify `host`, `port`, `username`, etc. separately, or specify everything in one `DSN`. **Please note that if you specify a DSN, any other connection argument is not taken into account.** + ```py from typing import Any @@ -86,31 +94,33 @@ async def main() -> None: ``` ### Control connection recycling + There are 3 available options to control how a connection is recycled - `Fast`, `Verified` and `Clean`. As a connection can be closed in different situations on various sides, you can select the preferred behavior for how a connection is recycled. - `Fast`: Only run `is_closed()` when recycling existing connections.
- `Verified`: Run `is_closed()` and execute a test query. This is slower, but guarantees that the database connection is ready to - be used. Normally, `is_closed()` should be enough to filter - out bad connections, but under some circumstances (i.e. hard-closed - network connections) it's possible that `is_closed()` - returns `false` while the connection is dead. You will receive an error - on your first query then. + be used. Normally, `is_closed()` should be enough to filter + out bad connections, but under some circumstances (i.e. hard-closed + network connections) it's possible that `is_closed()` + returns `false` while the connection is dead. You will receive an error + on your first query then. - `Clean`: Like the `Verified` method, but instead use the following sequence of statements which guarantees a pristine connection: - ```sql - CLOSE ALL; - SET SESSION AUTHORIZATION DEFAULT; - RESET ALL; - UNLISTEN *; - SELECT pg_advisory_unlock_all(); - DISCARD TEMP; - DISCARD SEQUENCES; - ``` - This is similar to calling `DISCARD ALL`. but doesn't call - `DEALLOCATE ALL` and `DISCARD PLAN`, so that the statement cache is not - rendered ineffective. + ```sql + CLOSE ALL; + SET SESSION AUTHORIZATION DEFAULT; + RESET ALL; + UNLISTEN *; + SELECT pg_advisory_unlock_all(); + DISCARD TEMP; + DISCARD SEQUENCES; + ``` + This is similar to calling `DISCARD ALL`, but doesn't call + `DEALLOCATE ALL` and `DISCARD PLAN`, so that the statement cache is not + rendered ineffective. ## Query parameters + You can pass parameters into queries. Parameters can be passed to any `execute` method as the second argument; it must be a list. Any placeholder must be marked with `$<num>`. @@ -123,7 +133,9 @@ Any placeholder must be marked with `$<num>`. ``` ## Connection + You can work with a connection instead of the `PSQLPool`. + ```python from typing import Any @@ -154,17 +166,22 @@ async def main() -> None: ``` ## Transactions + Of course it's possible to use transactions with this driver.
It's as easy as possible, and in places it mirrors common functionality from PsycoPG and AsyncPG. ### Transaction parameters + When creating a transaction, it is possible to specify some arguments to configure it. - `isolation_level`: level of the isolation. By default - `None`. - `read_variant`: read option. By default - `None`. +- `deferable`: deferrable option. By default - `None`. ### You can use transactions as async context managers + By default, the async context manager only begins and commits the transaction automatically. + ```python from typing import Any @@ -188,6 +205,7 @@ async def main() -> None: ``` ### Or you can control the transaction fully on your own. + ```python from typing import Any @@ -219,9 +237,11 @@ async def main() -> None: ``` ### Transactions can be rolled back + Note that rollback can be executed only once per transaction. After its execution, the transaction state changes to `done`. If you want to use `ROLLBACK TO SAVEPOINT`, see below. + ```python from typing import Any @@ -247,6 +267,7 @@ async def main() -> None: ``` ### Transaction ROLLBACK TO SAVEPOINT + You can roll back your transaction to a specified savepoint, but you must create the savepoint first. ```python @@ -280,6 +301,7 @@ async def main() -> None: ``` ### Transaction RELEASE SAVEPOINT + It's possible to release a savepoint. ```python @@ -308,12 +330,15 @@ async def main() -> None: ``` ## Cursors + The library supports PostgreSQL cursors. Cursors can be created only within a transaction. In addition, cursors support async iteration. ### Cursor parameters + When creating a cursor you can specify some configuration parameters. + - `querystring`: query for the cursor. Required. - `parameters`: parameters for the query. Not required. - `fetch_number`: number of records per fetch if the cursor is used as an async iterator. If you are using the `.fetch()` method you can pass a different fetch number. Not required. Default - 10.
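The `fetch_number` batching described above can be sketched with a minimal stand-in class. This is pure Python with no psqlpy or database involved; `FakeCursor` is an illustrative name, not part of the library — it only mimics the documented semantics that each async-iteration step yields at most `fetch_number` records:

```python
import asyncio


class FakeCursor:
    """Illustrative stand-in mimicking the cursor's async iteration:
    each step yields a batch of at most `fetch_number` records."""

    def __init__(self, records, fetch_number=10):
        self._records = records
        self._fetch_number = fetch_number
        self._position = 0

    def __aiter__(self):
        return self

    async def __anext__(self):
        # Stop once every record has been handed out.
        if self._position >= len(self._records):
            raise StopAsyncIteration
        batch = self._records[self._position : self._position + self._fetch_number]
        self._position += self._fetch_number
        return batch


async def main():
    cursor = FakeCursor(list(range(25)), fetch_number=10)
    return [batch async for batch in cursor]


batches = asyncio.run(main())
print([len(batch) for batch in batches])  # → [10, 10, 5]
```

With 25 records and `fetch_number=10`, iteration produces two full batches and one partial batch, which is the behavior to expect from the real cursor's async iterator.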
@@ -357,7 +382,9 @@ async def main() -> None: ``` ### Cursor operations + Available cursor operations: + - FETCH count - `cursor.fetch(fetch_number=)` - FETCH NEXT - `cursor.fetch_next()` - FETCH PRIOR - `cursor.fetch_prior()` @@ -370,15 +397,16 @@ Available cursor operations: - FETCH BACKWARD ALL - `cursor.fetch_backward_all()` ## Extra Types + Sometimes it's impossible to identify which type the user is trying to pass as an argument. But Rust is a strongly typed programming language, so we have to help. -| Extra Type in Python | Type in PostgreSQL | Type in Rust | | ------------- | ------------- | ------------- -| SmallInt | SmallInt | i16 | -| Integer | Integer | i32 | -| BigInt | BigInt | i64 | -| PyUUID | UUID | Uuid | -| PyJSON | JSON, JSONB | Value | +| Extra Type in Python | Type in PostgreSQL | Type in Rust | | -------------------- | ------------------ | ------------ | | SmallInt | SmallInt | i16 | | Integer | Integer | i32 | | BigInt | BigInt | i64 | | PyUUID | UUID | Uuid | | PyJSON | JSON, JSONB | Value | ```python from typing import Any @@ -425,15 +453,17 @@ async def main() -> None: ``` ## Benchmarks + We have made some benchmarks to compare `PSQLPy`, `AsyncPG` and `Psycopg3`. The main idea is not to compare bare drivers, because there are few situations in which you need only a driver without any other dependencies. **So the infrastructure consists of:** -1) AioHTTP -2) PostgreSQL driver (`PSQLPy`, `AsyncPG`, `Psycopg3`) -3) PostgreSQL v15. Server is located in other part of the world, because we want to simulate network problems. -4) Grafana (dashboards) -5) InfluxDB -6) JMeter (for load testing) - -The results are very promising! `PSQLPy` is faster than `AsyncPG` at best by 2 times, at worst by 45%. `PsycoPG` is 3.5 times slower than `PSQLPy` in the worst case, 60% in the best case. \ No newline at end of file + +1. AioHTTP 2. PostgreSQL driver (`PSQLPy`, `AsyncPG`, `Psycopg3`) 3. PostgreSQL v15.
The server is located in another part of the world, because we want to simulate network latency. +4. Grafana (dashboards) +5. InfluxDB +6. JMeter (for load testing) + +The results are very promising! `PSQLPy` is faster than `AsyncPG` by 2x at best and by 45% at worst. `PsycoPG` is 3.5x slower than `PSQLPy` in the worst case and 60% slower in the best case. diff --git a/pyproject.toml b/pyproject.toml index 720aeea8..f1a583a0 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -127,3 +127,9 @@ line-length = 79 [tool.ruff.pydocstyle] convention = "pep257" ignore-decorators = ["typing.overload"] + + +[tool.ruff.isort] +lines-after-imports = 2 +no-lines-before = ["standard-library", "local-folder"] +known-first-party = ["psqlpy"] diff --git a/python/psqlpy/__init__.py b/python/psqlpy/__init__.py index 42477791..c3523159 100644 --- a/python/psqlpy/__init__.py +++ b/python/psqlpy/__init__.py @@ -9,6 +9,7 @@ Transaction, ) + __all__ = [ "PSQLPool", "QueryResult", diff --git a/python/psqlpy/_internal/__init__.pyi b/python/psqlpy/_internal/__init__.pyi index 4e5254a0..a873459a 100644 --- a/python/psqlpy/_internal/__init__.pyi +++ b/python/psqlpy/_internal/__init__.pyi @@ -1,13 +1,13 @@ import types from enum import Enum -from typing import Any, Dict, List, Optional +from typing import Any, Optional from typing_extensions import Self class QueryResult: """Result.""" - def result(self: Self) -> List[Dict[Any, Any]]: + def result(self: Self) -> list[dict[Any, Any]]: """Return result from database as a list of dicts.""" class IsolationLevel(Enum): @@ -221,7 +221,7 @@ class Transaction: async def execute( self: Self, querystring: str, - parameters: List[Any] | None = None, + parameters: list[Any] | None = None, ) -> QueryResult: """Execute the query.
@@ -377,7 +377,7 @@ class Transaction: async def cursor( self: Self, querystring: str, - parameters: List[Any] | None = None, + parameters: list[Any] | None = None, fetch_number: int | None = None, scroll: bool | None = None, ) -> Cursor: @@ -429,7 +429,7 @@ class Connection: async def execute( self: Self, querystring: str, - parameters: List[Any] | None = None, + parameters: list[Any] | None = None, ) -> QueryResult: """Execute the query. @@ -466,6 +466,7 @@ class Connection: self, isolation_level: IsolationLevel | None = None, read_variant: ReadVariant | None = None, + deferable: bool | None = None, ) -> Transaction: """Create new transaction. @@ -522,7 +523,7 @@ class PSQLPool: async def execute( self: Self, querystring: str, - parameters: List[Any] | None = None, + parameters: list[Any] | None = None, ) -> QueryResult: """Execute the query. diff --git a/python/tests/conftest.py b/python/tests/conftest.py index 16236d51..a00c19f3 100644 --- a/python/tests/conftest.py +++ b/python/tests/conftest.py @@ -64,7 +64,7 @@ async def psql_pool( postgres_password: str, postgres_port: int, postgres_dbname: str, -) -> AsyncGenerator[PSQLPool, None]: +) -> PSQLPool: pg_pool = PSQLPool( username=postgres_user, password=postgres_password, @@ -73,7 +73,7 @@ async def psql_pool( db_name=postgres_dbname, ) await pg_pool.startup() - yield pg_pool + return pg_pool @pytest.fixture(autouse=True) diff --git a/python/tests/test_connection.py b/python/tests/test_connection.py index 01084da8..5b54d5bd 100644 --- a/python/tests/test_connection.py +++ b/python/tests/test_connection.py @@ -3,7 +3,7 @@ from psqlpy import PSQLPool, QueryResult, Transaction -@pytest.mark.anyio +@pytest.mark.anyio() async def test_connection_execute( psql_pool: PSQLPool, table_name: str, @@ -19,7 +19,7 @@ async def test_connection_execute( assert len(conn_result.result()) == number_database_records -@pytest.mark.anyio +@pytest.mark.anyio() async def test_connection_transaction( psql_pool: PSQLPool, ) -> None: 
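The stub changes above narrow `execute`'s `parameters` to a plain `list[Any]`, and the README states that placeholders are numbered `$1`, `$2`, and so on. A small illustrative checker (not part of psqlpy; `check_placeholders` is a hypothetical helper) shows the contract — the highest `$<num>` placeholder in a query should match the number of parameters supplied:

```python
import re


def check_placeholders(querystring, parameters=None):
    """Hypothetical helper: verify that the highest $<num> placeholder
    in the query matches len(parameters)."""
    params = parameters if parameters is not None else []
    # Collect every $1, $2, ... placeholder number in the query.
    placeholder_numbers = [int(num) for num in re.findall(r"\$(\d+)", querystring)]
    expected = max(placeholder_numbers, default=0)
    if expected != len(params):
        raise ValueError(
            f"query expects {expected} parameter(s), got {len(params)}"
        )
    return True


print(check_placeholders("SELECT * FROM users WHERE id = $1 AND name = $2", [100, "Alice"]))  # → True
```

The driver itself reports a mismatch as a query error at execution time; a check like this is only a convenient way to see the rule.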
diff --git a/python/tests/test_connection_pool.py b/python/tests/test_connection_pool.py index 70613899..d1f58261 100644 --- a/python/tests/test_connection_pool.py +++ b/python/tests/test_connection_pool.py @@ -3,7 +3,7 @@ from psqlpy import Connection, ConnRecyclingMethod, PSQLPool, QueryResult -@pytest.mark.anyio +@pytest.mark.anyio() async def test_pool_dsn_startup() -> None: """Test that connection pool can startup with dsn.""" pg_pool = PSQLPool( @@ -14,7 +14,7 @@ async def test_pool_dsn_startup() -> None: await pg_pool.execute("SELECT 1") -@pytest.mark.anyio +@pytest.mark.anyio() async def test_pool_execute( psql_pool: PSQLPool, table_name: str, @@ -32,7 +32,7 @@ async def test_pool_execute( assert len(inner_result) == number_database_records -@pytest.mark.anyio +@pytest.mark.anyio() async def test_pool_connection( psql_pool: PSQLPool, ) -> None: @@ -41,7 +41,7 @@ async def test_pool_connection( assert isinstance(connection, Connection) -@pytest.mark.anyio +@pytest.mark.anyio() @pytest.mark.parametrize( "conn_recycling_method", [ diff --git a/python/tests/test_cursor.py b/python/tests/test_cursor.py index 29442da8..ea85621e 100644 --- a/python/tests/test_cursor.py +++ b/python/tests/test_cursor.py @@ -3,7 +3,7 @@ from psqlpy import Cursor -@pytest.mark.anyio +@pytest.mark.anyio() async def test_cursor_fetch( number_database_records: int, test_cursor: Cursor, @@ -13,7 +13,7 @@ async def test_cursor_fetch( assert len(result.result()) == number_database_records // 2 -@pytest.mark.anyio +@pytest.mark.anyio() async def test_cursor_fetch_next( test_cursor: Cursor, ) -> None: @@ -22,7 +22,7 @@ async def test_cursor_fetch_next( assert len(result.result()) == 1 -@pytest.mark.anyio +@pytest.mark.anyio() async def test_cursor_fetch_prior( test_cursor: Cursor, ) -> None: @@ -35,7 +35,7 @@ async def test_cursor_fetch_prior( assert len(result.result()) == 1 -@pytest.mark.anyio +@pytest.mark.anyio() async def test_cursor_fetch_first( test_cursor: Cursor, ) -> None: @@ -49,7 
+49,7 @@ async def test_cursor_fetch_first( assert fetch_first.result() == first.result() -@pytest.mark.anyio +@pytest.mark.anyio() async def test_cursor_fetch_last( test_cursor: Cursor, number_database_records: int, @@ -64,7 +64,7 @@ async def test_cursor_fetch_last( assert all_res.result()[-1] == last_res.result()[0] -@pytest.mark.anyio +@pytest.mark.anyio() async def test_cursor_fetch_absolute( test_cursor: Cursor, number_database_records: int, @@ -85,7 +85,7 @@ async def test_cursor_fetch_absolute( assert all_res.result()[-1] == last_record.result()[0] -@pytest.mark.anyio +@pytest.mark.anyio() async def test_cursor_fetch_relative( test_cursor: Cursor, number_database_records: int, @@ -107,7 +107,7 @@ async def test_cursor_fetch_relative( assert not (records.result()) -@pytest.mark.anyio +@pytest.mark.anyio() async def test_cursor_fetch_forward_all( test_cursor: Cursor, number_database_records: int, @@ -124,7 +124,7 @@ async def test_cursor_fetch_forward_all( ) -@pytest.mark.anyio +@pytest.mark.anyio() async def test_cursor_fetch_backward( test_cursor: Cursor, ) -> None: @@ -142,7 +142,7 @@ async def test_cursor_fetch_backward( assert len(must_not_be_empty.result()) == expected_number_of_results -@pytest.mark.anyio +@pytest.mark.anyio() async def test_cursor_fetch_backward_all( test_cursor: Cursor, ) -> None: diff --git a/python/tests/test_transaction.py b/python/tests/test_transaction.py index 9bc1890d..5e592964 100644 --- a/python/tests/test_transaction.py +++ b/python/tests/test_transaction.py @@ -1,10 +1,49 @@ +from __future__ import annotations +import typing + import pytest -from psqlpy import Cursor, PSQLPool +from psqlpy import Cursor, IsolationLevel, PSQLPool, ReadVariant from psqlpy.exceptions import DBTransactionError -@pytest.mark.anyio +@pytest.mark.anyio() +async def test_transaction_init_parameters(psql_pool: PSQLPool) -> None: + connection = await psql_pool.connection() + + test_init_parameters: typing.Final[list[dict[str, typing.Any]]] = [ + 
{"isolation_level": None, "deferable": None, "read_variant": None}, + { + "isolation_level": IsolationLevel.ReadCommitted, + "deferable": True, + "read_variant": ReadVariant.ReadOnly, + }, + { + "isolation_level": IsolationLevel.ReadUncommitted, + "deferable": False, + "read_variant": ReadVariant.ReadWrite, + }, + { + "isolation_level": IsolationLevel.RepeatableRead, + "deferable": True, + "read_variant": ReadVariant.ReadOnly, + }, + { + "isolation_level": IsolationLevel.Serializable, + "deferable": False, + "read_variant": ReadVariant.ReadWrite, + }, + ] + + for init_parameters in test_init_parameters: + connection.transaction( + isolation_level=init_parameters.get("isolation_level"), + deferable=init_parameters.get("deferable"), + read_variant=init_parameters.get("read_variant"), + ) + + +@pytest.mark.anyio() async def test_transaction_begin( psql_pool: PSQLPool, table_name: str, @@ -28,7 +67,7 @@ async def test_transaction_begin( assert len(result.result()) == number_database_records -@pytest.mark.anyio +@pytest.mark.anyio() async def test_transaction_commit( psql_pool: PSQLPool, table_name: str, @@ -62,7 +101,7 @@ async def test_transaction_commit( assert len(result.result()) -@pytest.mark.anyio +@pytest.mark.anyio() async def test_transaction_savepoint( psql_pool: PSQLPool, table_name: str, @@ -95,7 +134,7 @@ async def test_transaction_savepoint( await transaction.commit() -@pytest.mark.anyio +@pytest.mark.anyio() async def test_transaction_rollback( psql_pool: PSQLPool, table_name: str, @@ -133,7 +172,7 @@ async def test_transaction_rollback( assert not (result_from_conn.result()) -@pytest.mark.anyio +@pytest.mark.anyio() async def test_transaction_release_savepoint( psql_pool: PSQLPool, ) -> None: @@ -156,7 +195,7 @@ async def test_transaction_release_savepoint( await transaction.savepoint(sp_name_1) -@pytest.mark.anyio +@pytest.mark.anyio() async def test_transaction_cursor( psql_pool: PSQLPool, table_name: str, diff --git a/src/driver/connection.rs 
b/src/driver/connection.rs index 9172b31c..34a01090 100644 --- a/src/driver/connection.rs +++ b/src/driver/connection.rs @@ -66,6 +66,7 @@ impl Connection { &self, isolation_level: Option, read_variant: Option, + deferable: Option, ) -> Transaction { let inner_transaction = RustTransaction::new( self.db_client.clone(), @@ -74,6 +75,7 @@ impl Connection { Arc::new(tokio::sync::RwLock::new(HashSet::new())), isolation_level, read_variant, + deferable, Default::default(), ); diff --git a/src/driver/transaction.rs b/src/driver/transaction.rs index 29a20538..96519c62 100644 --- a/src/driver/transaction.rs +++ b/src/driver/transaction.rs @@ -27,10 +27,12 @@ pub struct RustTransaction { isolation_level: Option, read_variant: Option, + deferable: Option, cursor_num: usize, } impl RustTransaction { + #[allow(clippy::too_many_arguments)] pub fn new( db_client: Arc>, is_started: Arc>, @@ -38,6 +40,7 @@ rollback_savepoint: Arc>>, isolation_level: Option, read_variant: Option, + deferable: Option, cursor_num: usize, ) -> Self { Self { @@ -47,6 +50,7 @@ rollback_savepoint, isolation_level, read_variant, + deferable, cursor_num, } } @@ -102,7 +106,9 @@ impl RustTransaction { Ok(PSQLDriverPyQueryResult::new(result)) } - /// Start transaction with isolation level if specified + /// Start transaction + /// Set up isolation level if specified + /// Set up deferable if specified /// /// # Errors /// May return Err Result if cannot execute querystring. @@ -118,6 +124,10 @@ querystring.push_str(format!(" {}", &read_var.to_str_option()).as_str()); } + if let Some(true) = self.deferable { + querystring.push_str("; SET CONSTRAINTS ALL DEFERRED"); + } + let db_client_arc = self.db_client.clone(); let db_client_guard = db_client_arc.read().await;
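The transaction-start assembly in `transaction.rs` can be modeled in Python to make the clause ordering explicit. This is an illustrative sketch, not psqlpy's actual code (`build_begin_statements` and the string arguments are hypothetical): the key points are that `SET CONSTRAINTS ALL DEFERRED` is a standalone SQL statement, so it cannot be concatenated onto the transaction-start string without a separator, and that it should fire only when the deferrable flag is actually `True`, not merely non-`None`:

```python
def build_begin_statements(isolation_level=None, read_variant=None, deferable=None):
    """Illustrative model of assembling the transaction-start SQL.
    Returns a list of statements rather than one concatenated string,
    because SET CONSTRAINTS is a separate statement from BEGIN."""
    begin = "START TRANSACTION"
    if isolation_level is not None:
        begin += f" ISOLATION LEVEL {isolation_level}"
    if read_variant is not None:
        begin += f" {read_variant}"
    statements = [begin]
    if deferable:  # only when explicitly True, not merely "not None"
        statements.append("SET CONSTRAINTS ALL DEFERRED")
    return statements


print(build_begin_statements("SERIALIZABLE", "READ ONLY", True))
# → ['START TRANSACTION ISOLATION LEVEL SERIALIZABLE READ ONLY', 'SET CONSTRAINTS ALL DEFERRED']
```

Keeping the statements separate (or joining them with `"; "`) avoids producing invalid SQL such as `... READ ONLYSET CONSTRAINTS ALL DEFERRED` when several options are combined.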