Python: A SQLAlchemy Wrapper Component That Works With Both Flask and FastAPI Frameworks

In late 2022, I developed a database wrapper component for SQLAlchemy. Initially designed for use with the Flask framework, it turned out to integrate seamlessly with the FastAPI framework as well. In this post, I describe this component and provide examples of how it is used within both frameworks.

This component is based on a concept I previously implemented using Delphi and later PHP. Essentially, it consists of base classes representing database tables, equipped with the ability to interact with databases and implement generic functionalities for CRUD operations.

While learning the Flask framework, I found the conventional approach of accessing the database layer through the Flask application instance uncomfortable. In my previous projects, the database layer has always remained independent of other layers. The business layer ensures data validity and then delegates CRUD operations to the database layer. The UI layer, whether a desktop application or web client, communicates solely with the business layer, never directly accessing the database layer.
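That separation can be sketched in plain Python. The class and method names below are hypothetical, chosen only to illustrate the layering; they are not the component's actual API:

```python
# Hypothetical sketch of the three-layer separation: the UI layer talks only
# to the business layer, which validates and then delegates to the database layer.

class EmployeesTable:
    """Database layer: knows how to run CRUD operations (stubbed here)."""
    def insert(self, record: dict) -> dict:
        # A real implementation would execute SQL; here we just echo success.
        return {"status": "ok", "data": record}

class EmployeesManager:
    """Business layer: validates data, then delegates to the database layer."""
    def __init__(self):
        self._table = EmployeesTable()

    def create(self, record: dict) -> dict:
        if not record.get("last_name"):
            return {"status": "error", "text": "last_name is required"}
        return self._table.insert(record)

# The UI layer (web controller, desktop form, ...) calls only the business layer:
manager = EmployeesManager()
print(manager.create({"last_name": "Nguyen", "first_name": "Van"}))
print(manager.create({"first_name": "Van"}))
```

The point is the direction of the calls: the UI layer never touches `EmployeesTable` directly.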

In SQLAlchemy, models representing database tables typically subclass sqlalchemy.orm.DeclarativeBase (this class supersedes the sqlalchemy.orm.declarative_base function). Accordingly, the abstract base class in this database wrapper component is a sqlalchemy.orm.DeclarativeBase subclass, accompanied by another custom base class providing additional dunder methods.

Further subclasses of this abstract base class implement additional functionalities. Application models inherit from either the ReadOnlyTable or the WriteCapableTable base classes.

Application models are required to implement their own specific database reading methods: for example, selecting all customers with the surname Nguyễn.

The Database class is responsible for establishing connections to the target database. Once the database connection is established, application models can interact with the target database.

🚀 The full documentation can be found at https://bh-database.readthedocs.io/en/latest/.

Next, we will explore some examples. 💥 The first two are simple, single-module web server applications in which the web layer accesses the database layer directly. Although not ideal, this keeps the illustrations simple.

The latter two examples include complete business layers, where submitted data is validated before being passed to the database layer for CRUD operations.

โถ example.py: A Simple Single-Module Flask Application.

โ— Windows 10: F:\bh_database\examples\flaskr\example.py
โ— Ubuntu 22.10: /home/behai/bh_database/examples/flaskr/example.py

from sqlalchemy import (
    Column,
    Integer,
    Date,
    String,
)

import flask

from bh_database.core import Database
from bh_database.base_table import WriteCapableTable

from bh_apistatus.result_status import ResultStatus

SQLALCHEMY_DATABASE_SCHEMA = 'employees'
SQLALCHEMY_DATABASE_URI = 'mysql+mysqlconnector://root:pcb.2176310315865259@localhost:3306/employees'
# Enable this for PostgreSQL.
# SQLALCHEMY_DATABASE_URI = 'postgresql+psycopg2://postgres:pcb.2176310315865259@localhost/employees'

class Employees(WriteCapableTable):
    __tablename__ = 'employees'

    emp_no = Column(Integer, primary_key=True)
    birth_date = Column(Date, nullable=False)
    first_name = Column(String(14), nullable=False)
    last_name = Column(String(16), nullable=False)
    gender = Column(String(1), nullable=False)
    hire_date = Column(Date, nullable=False)

    def select_by_partial_last_name_and_first_name(self, 
            last_name: str, first_name: str) -> ResultStatus:
        
        return self.run_stored_proc('get_employees', [last_name, first_name], True)

def create_app(config=None):
    """Construct the core application."""

    app = flask.Flask(__name__, instance_relative_config=False)

    init_extensions(app)
    
    init_app_database(app)

    return app
    
def init_extensions(app):
    app.url_map.strict_slashes = False

def init_app_database(app):    
    Database.disconnect()
    Database.connect(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_DATABASE_SCHEMA)

app = create_app()

@app.get('/employees/search/<last_name>/<first_name>')
def search_employees(last_name: str, first_name: str) -> dict:
    """ last_name and first_name are partial using %.

    An example of a valid route: http://localhost:5000/employees/search/%nas%/%An
    """

    return Employees() \
        .select_by_partial_last_name_and_first_name(last_name, first_name) \
        .as_dict()

if __name__ == '__main__':
    app.run()

To execute the example.py application:

โ–ถ๏ธWindows 10: (venv) F:\bh_database\examples\flaskr>venv\Scripts\flask.exe --app example run --host 0.0.0.0 --port 5000
โ–ถ๏ธUbuntu 22.10: (venv) behai@hp-pavilion-15:~/bh_database/examples/flaskr$ venv/bin/flask --app example run --host 0.0.0.0 --port 5000

Accessing the example.py application running locally from Windows 10:

http://localhost:5000/employees/search/%nas%/%An

Accessing the example.py application running on Ubuntu 22.10 from Windows 10:

http://192.168.0.16:5000/employees/search/%nas%/%An
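Note the literal % wildcards in these URLs. When building such a request programmatically, it is safer to percent-encode each path segment; a small sketch with Python's standard library, using the example route above:

```python
from urllib.parse import quote

# SQL LIKE wildcards for partial last name and first name, as in the route above.
last_name, first_name = "%nas%", "%An"

# Percent-encode each path segment so the literal '%' survives transport.
url = ("http://localhost:5000/employees/search/"
       f"{quote(last_name, safe='')}/{quote(first_name, safe='')}")
print(url)  # http://localhost:5000/employees/search/%25nas%25/%25An
```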

โท example.py: A Simple Single-Module FastAPI Application.

โ— Windows 10: F:\bh_database\examples\fastapir\example.py
โ— Ubuntu 22.10: /home/behai/bh_database/examples/fastapir/example.py

from sqlalchemy import (
    Column,
    Integer,
    Date,
    String,
)

from fastapi import FastAPI
from fastapi.responses import JSONResponse

from bh_database.core import Database
from bh_database.base_table import WriteCapableTable

from bh_apistatus.result_status import ResultStatus

SQLALCHEMY_DATABASE_SCHEMA = 'employees'
SQLALCHEMY_DATABASE_URI = 'mysql+mysqlconnector://root:pcb.2176310315865259@localhost:3306/employees'
# Enable this for PostgreSQL.
# SQLALCHEMY_DATABASE_URI = 'postgresql+psycopg2://postgres:pcb.2176310315865259@localhost/employees'

class Employees(WriteCapableTable):
    __tablename__ = 'employees'

    emp_no = Column(Integer, primary_key=True)
    birth_date = Column(Date, nullable=False)
    first_name = Column(String(14), nullable=False)
    last_name = Column(String(16), nullable=False)
    gender = Column(String(1), nullable=False)
    hire_date = Column(Date, nullable=False)

    def select_by_partial_last_name_and_first_name(self, 
            last_name: str, first_name: str) -> ResultStatus:
        
        return self.run_stored_proc('get_employees', [last_name, first_name], True)

app = FastAPI()

Database.disconnect()
Database.connect(SQLALCHEMY_DATABASE_URI, SQLALCHEMY_DATABASE_SCHEMA)

@app.get("/employees/search/{last_name}/{first_name}", response_class=JSONResponse)
async def search_employees(last_name: str, first_name: str):
    """ last_name and first_name are partial using %.

    An example of a valid route: http://localhost:5000/employees/search/%nas%/%An
    """

    return Employees() \
        .select_by_partial_last_name_and_first_name(last_name, first_name) \
        .as_dict()

To execute the example.py application:

โ–ถ๏ธWindows 10: (venv) F:\bh_database\examples\fastapir>venv\Scripts\uvicorn.exe example:app --host 0.0.0.0 --port 5000
โ–ถ๏ธUbuntu 22.10: (venv) behai@hp-pavilion-15:~/bh_database/examples/fastapir$ venv/bin/uvicorn example:app --host 0.0.0.0 --port 5000

Accessing the example.py application running locally from Windows 10:

http://localhost:5000/employees/search/%nas%/%An

Accessing the example.py application running on Ubuntu 22.10 from Windows 10:

http://192.168.0.16:5000/employees/search/%nas%/%An

โธ A more comprehensive Flask application: a fully documented web server example with CRUD operations.

Please refer to https://github.com/behai-nguyen/bh_database/tree/main/examples/flaskr for the full source code, instructions on setting up the environment, installing packages, running tests, and finally running the application.

The layout of the example project is as follows:

/home/behai/bh_database/examples/flaskr
├── app.py
├── .env
├── pyproject.toml
├── pytest.ini
├── README.md
├── src
│   └── flaskr
│       ├── business
│       │   ├── app_business.py
│       │   ├── base_business.py
│       │   ├── base_validation.py
│       │   ├── employees_mgr.py
│       │   └── employees_validation.py
│       ├── config.py
│       ├── controllers
│       │   └── employees_admin.py
│       ├── __init__.py
│       ├── models
│       │   └── employees.py
│       ├── static
│       │   └── styles.css
│       └── templates
│           ├── admin
│           │   ├── emp_edit.html
│           │   ├── emp_search.html
│           │   └── emp_search_result.html
│           └── base.html
└── tests
    ├── business
    │   └── test_employees_mgr.py
    ├── conftest.py
    ├── __init__.py
    ├── integration
    │   └── test_employees_itgt.py
    └── unit
        └── test_employees.py

โน A more comprehensive FastAPI application: a fully documented web server example with CRUD operations.

Please refer to https://github.com/behai-nguyen/bh_database/tree/main/examples/fastapir for the full source code, instructions on setting up the environment, installing packages, running tests, and finally running the application.

The layout of the example project is as follows:

/home/behai/bh_database/examples/fastapir
├── .env
├── main.py
├── pyproject.toml
├── pytest.ini
├── README.md
├── src
│   └── fastapir
│       ├── business
│       │   ├── app_business.py
│       │   ├── base_business.py
│       │   ├── base_validation.py
│       │   ├── employees_mgr.py
│       │   └── employees_validation.py
│       ├── config.py
│       ├── controllers
│       │   ├── employees_admin.py
│       │   └── __init__.py
│       ├── __init__.py
│       ├── models
│       │   └── employees.py
│       ├── static
│       │   └── styles.css
│       └── templates
│           ├── admin
│           │   ├── emp_edit.html
│           │   ├── emp_search.html
│           │   └── emp_search_result.html
│           └── base.html
└── tests
    ├── business
    │   └── test_employees_mgr.py
    ├── conftest.py
    ├── __init__.py
    ├── integration
    │   └── test_employees_itgt.py
    └── unit
        └── test_employees.py

💥 Except for the framework-specific layer code, the remaining code in these two examples is very similar.

Let’s briefly discuss their similarities:

  • /models and /business code are identical. They could be shared across both examples, but I prefer to keep each example self-contained.
  • /tests/unit and /tests/business code are identical.

And there are differences in the following areas:

  • /controllers: This is the web layer, which is framework-specific, so understandably they are different.
  • /tests/integration: The sole difference is framework-specific: how the HTTP response value is extracted:
    • Flask: response.get_data(as_text=True)
    • FastAPI: response.text
  • /tests/conftest.py: This file is framework-dependent. Both modules return the same fixtures, but the code has nothing in common.
  • /templates/base.html: There is one difference:
    • Flask: <link rel="stylesheet" href="{{ url_for('static', filename='styles.css') }}">
    • FastAPI: <link rel="stylesheet" href="{{ url_for('static', path='/styles.css') }}">

    That is, Flask uses filename, while FastAPI uses path.

The /controllers layer is thin in the sense that the code is fairly short; it simply takes the client-submitted data and passes it to the business layer to handle the work. The business layer then forwards the validated data to the database layer, and so on. The differences between the two implementations are minor.

It has been an interesting exercise developing this wrapper component. The fact that it seamlessly integrates with the FastAPI framework is just a bonus for me; I didn’t plan for it since I hadn’t learned FastAPI at the time. I hope you find this post useful. Thank you for reading, and stay safe as always.

✿✿✿

Feature image source:

Rust: actix-web endpoints which accept both application/x-www-form-urlencoded and application/json content types.

We’re implementing a login process for our actix-web learning application. We undertake some general updates to get ready to support login. We then implement a new /api/login route, which supports both application/x-www-form-urlencoded and application/json content types. In this post, we only implement deserialising the submitted request data, then echo some response. We also add a login page via route /ui/login.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.5.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the following four (4) previous posts:

  1. Rust web application: MySQL server, sqlx, actix-web and tera.
  2. Rust: learning actix-web middleware 01.
  3. Rust: retrofit integration tests to an existing actix-web application.
  4. Rust: adding actix-session and actix-identity to an existing actix-web application.

The code we’re developing in this post is a continuation of the code from the fourth post above. 🚀 To get the code of this fourth post, please use the following command:

git clone -b v0.4.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.4.0.

As already mentioned in the introduction above, the main focus of the login process in this post is deserialising both application/x-www-form-urlencoded and application/json into a struct ready to support login. I struggled with this a little, so I am documenting it as part of my Rust learning journey.

This post introduces a few new modules, some MySQL migration scripts, and a new login HTML page. The updated directory layout for the project is in the screenshot below:


โถ Update Rust to the latest version. At the time of this post, the latest version is 1.75.0. The command to update:

โ–ถ๏ธWindows 10: rustup update
โ–ถ๏ธUbuntu 22.10: $ rustup update

We took CORS into account when we started this project in the first post.

I’m not quite certain what happened, but all of a sudden, it just rejects requests with the message Origin is not allowed to make this request.

— Browsers have been updated, perhaps?

Unable to troubleshoot the problem, and seeing that actix-cors was at version 0.7.0, I updated it.

— This new version of actix-cors does not work with Rust 1.74.0, hence the Rust update above; it seems to fix the request rejection issue.

โท Update the employees table, adding new fields email and password.

We use the migration tool SQLx CLI, which we covered in Rust SQLx CLI: database migration with MySQL and PostgreSQL, to update the employees table.

While inside the new directory migrations/mysql/ (see the project directory layout above), create the empty migration files 99999999999999_emp_email_pwd.up.sql and 99999999999999_emp_email_pwd.down.sql using the command:

โ–ถ๏ธWindows 10: sqlx migrate add -r emp_email_pwd
โ–ถ๏ธUbuntu 22.10: $ sqlx migrate add -r emp_email_pwd

Populate the two script files with what we would like to do; please see their contents on GitHub. To apply the migration, run the command below; it will take a little while to complete:

▶️Windows 10: sqlx migrate run
▶️Ubuntu 22.10: $ sqlx migrate run

โธ Update src/models.rs to manage new fields employees.email and employees.password.

If we run cargo test now, all integration tests should fail. All integration tests eventually call get_employees(...), which does a select * from employees.... Since the two new fields have been added at specific ordinal positions, the field indexes used in get_employees(...) are out of order.
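This failure mode is easy to reproduce outside the project. The sketch below uses Python's built-in sqlite3 as a stand-in for the MySQL table (an illustration of the general problem, not the project's code); it also shows why name-based access is immune:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (emp_no INTEGER, last_name TEXT)")
conn.execute("INSERT INTO employees VALUES (10001, 'Facello')")
row = conn.execute("SELECT * FROM employees").fetchone()
assert row[1] == "Facello"  # positional access: index 1 is last_name

# Recreate the table with a new column in the middle, as the MySQL migration
# effectively does: the positional indexes of later columns shift.
conn.execute("DROP TABLE employees")
conn.execute("CREATE TABLE employees (emp_no INTEGER, email TEXT, last_name TEXT)")
conn.execute("INSERT INTO employees VALUES (10001, 'a@b.c', 'Facello')")
row = conn.execute("SELECT * FROM employees").fetchone()
assert row[1] == "a@b.c"  # the old index 1 now points at email

# Name-based access does not break when columns are reordered:
conn.row_factory = sqlite3.Row
row = conn.execute("SELECT * FROM employees").fetchone()
assert row["last_name"] == "Facello"
```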

Module src/models.rs gets the following updates:

  1. pub email: String field added to struct Employee.
  2. pub async fn get_employees(...) updated to read Employee.email field. Other fields’ indexes also get updated.
  3. New pub struct EmployeeLogin.
  4. New pub async fn select_employee(...), which optionally selects an employee based on an exact email match.
  5. New pub struct LoginSuccess.
  6. Add "email": "siamak.bernardeschi.67115@gmail.com" to existing tests.

Please see the updated src/models.rs on GitHub. The documentation should be sufficient to help reading the code.

โน New module src/auth_handlers.rs, where new login routes /ui/login and /api/login are implemented.

โ— http://0.0.0.0:5000/ui/login is a GET route, which just returns the login.html page as HTML.

โ— http://0.0.0.0:5000/api/login is a POST route. This is effectively the application login handler.

💥 This http://0.0.0.0:5000/api/login route is the main focus of this post:

— Its handler method accepts both application/x-www-form-urlencoded and application/json content types, and deserialises the byte stream to struct EmployeeLogin mentioned above.

💥 Please also note that, as already mentioned, the login process in this post does not actually log the user in: if the submitted data is successfully deserialised, the handler simply echoes a confirmation response in the format of the request content type; if deserialisation fails, it sends back a JSON response containing an error code and a text message.

Examples of valid submitted data for each content type:

โœ”๏ธ Content type: application/x-www-form-urlencoded; data: email=chirstian.koblick.10004@gmail.com&password=password.

โœ”๏ธ Content type: application/json; data: {"email": "chirstian.koblick.10004@gmail.com", "password": "password"}.

#[post("/login")]
pub async fn login(
    request: HttpRequest,
    body: Bytes
) -> HttpResponse {
...
    // Attempts to extract -- deserialising -- request body into EmployeeLogin.
    let api_status = extract_employee_login(&body, request.content_type());
    // Failed to deserialise request body. Returns the error as is.
    if api_status.is_err() {
        return HttpResponse::Ok()
            .content_type(ContentType::json())
            .body(serde_json::to_string(&api_status.err().unwrap()).unwrap());
    }

    // Succeeded to deserialise request body.
    let emp_login: EmployeeLogin = api_status.unwrap();
...	

Note the second parameter body, which is actix_web::web::Bytes: this is the byte-stream representation of the request body.

As an extractor, actix_web::web::Bytes has been mentioned in the section Type-safe information extraction | Other. We’re providing our own implementation to do the deserialisation: method extract_employee_login(...) in the new module src/helper/endpoint.rs.

pub fn extract_employee_login(
    body: &Bytes, 
    content_type: &str
) -> Result<EmployeeLogin, ApiStatus> {
...
    extractors.push(Extractor { 
        content_type: mime::APPLICATION_WWW_FORM_URLENCODED.to_string(), 
        handler: |body: &Bytes| -> Result<EmployeeLogin, ApiStatus> {
            match from_bytes::<EmployeeLogin>(&body.to_owned().to_vec()) {
                Ok(e) => Ok(e),
                Err(e) => Err(ApiStatus::new(err_code_500()).set_text(&e.to_string()))
            }
        }
    });
...
    extractors.push(Extractor {
        content_type: mime::APPLICATION_JSON.to_string(),
        handler: |body: &Bytes| -> Result<EmployeeLogin, ApiStatus> {
            // From https://stackoverflow.com/a/67340858
            match serde_json::from_slice(&body.to_owned()) {
                Ok(e) => Ok(e),
                Err(e) => Err(ApiStatus::new(err_code_500()).set_text(&e.to_string()))
            }
        }
    });

For application/x-www-form-urlencoded content type, we call method serde_html_form::from_bytes(…) from (new) crate serde_html_form to deserialise the byte stream to EmployeeLogin.

Cargo.toml has been updated to include crate serde_html_form.

And for the application/json content type, we call serde_json::from_slice(…) from the already included serde_json crate to do the work.
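The dispatch-on-content-type idea translates directly to other languages; here is a Python analogue using only the standard library (a sketch of the technique, not a translation of the Rust code):

```python
import json
from urllib.parse import parse_qs

def extract_employee_login(body: bytes, content_type: str) -> dict:
    """Deserialise the raw request body according to its content type."""
    if content_type.startswith("application/x-www-form-urlencoded"):
        # parse_qs returns lists of values; keep the first value of each field.
        fields = parse_qs(body.decode("utf-8"))
        return {k: v[0] for k, v in fields.items()}
    if content_type.startswith("application/json"):
        return json.loads(body)
    raise ValueError(f"unsupported content type: {content_type}")

form = extract_employee_login(b"email=a%40b.c&password=pw",
                              "application/x-www-form-urlencoded")
js = extract_employee_login(b'{"email": "a@b.c", "password": "pw"}',
                            "application/json")
assert form == js == {"email": "a@b.c", "password": "pw"}
```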

These are the essential details of the code. The rest is fairly straightforward, and there is also sufficient documentation to aid the reading of the code.

💥 Please also note that there are some more new modules, such as src/bh_libs/api_status.rs and src/helper/messages.rs; they are very small, self-explanatory, and have sufficient documentation where appropriate.

โบ Register new login routes /ui/login and /api/login.

Updated src/lib.rs:
pub async fn run(listener: TcpListener) -> Result<Server, std::io::Error> {
...
            .service(
                web::scope("/ui")
                    .service(handlers::employees_html1)
                    .service(handlers::employees_html2)
                    .service(auth_handlers::login_page)
                    // .service(auth_handlers::home_page),
            )
            .service(
                web::scope("/api")
                    .service(auth_handlers::login)
            )
            .service(
                web::resource("/helloemployee/{last_name}/{first_name}")
                    .wrap(middleware::SayHi)
                    .route(web::get().to(handlers::hi_first_employee_found))
            )
...			

โป The last addition, the new templates/auth/login.html.

Please note, this login page is HTML only; there is no CSS at all. It looks like a dog’s breakfast, but it does work. There is no client-side validation either.

The Login button POSTs login requests to http://0.0.0.0:5000/api/login, the content type then is application/x-www-form-urlencoded.

For application/json content type, we can use Testfully. (We could also write our own AJAX requests to test.)

โผ As this is not yet the final version of the login process, we’re not writing any integration tests for it yet. We’ll do so in due course…

⓵ For the time being, we’ve written some new code and the associated unit tests. We have also written some documentation examples. A full run of cargo test should have all tests pass.

⓶ Manual tests of the new routes.

In the following two successful tests, I run the application server on an Ubuntu 22.10 machine, and run both the login page and Testfully on Windows 10.

Test application/x-www-form-urlencoded submission via login page:

Test application/json submission using Testfully:

In the following failure test, I run the application server and Testfully on Windows 10. The submitted application/json data does not have an email field:

It’s been an interesting exercise for me. My understanding of Rust has improved a little. I hope you find the information in this post useful. Thank you for reading, and stay safe as always.

✿✿✿

Feature image source:

🦀 Index of the Complete Series.

Rust web application: MySQL server, sqlx, actix-web and tera.

We write a Rust web application using a MySQL database. We use the already familiar crate sqlx for database work. The web framework we’re using is crate actix-web. For Cross-Origin Resource Sharing (CORS) controls, we use crate actix-cors. For HTML template processing, we use crate tera, which implements the Jinja2 template syntax.

🦀 Index of the Complete Series.

The test project built in this post will have the following routes:

  • JSON response route http://0.0.0.0:5000/data/employees — method: POST; content type: application/json; request body: {"last_name": "%chi", "first_name": "%ak"}.
  • JSON response route http://0.0.0.0:5000/data/employees/%chi/%ak — method GET.
  • HTML response route http://0.0.0.0:5000/ui/employees — method: POST; content type: application/x-www-form-urlencoded; charset=UTF-8; request body: last_name=%chi&first_name=%ak.
  • HTML response route http://0.0.0.0:5000/ui/employees/%chi/%ak — method: GET.

This post does not discuss authentication, i.e. login: as this is a learning journey for me, I’m deliberately avoiding this potentially complex subject; perhaps we’ll look into it in the future.


The Database and MySQL Database Server

โถ We’ll use the same Oracle Corporation MySQL test database, the same employees table and the same get_employees stored procedure; which we’ve used in other Rust and none-Rust posts.

To recap, the employees table has the following structure:

CREATE TABLE `employees` (
  `emp_no` int NOT NULL,
  `birth_date` date NOT NULL,
  `first_name` varchar(14) COLLATE utf8mb4_unicode_ci NOT NULL,
  `last_name` varchar(16) COLLATE utf8mb4_unicode_ci NOT NULL,
  `gender` enum('M','F') COLLATE utf8mb4_unicode_ci NOT NULL,
  `hire_date` date NOT NULL,
  PRIMARY KEY (`emp_no`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

And the get_employees stored procedure is:

DELIMITER $$
CREATE DEFINER=`root`@`%` PROCEDURE `get_employees`( pmLastName varchar(16), pmFirstName varchar(14) )
    READS SQL DATA
begin
  select * from employees e where (e.last_name like pmLastName)
    and (e.first_name like pmFirstName) order by e.emp_no;
end$$
DELIMITER ;

โท The MySQL server used is a Docker container discussed in the following posts:

Crates Used In This Post

Let’s discuss the crates, to get them out of the way.

Crates Which We Have Not Covered Before

โ— Crate actix-web — this’s the web development framework that I choose to learn. I’m aware of several others. I choose this one due to the comparison presented by the following sites:

Crate actix-web ranks top based on popularity and supported features. For beginners, the official tutorial can be a bit daunting, at least it was for me. But after completing all the tutorials, we should have an overall understanding of this crate. It’s a good investment.

โ— The example code in this post applies Cross-Origin Resource Sharing (CORS) controls. This Amazon Web Services article What is Cross-Origin Resource Sharing? offers a very good explanation of what CORS is.

Crate actix-cors — this is a middleware which implements CORS controls for actix-web. Take note of the given Example: the example code copies this implementation as is, and we also call pub fn supports_credentials(self) -> Cors to make the implementation a bit more secure:

        let cors = Cors::default()
            .allowed_origin(&config.allowed_origin)
            .allowed_methods(vec!["GET", "POST"])
            .allowed_headers(vec![
                header::CONTENT_TYPE,
                header::AUTHORIZATION,
                header::ACCEPT,
            ])
            .max_age(config.max_age)
            .supports_credentials();
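The essence of what this configuration enforces can be sketched as a simple predicate (a deliberate simplification, assuming a single allowed origin as in the configuration above; actix-cors also checks headers, handles credentials and caches preflight results):

```python
def preflight_allowed(origin: str, method: str,
                      allowed_origin: str = "http://localhost",
                      allowed_methods: tuple = ("GET", "POST")) -> bool:
    """Roughly the allow/deny decision a CORS middleware makes for a preflight."""
    return origin == allowed_origin and method in allowed_methods

assert preflight_allowed("http://localhost", "POST")         # allowed origin and method
assert not preflight_allowed("http://evil.example", "POST")  # wrong origin
assert not preflight_allowed("http://localhost", "DELETE")   # method not allowed
```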

โ— Crate tera — this’s a template processing engine middleware. It’s based on the Jinja2 engine, which I’am familiar with. There’re several crates which were Jinja2-based. I choose this one due to the comparison presented by the following site Compare askama and tera’s popularity and activity, and tera seems to be more popular, and the documentation page offers a usage example, although it’s a bit weak.

The page Day 15 – tera offers a more comprehensive example, whereby a Rust vector is passed to the template. Please note, the code is a bit outdated, but together with the official example, we can make it work with little effort.

This GitHub page Tera has an example of how a template renders data passed in.

โ— Crate dotenv — supports .env file. We’ll use the code presented in the Example page.

We’ve used the .env file before, in this post Rust SQLx CLI: database migration with MySQL and PostgreSQL.

Crates Which We Have Covered Before

In addition to the above new crates, we also use some of the crates which we have used before: time, sqlx, async-std, serde and serde_json.

Among other previous posts, we’ve covered these crates in the following posts, listed from most recent to least recent:

The Example Code

The complete source code for this post is on GitHub. The code for this post has been tagged with v0.1.0. To get the code at this tag, i.e. the code for this post, run the command:

git clone -b v0.1.0 https://github.com/behai-nguyen/rust_web_01.git

The layout of the project is shown in the screenshot below:

To keep it simple, all modules live in the src/ directory.

The Cargo.toml File

The Cargo.toml file includes all crates we’ve discussed in the previous section. View the content of Cargo.toml on GitHub. We’ve covered crate features in some other previous posts.

The .env File

We store some configuration items in the .env file. Its content is reproduced below:

MAX_CONNECTIONS=15
DATABASE_URL=mysql://root:pcb.2176310315865259@localhost:3306/employees

ALLOWED_ORIGIN=http://localhost
MAX_AGE=3600

  • MAX_CONNECTIONS: the maximum total number of database connections in the pool.
  • DATABASE_URL: MySQL database connection string. We’ve seen this connection string in other Rust posts.
  • ALLOWED_ORIGIN: CORS’ Access-Control-Allow-Origin. This value can be a list, but to keep the example simple, we use only one.
  • MAX_AGE: CORS’ Access-Control-Max-Age. This value specifies the duration, in seconds, that the preflight results can be cached in the browser. When this duration elapses, the browser will need to send another preflight request.
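The .env format itself is just KEY=VALUE lines. A minimal Python sketch of a loader, for readers unfamiliar with it (real loaders such as crate dotenv additionally handle quoting, comments and exports):

```python
def parse_env(text: str) -> dict:
    """Minimal .env parser: one KEY=VALUE per line; blank and comment lines ignored."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")  # split on the first '=' only
        result[key] = value
    return result

env = parse_env("MAX_CONNECTIONS=15\nALLOWED_ORIGIN=http://localhost\nMAX_AGE=3600")
print(env["MAX_CONNECTIONS"])  # 15
```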

The templates/employees.html File

The templates/employees.html file is on GitHub. This is a Jinja2 template. The Rust code passes it the employees vector, where each element is a JSON object.

If there is at least one element in the vector, we display the list of employees in an HTML table: we first render the header row, then enter the for loop to display each employee. It’s a simple template; there isn’t any CSS.
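The template's logic can be mimicked in plain Python, which may help readers who have not seen Jinja2 before (an illustration of the rendering flow, not the Tera template itself):

```python
def render_employees(employees: list) -> str:
    """Header row first, then one table row per employee, as the template does."""
    if not employees:
        return "<p>No employees found.</p>"
    rows = ["<tr><th>emp_no</th><th>last_name</th><th>first_name</th></tr>"]
    for e in employees:
        rows.append(f"<tr><td>{e['emp_no']}</td>"
                    f"<td>{e['last_name']}</td><td>{e['first_name']}</td></tr>")
    return "<table>" + "".join(rows) + "</table>"

html = render_employees([{"emp_no": 10001, "last_name": "Facello", "first_name": "Georgi"}])
assert "<td>Facello</td>" in html
```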

The Rust Code

To recap, please note that: the code for this post has been tagged with v0.1.0.

Except for src/main.rs, all other module files should have sufficient documentation to explain the code. To view the documentation in a browser, change to the project root directory (i.e. where the Cargo.toml file is) on the terminal command line, and run the following command:

cargo doc --open

But I think it’s easier just to open the file and read it!

src/config.rs

This is the runtime representation of the .env file. We define a struct with fields that match the corresponding entries in the .env file, and a function to load the field values from the file into the struct.

src/utils.rs

It contains only a single short module, mod australian_date_format, which serialises MySQL dates into the Australian date format dd/mm/yyyy. It also appears in Rust & MySQL: JSON serialisation of result sets retrieved using crate sqlx, where it is named mod my_date_format.
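For reference, the same formatting takes a single strftime call in Python (mirroring the behaviour described, not the Rust module itself):

```python
from datetime import date

def to_australian(d: date) -> str:
    """Format a date as dd/mm/yyyy, as mod australian_date_format does
    when serialising dates to JSON on the Rust side."""
    return d.strftime("%d/%m/%Y")

print(to_australian(date(1964, 6, 2)))  # 02/06/1964
```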

src/database.rs

The intention is that this module is responsible for database connections. In a proper application, I imagine it would be a substantial module, but for this project there is only a single method, get_mysql_pool: it attempts to connect to the target MySQL database; if successful, it prints a success message to the console and returns the pool; otherwise, it prints some failure messages and terminates the application.

src/models.rs

This module is about the employees table. Please note, struct Employee as defined in this module, has also been used in the previously mentioned post Rust & MySQL: JSON serialisation of result sets retrieved using crate sqlx.

I imagine that, in a proper application, this would be a models/ sub-directory instead of a single file. Underneath this models/ directory, there would be the actual employees.rs module among other modules for other tables, and each module would have all the relevant functions that operate on its target database table.

For this project, we have only a single pub async fn get_employees(...), which attempts to retrieve data from the employees table based on partial last name and partial first name. This function is a refactored version of async fn do_run_stored_proc(), which appears in the previously mentioned post Rust & MySQL: JSON serialisation of result sets retrieved using crate sqlx, and in some other earlier posts.

src/handlers.rs

This is where all the HTTP request handler methods are. In a Model-View-Controller pattern, I think this would be the controller. Again, in a proper application, there would be several of these modules, and handlers/ or controllers/ would be a project sub-directory where these modules live. In Python, I have seen others use controllers/, and I follow that convention, too.

We have four (4) methods to handle POST and GET requests. These methods demonstrate the most common and basic tasks any web application usually implements: ⓵ accepting POST requests in JSON and x-www-form-urlencoded, and returning a response in either JSON or HTML; ⓶ accepting GET requests where data are in request paths, and returning a response in either JSON or HTML.

  1. pub async fn employees_json1(...): accepts POST requests in JSON, and returns responses in JSON.
  2. pub async fn employees_json2(...): accepts GET requests where data are in request paths, and returns responses in JSON.
  3. pub async fn employees_html1(...): accepts POST requests in x-www-form-urlencoded, and returns responses in HTML.
  4. pub async fn employees_html2(...): accepts GET requests where data are in request paths, and returns responses in HTML.

We have not covered query string requests; they are not much different from the cases covered above.

For the latter two (2) HTML response methods, we could have written the final response as:

    HttpResponse::Ok()
        .body(render_employees_template(&query_result))    

Instead of:

    HttpResponse::Ok()
        .content_type("text/html; charset=utf-8")
        .body(render_employees_template(&query_result))

The Content-Type header might get ignored. I include it just to remember how we can set it, in case we need to.

Finally, the private worker fn render_employees_template(...) is just a copy of examples from other articles, in the section where we discuss crate tera.

src/main.rs

We use the application state to persist database connection across the application:

...
pub struct AppState {
    db: Pool<MySql>,
}
...
            .app_data(web::Data::new(AppState {
                db: pool.clone()
            }))
...

This code is just about an exact copy of the code from the above official documentation 😂.

The CORS code has been discussed in the section on crate actix-cors.

In the src/handlers.rs module, the root route for all handler methods is /employees. We use route scoping to route methods that return JSON to /data/employees, and methods that return HTML to /ui/employees:

...
            .service(
                web::scope("/data")
                    .service(handlers::employees_json1)
                    .service(handlers::employees_json2),
            )
            .service(
                web::scope("/ui")
                    .service(handlers::employees_html1)
                    .service(handlers::employees_html2),
            )
...

I would like to be able to visually test this project across my home network, so I bind it to 0.0.0.0. Port 5000 just happens to be the port that I allocate to test work-in-progress projects.
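Binding to 0.0.0.0 works because it is the unspecified address, meaning all network interfaces, which is what makes the server reachable from other machines on the network. A tiny illustration using only the standard library:

```rust
use std::net::SocketAddr;

fn main() {
    // 0.0.0.0 is the "unspecified" IPv4 address: listen on all interfaces.
    // Binding to 127.0.0.1 instead would only accept local connections.
    let addr: SocketAddr = "0.0.0.0:5000".parse().unwrap();
    println!("unspecified: {}, port: {}", addr.ip().is_unspecified(), addr.port());
}
```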

Some Visual Test Runs

For some reason, Postman reports 403 Forbidden for some of the routes, while these routes are fine in browsers. I use Testfully instead.

192.168.0.16 is the address of my Ubuntu 22.10 machine; I run the tests from my Windows 10 Pro machine.

โถ http://192.168.0.16:5000/data/employees

โท http://192.168.0.16:5000/data/employees/%chi/%ak

โธ http://192.168.0.16:5000/ui/employees

โน http://192.168.0.16:5000/ui/employees/%chi/%ak

โบ http://192.168.0.16:5000/ui/employees/%yแป…n/%Hai%

Concluding Remarks

We have not written any tests for the modules in this project. It is my intention to do so in the near future, but I am not certain I can actually see it through. I have written tests before, for example, in the first Rust post, and in some later ones.

We have covered only some very basic features of crate actix-web. It is worth studying the tutorials provided by the official documentation.

I hope you find this post useful. Thank you for reading and stay safe as always.

✿✿✿


🦀 Index of the Complete Series.

Rust SQLx CLI: database migration with MySQL and PostgreSQL.

Database migration is the process of developing and refactoring the application database as part of the application development process. SQLx CLI is Rust crate sqlx's associated command-line utility for managing this migration process. In this post, we are looking at how to use this CLI for both MySQL and PostgreSQL, on both Windows 10 and Ubuntu 22.10.

After trying SQLx CLI out, I would say that we could actually use this CLI as a generic tool for managing database development for applications written in languages other than Rust. I have grown really fond of this CLI.


SQLx CLI Installation Requires OpenSSL

The SQLx CLI documentation does not mention it, but it seems that the CLI does require OpenSSL. I only found this out after a failed attempt to install it.

Prior to installing SQLx CLI on Windows 10, I had a problem with not having OpenSSL installed for some other Rust Actix Web project.

The answer by riteshkumargiitian in the GitHub issue Ubuntu 18: failed to run custom build command for openssl-sys v0.9.39 helps. To install, run:

C:\>choco install openssl

The installation directory is C:\Program Files\OpenSSL-Win64. For full detail logs, please see Windows 10 OpenSSL Installation Logs.

The next step is to set the environment variable OPENSSL_DIR. Note that there are no double quotes ("") around the directory path:

C:\>set OPENSSL_DIR=C:\Program Files\OpenSSL-Win64

This is the state of my Windows 10 machine when I install SQLx CLI; the installation just succeeds the first time.

On Ubuntu 22.10, the first installation attempt does not go through; it needs OpenSSL. For full detailed logs, please see Ubuntu 22.10 SQLx CLI Failure Installation Logs.

To resolve this, install pkg-config, which the openssl-sys build uses to locate the system OpenSSL:

$ sudo apt install pkg-config

It should succeed. For full detail logs, please see Ubuntu 22.10 OpenSSL Installation Logs.

SQLx CLI Installation

We install SQLx CLI with cargo. I already have Rust installed on both OSes. It does not matter which directory we run the installation command from.

On Windows 10:

C:\>cargo install sqlx-cli

On Ubuntu 22.10:

$ cargo install sqlx-cli

For the full Ubuntu 22.10 successful installation logs, please see Ubuntu 22.10 SQLx CLI Success Installation Logs.

Only after the installation are we informed where the CLI executables are; the documentation does not mention it beforehand:

โ— On Windows 10: C:\Users\behai\.cargo\bin\. And C:\Users\behai\.cargo size on disk is about 1.32GB.

โ— On Ubuntu 22.10: /home/behai/.cargo/bin/. And /home/behai/.cargo is about 1.00GB.

Database Migration with SQLx CLI

We will look at both MySQL and PostgreSQL; the illustrated examples are on Ubuntu 22.10. On Windows 10, the commands are the same.

The MySQL server used is a Docker container discussed in Docker on Ubuntu 22.10: running mysql:8.0.34-debian with custom config, socket, database and log files reside on host machine.

The PostgreSQL server is also a Docker container, discussed in Using PostgreSQL Official Docker image on Windows 10 and Ubuntu 22.10 kinetic.

Database Connection

SQLx CLI Usage discusses both the .env file and the command-line option --database-url. We will use the .env file; later on, we could add more run-time info to this file as required by the application. I think the .env file should sit in the same directory as the Cargo.toml file.

MySQL Server

โถ We don’t actually need an existing Rust project to start a database migration process. But I create one for this run. So I’ll repeat it in this post.

While under /home/behai/rust, create a new project sqlx-mysql-migration, and change to the project directory:

$ cargo new sqlx-mysql-migration
$ cd sqlx-mysql-migration/

โท Create the .env file.

Content of /home/behai/rust/sqlx-mysql-migration/.env:
DATABASE_URL=mysql://root:pcb.2176310315865259@localhost:3306/membership

We have seen the value of DATABASE_URL in other Rust code before, for example, in Rust & MySQL: connect, execute SQL statements and stored procs using crate sqlx, where it is mysql://root:pcb.2176310315865259@localhost:3306/employees. In this example, we are working with a membership database.
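The URL follows the usual scheme://user:password@host:port/database shape. As a rough illustration of those pieces, here is a naive splitter; it is my own sketch, not project code (sqlx parses the URL internally), it uses a placeholder password, and it would break on passwords containing @ or /:

```rust
// Naive, illustrative-only breakdown of a database URL into
// (scheme, user, host:port, database). Returns None on malformed input.
fn split_database_url(url: &str) -> Option<(&str, &str, &str, &str)> {
    let (scheme, rest) = url.split_once("://")?;
    let (credentials, host_part) = rest.split_once('@')?;
    let (user, _password) = credentials.split_once(':')?;
    let (host, database) = host_part.split_once('/')?;
    Some((scheme, user, host, database))
}

fn main() {
    // "secret" is a placeholder, not a real password.
    let url = "mysql://root:secret@localhost:3306/membership";
    println!("{:?}", split_database_url(url));
}
```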

โธ Now that we have the DATABASE_URL ready, and of course the target MySQL server is running, we create the database with:

$ sqlx database create

If there is no problem, there is no output. We can use MySQL Workbench to verify that the database has been created.

We can drop the database with:

$ sqlx database drop

It’ll ask for confirmation. Again, we can use MySQL Workbench to verify that the database has been dropped.

Please note, from this point on, we need the database to exist.

โน Create the first database script. The command takes the format:

$ sqlx migrate add -r <name>

For <name>, let’s call it init. The command is then:

$ sqlx migrate add -r init

The output is:

Creating migrations/20231008021418_init.up.sql
Creating migrations/20231008021418_init.down.sql

Congratulations on creating your first migration!

Did you know you can embed your migrations in your application binary?
On startup, after creating your database connection or pool, add:

sqlx::migrate!().run(<&your_pool OR &mut your_connection>).await?;

Note that the compiler won't pick up new migrations if no Rust source files have changed.
You can create a Cargo build script to work around this with `sqlx migrate build-script`.

See: https://docs.rs/sqlx/0.5/sqlx/macro.migrate.html

A sub-directory named migrations/ has been created; in it there are two (2) empty files: 20231008021418_init.up.sql and 20231008021418_init.down.sql.

After we make some changes to the application database, we might want to revert these changes for some reason. The 20231008021418_init.up.sql script file is where we write the SQL statements to update the database. The script file 20231008021418_init.down.sql is where we write the SQL statements to undo what 20231008021418_init.up.sql does.

Content of migrations/20231008021418_init.up.sql:
-- 08/10/2023.

ALTER DATABASE `membership` DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

DROP TABLE IF EXISTS `member`;
CREATE TABLE `member` (
  `id` int NOT NULL,
  `email` VARCHAR(255) NOT NULL,
  `password` varchar(100) NOT NULL,
  `birth_date` date NOT NULL,
  `first_name` varchar(32) NOT NULL,
  `last_name` varchar(32) NOT NULL,
  `gender` enum('M','F') NOT NULL,
  `joined_date` date NOT NULL,
  `created_date` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  PRIMARY KEY (`id`),
  UNIQUE KEY `email_unique` (`email`)
);

Basically, we alter some properties of the membership database, then create the first table: member.

Content of migrations/20231008021418_init.down.sql:
-- 08/10/2023.

DROP TABLE IF EXISTS `member`;

We undo what the up script does: we remove the member table from the database.
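The timestamp prefix in the generated file names is what keeps migrations ordered: sorting the names lexicographically also sorts them chronologically. A small sketch of that idea (my own illustration, not project or sqlx code):

```rust
// Extract the yyyymmddhhmmss prefix from a migration file name,
// e.g. "20231008021418_init.up.sql" -> "20231008021418".
fn migration_timestamp(file_name: &str) -> Option<&str> {
    let (prefix, _) = file_name.split_once('_')?;
    prefix.chars().all(|c| c.is_ascii_digit()).then_some(prefix)
}

fn main() {
    let mut files = vec![
        "20231008081406_stage1.up.sql",
        "20231008021418_init.up.sql",
    ];
    // Lexicographic order of the timestamp prefix is chronological order,
    // so a plain sort puts init before stage1.
    files.sort();
    println!("{:?}", files);
}
```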

โบ Let’s run the init migration. The command is:

$ sqlx migrate run

The output is brief:

Applied 20231008021418/migrate init (2.419887162s)

MySQL Workbench shows that the member table has been created, and that there is also an additional table, _sqlx_migrations, with one (1) row in it; this row should be self-explanatory.

Let’s just undo that. The command is:

$ sqlx migrate revert

The output is:

Applied 20231008021418/revert init (810.615999ms)

Verify that the member table has been removed, and that the table _sqlx_migrations is empty.

โป Let’s create another database update script, we name it stage1:

$ sqlx migrate add -r stage1

Content of migrations/20231008081406_stage1.up.sql:
-- 08/10/2023.

DROP TABLE IF EXISTS `duration`;
CREATE TABLE `duration` (
  `id` smallint NOT NULL,
  `months` smallint NOT NULL,
  `expiry_date` date NULL,
  PRIMARY KEY (`id`)
) comment='Membership duration in months.';

insert into duration ( id, months )
values
    ( 1, 6 ),
    ( 2, 12 ),
    ( 3, 18 ),
    ( 4, 24 );

Content of migrations/20231008081406_stage1.down.sql:
-- 08/10/2023.

DROP TABLE IF EXISTS `duration`;

Apply stage1 with command:

$ sqlx migrate run

Assuming that init has been applied, the output is:

Applied 20231008081406/migrate stage1 (1.742237765s)

Table _sqlx_migrations should now contain two (2) entries. Table duration should also have been created.

โผ Let’s undo:

$ sqlx migrate revert

Output:

Applied 20231008081406/revert stage1 (488.29367ms)

Let’s do another undo:

$ sqlx migrate revert

Output:

Applied 20231008021418/revert init (445.333376ms)

We can see that the last update gets undone first. Also, the membership database is now empty, and the table _sqlx_migrations has no entries.
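This newest-first behaviour means the applied migrations effectively form a stack: each run pushes entries in timestamp order, and each revert pops the most recent one. A toy model of that behaviour (my own sketch, not how _sqlx_migrations is actually implemented):

```rust
// Toy model: the list of applied migrations behaves like a stack,
// so `sqlx migrate revert` undoes the newest migration first.
fn next_revert<'a>(applied: &mut Vec<&'a str>) -> Option<&'a str> {
    applied.pop()
}

fn main() {
    let mut applied = vec!["20231008021418_init", "20231008081406_stage1"];
    println!("{:?}", next_revert(&mut applied)); // stage1 comes off first
    println!("{:?}", next_revert(&mut applied)); // then init
    // After both reverts, nothing remains applied.
    assert!(applied.is_empty());
}
```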

PostgreSQL Server

The process for PostgreSQL databases is identical to MySQL databases.

โถ Create a new directory for this migration.

While under /home/behai/rust, create a new sub-directory sqlx-postgresql-migration, and move to this new sub-directory:

$ mkdir sqlx-postgresql-migration
$ cd sqlx-postgresql-migration

โท The .env file.

Content of /home/behai/rust/sqlx-postgresql-migration/.env:
DATABASE_URL=postgresql://postgres:pcb.2176310315865259@localhost:5432/membership?schema=public

💥 Please note: the parameter schema=public might cause the error:

ERROR: no schema has been selected to create in

In the PostgreSQL server config file postgresql.conf, ensure that the search_path entry value includes public. For example:

search_path = 'ompdev1, "$user", public'	# schema names

โธ Database creation and removal commands are the same, respectively as:

$ sqlx database create
$ sqlx database drop

We can use a client tool such as pgAdmin 4 to verify that the membership database has been created on the target server.

Please note, from this point on, we need the database to exist.

โน Create the first init database script:

$ sqlx migrate add -r init

Content of migrations/20231008104430_init.up.sql:
-- 08/10/2023.

DROP TYPE IF EXISTS genders;

CREATE TYPE genders AS ENUM ('M', 'F');

DROP TABLE IF EXISTS "member";

CREATE TABLE "member" (
  id integer NOT NULL PRIMARY KEY,
  email VARCHAR(255) NOT NULL UNIQUE,
  password varchar(100) NOT NULL,
  birth_date date NOT NULL,
  first_name varchar(32) NOT NULL,
  last_name varchar(32) NOT NULL,
  gender genders NOT NULL,
  joined_date date NOT NULL,
  created_date timestamp with time zone NOT NULL
);

CREATE INDEX member_email_idx ON member (email);

Content of migrations/20231008104430_init.down.sql:
-- 08/10/2023.

DROP TABLE IF EXISTS "member";
DROP TYPE IF EXISTS genders;

โบ Let’s also create the second database script stage1:

$ sqlx migrate add -r stage1

Content of migrations/20231008111030_stage1.up.sql:
-- 08/10/2023.

DROP TABLE IF EXISTS "duration";
CREATE TABLE "duration" (
  id smallint NOT NULL PRIMARY KEY,
  months smallint NOT NULL,
  expiry_date date NULL
);

COMMENT ON TABLE duration IS 'Membership duration in months.';

insert into duration ( id, months )
values
    ( 1, 6 ),
    ( 2, 12 ),
    ( 3, 18 ),
    ( 4, 24 );

Content of migrations/20231008111030_stage1.down.sql:
-- 08/10/2023.

DROP TABLE IF EXISTS "duration";

โป Commands to apply and to undo are as discussed before:

$ sqlx migrate run
$ sqlx migrate revert

If we now apply the migration scripts, the output is:

Applied 20231008104430/migrate init (121.005913ms)
Applied 20231008111030/migrate stage1 (55.043293ms)

And tables, including _sqlx_migrations, are created under schema public as shown in the screenshot below:

So, barring database-specific syntax differences, the process is identical for MySQL and PostgreSQL.

Thank you for reading, and I do hope you find this post useful. Stay safe as always.

✿✿✿


✿✿✿

Windows 10 OpenSSL Installation Logs

C:\>choco install openssl
Chocolatey v1.2.0
Installing the following packages:
openssl
By installing, you accept licenses for the packages.

openssl v3.1.1 [Approved]
openssl package files install completed. Performing other installation steps.
The package openssl wants to run 'chocolateyinstall.ps1'.
Note: If you don't run this script, the installation will fail.
Note: To confirm automatically next time, use '-y' or consider:
choco feature enable -n allowGlobalConfirmation
Do you want to run the script?([Y]es/[A]ll - yes to all/[N]o/[P]rint): Y

Installing 64-bit openssl...
openssl has been installed.
WARNING: No registry key found based on  'OpenSSL-Win'
PATH environment variable does not have C:\Program Files\OpenSSL-Win64\bin in it. Adding...
WARNING: OPENSSL_CONF has been set to C:\Program Files\OpenSSL-Win64\bin\openssl.cfg
  openssl can be automatically uninstalled.
Environment Vars (like PATH) have changed. Close/reopen your shell to
 see the changes (or in powershell/cmd.exe just type `refreshenv`).
 The install of openssl was successful.
  Software installed to 'C:\Program Files\OpenSSL-Win64\'

Chocolatey installed 1/1 packages.
 See the log for details (C:\ProgramData\chocolatey\logs\chocolatey.log).

Ubuntu 22.10 OpenSSL Installation Logs

behai@hp-pavilion-15:~$ pwd

It is:

/home/behai

Full installation log:

behai@hp-pavilion-15:~$ sudo apt install pkg-config
[sudo] password for behai:
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
The following NEW packages will be installed:
  pkg-config
0 to upgrade, 1 to newly install, 0 to remove and 1 not to upgrade.
Need to get 48.2 kB of archives.
After this operation, 134 kB of additional disk space will be used.
Get:1 http://au.archive.ubuntu.com/ubuntu kinetic/main amd64 pkg-config amd64 0.29.2-1ubuntu3 [48.2 kB]
Fetched 48.2 kB in 0s (697 kB/s)
Selecting previously unselected package pkg-config.
(Reading database ... 237915 files and directories currently installed.)
Preparing to unpack .../pkg-config_0.29.2-1ubuntu3_amd64.deb ...
Unpacking pkg-config (0.29.2-1ubuntu3) ...
Setting up pkg-config (0.29.2-1ubuntu3) ...
Processing triggers for man-db (2.10.2-2) ...
behai@hp-pavilion-15:~$

Ubuntu 22.10 SQLx CLI Failure Installation Logs

behai@hp-pavilion-15:~/rust/sqlx-mysql-migration$ cargo install sqlx-cli
    Updating crates.io index
  Downloaded sqlx-cli v0.7.2
  Downloaded 1 crate (94.4 KB) in 0.73s
  Installing sqlx-cli v0.7.2
    Updating crates.io index
  Downloaded openssl-macros v0.1.1
  Downloaded fd-lock v3.0.13
  Downloaded native-tls v0.2.11
  Downloaded byteorder v1.5.0
  Downloaded md-5 v0.10.6
  Downloaded openssl-probe v0.1.5
  Downloaded endian-type v0.1.2
  Downloaded nibble_vec v0.1.0
  Downloaded errno v0.3.4
  Downloaded tokio-macros v2.1.0
  Downloaded sqlx-macros v0.7.2
  Downloaded anstyle v1.0.4
  Downloaded memchr v2.6.4
  Downloaded unicode-width v0.1.11
  Downloaded futures-macro v0.3.28
  Downloaded cargo_metadata v0.14.2
  Downloaded cargo-platform v0.1.4
  Downloaded anstyle-parse v0.2.2
  Downloaded semver v1.0.19
  Downloaded sha1 v0.10.6
  Downloaded num_cpus v1.16.0
  Downloaded fastrand v2.0.1
  Downloaded promptly v0.3.1
  Downloaded anstream v0.6.4
  Downloaded thiserror v1.0.49
  Downloaded sqlx-macros-core v0.7.2
  Downloaded backoff v0.4.0
  Downloaded libm v0.2.8
  Downloaded thiserror-impl v1.0.49
  Downloaded sha2 v0.10.8
  Downloaded tokio-stream v0.1.14
  Downloaded futures v0.3.28
  Downloaded clap_complete v4.4.3
  Downloaded num-traits v0.2.17
  Downloaded unicode-ident v1.0.12
  Downloaded sqlx-mysql v0.7.2
  Downloaded clap v4.4.6
  Downloaded typenum v1.17.0
  Downloaded socket2 v0.5.4
  Downloaded indexmap v2.0.2
  Downloaded sqlx-sqlite v0.7.2
  Downloaded openssl-sys v0.9.93
  Downloaded flume v0.11.0
  Downloaded sqlx-postgres v0.7.2
  Downloaded rustyline v9.1.2
  Downloaded sqlx-core v0.7.2
  Downloaded hashbrown v0.14.1
  Downloaded sqlx v0.7.2
  Downloaded clap_builder v4.4.6
  Downloaded chrono v0.4.31
  Downloaded radix_trie v0.2.1
  Downloaded nix v0.23.2
  Downloaded openssl v0.10.57
  Downloaded rustix v0.38.17
  Downloaded proc-macro2 v1.0.68
  Downloaded smallvec v1.11.1
  Downloaded syn v2.0.38
  Downloaded libc v0.2.149
  Downloaded tokio v1.32.0
  Downloaded linux-raw-sys v0.4.8
  Downloaded 60 crates (6.6 MB) in 2.06s (largest was `linux-raw-sys` at 1.4 MB)
   Compiling proc-macro2 v1.0.68
   Compiling unicode-ident v1.0.12
   Compiling libc v0.2.149
   Compiling autocfg v1.1.0
   Compiling cfg-if v1.0.0
   Compiling version_check v0.9.4
   Compiling serde v1.0.188
   Compiling quote v1.0.33
   Compiling typenum v1.17.0
   Compiling syn v2.0.38
   Compiling generic-array v0.14.7
   Compiling const-oid v0.9.5
   Compiling getrandom v0.2.10
   Compiling cc v1.0.83
   Compiling pkg-config v0.3.27
   Compiling vcpkg v0.2.15
   Compiling futures-core v0.3.28
   Compiling libm v0.2.8
   Compiling smallvec v1.11.1
   Compiling crypto-common v0.1.6
   Compiling block-buffer v0.10.4
   Compiling num-traits v0.2.17
   Compiling once_cell v1.18.0
   Compiling subtle v2.5.0
   Compiling digest v0.10.7
   Compiling memchr v2.6.4
   Compiling pin-project-lite v0.2.13
   Compiling serde_derive v1.0.188
   Compiling openssl-sys v0.9.93
   Compiling lock_api v0.4.10
   Compiling ahash v0.8.3
   Compiling slab v0.4.9
   Compiling tinyvec_macros v0.1.1
   Compiling futures-sink v0.3.28
   Compiling scopeguard v1.2.0
error: failed to run custom build command for `openssl-sys v0.9.93`

Caused by:
  process didn't exit successfully: `/tmp/cargo-installCJHaz3/release/build/openssl-sys-9774aa6210709b08/build-script-main` (exit status: 101)
  --- stdout
  cargo:rerun-if-env-changed=X86_64_UNKNOWN_LINUX_GNU_OPENSSL_LIB_DIR
  X86_64_UNKNOWN_LINUX_GNU_OPENSSL_LIB_DIR unset
  cargo:rerun-if-env-changed=OPENSSL_LIB_DIR
  OPENSSL_LIB_DIR unset
  cargo:rerun-if-env-changed=X86_64_UNKNOWN_LINUX_GNU_OPENSSL_INCLUDE_DIR
  X86_64_UNKNOWN_LINUX_GNU_OPENSSL_INCLUDE_DIR unset
  cargo:rerun-if-env-changed=OPENSSL_INCLUDE_DIR
  OPENSSL_INCLUDE_DIR unset
  cargo:rerun-if-env-changed=X86_64_UNKNOWN_LINUX_GNU_OPENSSL_DIR
  X86_64_UNKNOWN_LINUX_GNU_OPENSSL_DIR unset
  cargo:rerun-if-env-changed=OPENSSL_DIR
  OPENSSL_DIR unset
  cargo:rerun-if-env-changed=OPENSSL_NO_PKG_CONFIG
  cargo:rerun-if-env-changed=PKG_CONFIG_x86_64-unknown-linux-gnu
  cargo:rerun-if-env-changed=PKG_CONFIG_x86_64_unknown_linux_gnu
  cargo:rerun-if-env-changed=HOST_PKG_CONFIG
  cargo:rerun-if-env-changed=PKG_CONFIG
  cargo:rerun-if-env-changed=OPENSSL_STATIC
  cargo:rerun-if-env-changed=OPENSSL_DYNAMIC
  cargo:rerun-if-env-changed=PKG_CONFIG_ALL_STATIC
  cargo:rerun-if-env-changed=PKG_CONFIG_ALL_DYNAMIC
  cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64-unknown-linux-gnu
  cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64_unknown_linux_gnu
  cargo:rerun-if-env-changed=HOST_PKG_CONFIG_PATH
  cargo:rerun-if-env-changed=PKG_CONFIG_PATH
  cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64-unknown-linux-gnu
  cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64_unknown_linux_gnu
  cargo:rerun-if-env-changed=HOST_PKG_CONFIG_LIBDIR
  cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR
  cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64-unknown-linux-gnu
  cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64_unknown_linux_gnu
  cargo:rerun-if-env-changed=HOST_PKG_CONFIG_SYSROOT_DIR
  cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR
  cargo:rerun-if-env-changed=OPENSSL_STATIC
  cargo:rerun-if-env-changed=OPENSSL_DYNAMIC
  cargo:rerun-if-env-changed=PKG_CONFIG_ALL_STATIC
  cargo:rerun-if-env-changed=PKG_CONFIG_ALL_DYNAMIC
  cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64-unknown-linux-gnu
  cargo:rerun-if-env-changed=PKG_CONFIG_PATH_x86_64_unknown_linux_gnu
  cargo:rerun-if-env-changed=HOST_PKG_CONFIG_PATH
  cargo:rerun-if-env-changed=PKG_CONFIG_PATH
  cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64-unknown-linux-gnu
  cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR_x86_64_unknown_linux_gnu
  cargo:rerun-if-env-changed=HOST_PKG_CONFIG_LIBDIR
  cargo:rerun-if-env-changed=PKG_CONFIG_LIBDIR
  cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64-unknown-linux-gnu
  cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR_x86_64_unknown_linux_gnu
  cargo:rerun-if-env-changed=HOST_PKG_CONFIG_SYSROOT_DIR
  cargo:rerun-if-env-changed=PKG_CONFIG_SYSROOT_DIR
  run pkg_config fail: Could not run `PKG_CONFIG_ALLOW_SYSTEM_CFLAGS="1" "pkg-config" "--libs" "--cflags" "openssl"`
  The pkg-config command could not be found.

  Most likely, you need to install a pkg-config package for your OS.
  Try `apt install pkg-config`, or `yum install pkg-config`,
  or `pkg install pkg-config`, or `apk add pkgconfig` depending on your distribution.

  If you've already installed it, ensure the pkg-config command is one of the
  directories in the PATH environment variable.

  If you did not expect this build to link to a pre-installed system library,
  then check documentation of the openssl-sys crate for an option to
  build the library from source, or disable features or dependencies
  that require pkg-config.

  --- stderr
  thread 'main' panicked at '

  Could not find directory of OpenSSL installation, and this `-sys` crate cannot
  proceed without this knowledge. If OpenSSL is installed and this crate had
  trouble finding it,  you can set the `OPENSSL_DIR` environment variable for the
  compilation process.

  Make sure you also have the development packages of openssl installed.
  For example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.

  If you're in a situation where you think the directory *should* be found
  automatically, please open a bug at https://github.com/sfackler/rust-openssl
  and include information about your system as well as this message.

  $HOST = x86_64-unknown-linux-gnu
  $TARGET = x86_64-unknown-linux-gnu
  openssl-sys = 0.9.93


  It looks like you're compiling on Linux and also targeting Linux. Currently this
  requires the `pkg-config` utility to find OpenSSL but unfortunately `pkg-config`
  could not be found. If you have OpenSSL installed you can likely fix this by
  installing `pkg-config`.

  ', /home/behai/.cargo/registry/src/index.crates.io-6f17d22bba15001f/openssl-sys-0.9.93/build/find_normal.rs:190:5
  note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: failed to compile `sqlx-cli v0.7.2`, intermediate artifacts can be found at `/tmp/cargo-installCJHaz3`
behai@hp-pavilion-15:~/rust/sqlx-mysql-migration$

Ubuntu 22.10 SQLx CLI Success Installation Logs

behai@hp-pavilion-15:~/rust/sqlx-mysql-migration$ cargo install sqlx-cli
    Updating crates.io index
  Installing sqlx-cli v0.7.2
    Updating crates.io index
   Compiling proc-macro2 v1.0.68
   Compiling unicode-ident v1.0.12
   Compiling libc v0.2.149
   Compiling autocfg v1.1.0
   Compiling cfg-if v1.0.0
   Compiling version_check v0.9.4
   Compiling serde v1.0.188
   Compiling typenum v1.17.0
   Compiling quote v1.0.33
   Compiling syn v2.0.38
   Compiling generic-array v0.14.7
   Compiling const-oid v0.9.5
   Compiling getrandom v0.2.10
   Compiling cc v1.0.83
   Compiling futures-core v0.3.28
   Compiling vcpkg v0.2.15
   Compiling pkg-config v0.3.27
   Compiling libm v0.2.8
   Compiling smallvec v1.11.1
   Compiling crypto-common v0.1.6
   Compiling block-buffer v0.10.4
   Compiling num-traits v0.2.17
   Compiling subtle v2.5.0
   Compiling once_cell v1.18.0
   Compiling digest v0.10.7
   Compiling pin-project-lite v0.2.13
   Compiling memchr v2.6.4
   Compiling openssl-sys v0.9.93
   Compiling lock_api v0.4.10
   Compiling ahash v0.8.3
   Compiling slab v0.4.9
   Compiling futures-task v0.3.28
   Compiling tinyvec_macros v0.1.1
   Compiling scopeguard v1.2.0
   Compiling futures-channel v0.3.28
   Compiling futures-sink v0.3.28
   Compiling serde_derive v1.0.188
   Compiling tinyvec v1.6.0
   Compiling rand_core v0.6.4
   Compiling log v0.4.20
   Compiling futures-util v0.3.28
   Compiling parking_lot_core v0.9.8
   Compiling unicode-normalization v0.1.22
   Compiling futures-macro v0.3.28
   Compiling tokio-macros v2.1.0
   Compiling mio v0.8.8
   Compiling num_cpus v1.16.0
   Compiling socket2 v0.5.4
   Compiling foreign-types-shared v0.1.1
   Compiling unicode-bidi v0.3.13
   Compiling serde_json v1.0.107
   Compiling bytes v1.5.0
   Compiling allocator-api2 v0.2.16
   Compiling base64ct v1.6.0
   Compiling zeroize v1.6.0
   Compiling pin-utils v0.1.0
   Compiling openssl v0.10.57
   Compiling ppv-lite86 v0.2.17
   Compiling futures-io v0.3.28
   Compiling crossbeam-utils v0.8.16
   Compiling rand_chacha v0.3.1
   Compiling pem-rfc7468 v0.7.0
   Compiling hashbrown v0.14.1
   Compiling tokio v1.32.0
   Compiling bitflags v2.4.0
   Compiling either v1.9.0
   Compiling foreign-types v0.3.2
   Compiling openssl-macros v0.1.1
   Compiling num-integer v0.1.45
   Compiling utf8parse v0.2.1
   Compiling ryu v1.0.15
   Compiling byteorder v1.5.0
   Compiling crossbeam-queue v0.3.8
   Compiling minimal-lexical v0.2.1
   Compiling itoa v1.0.9
   Compiling paste v1.0.14
   Compiling cpufeatures v0.2.9
   Compiling native-tls v0.2.11
   Compiling thiserror v1.0.49
   Compiling percent-encoding v2.3.0
   Compiling form_urlencoded v1.2.0
   Compiling nom v7.1.3
   Compiling parking_lot v0.12.1
   Compiling itertools v0.11.0
   Compiling der v0.7.8
   Compiling rand v0.8.5
   Compiling idna v0.4.0
   Compiling tracing-attributes v0.1.26
   Compiling thiserror-impl v1.0.49
   Compiling tracing-core v0.1.31
   Compiling num-iter v0.1.43
   Compiling spin v0.5.2
   Compiling unicode_categories v0.1.1
   Compiling equivalent v1.0.1
   Compiling openssl-probe v0.1.5
   Compiling crc-catalog v2.2.0
   Compiling crc v3.0.1
   Compiling sqlformat v0.2.2
   Compiling tracing v0.1.37
   Compiling indexmap v2.0.2
   Compiling lazy_static v1.4.0
   Compiling url v2.4.1
   Compiling spki v0.7.2
   Compiling futures-intrusive v0.5.0
   Compiling tokio-stream v0.1.14
   Compiling sha2 v0.10.8
   Compiling hashlink v0.8.4
   Compiling atoi v2.0.0
   Compiling memoffset v0.6.5
   Compiling num-bigint-dig v0.8.4
   Compiling event-listener v2.5.3
   Compiling dotenvy v0.15.7
   Compiling rustix v0.38.17
   Compiling hex v0.4.3
   Compiling pkcs8 v0.10.2
   Compiling sqlx-core v0.7.2
   Compiling anstyle-parse v0.2.2
   Compiling hmac v0.12.1
   Compiling libsqlite3-sys v0.26.0
   Compiling colorchoice v1.0.0
   Compiling anstyle v1.0.4
   Compiling linux-raw-sys v0.4.8
   Compiling finl_unicode v1.2.0
   Compiling anstyle-query v1.0.0
   Compiling anstream v0.6.4
   Compiling stringprep v0.1.4
   Compiling hkdf v0.12.3
   Compiling pkcs1 v0.7.5
   Compiling sha1 v0.10.6
   Compiling futures-executor v0.3.28
   Compiling signature v2.1.0
   Compiling spin v0.9.8
   Compiling md-5 v0.10.6
   Compiling nibble_vec v0.1.0
   Compiling dirs-sys-next v0.1.2
   Compiling base64 v0.21.4
   Compiling unicode-width v0.1.11
   Compiling camino v1.1.6
   Compiling clap_lex v0.5.1
   Compiling bitflags v1.3.2
   Compiling strsim v0.10.0
   Compiling semver v1.0.19
   Compiling endian-type v0.1.2
   Compiling whoami v1.4.1
   Compiling heck v0.4.1
   Compiling radix_trie v0.2.1
   Compiling clap_derive v4.4.2
   Compiling clap_builder v4.4.6
   Compiling nix v0.23.2
   Compiling dirs-next v2.0.0
   Compiling flume v0.11.0
   Compiling rsa v0.9.2
   Compiling fd-lock v3.0.13
   Compiling async-trait v0.1.73
   Compiling unicode-segmentation v1.10.1
   Compiling home v0.5.5
   Compiling anyhow v1.0.75
   Compiling sqlx-postgres v0.7.2
   Compiling rustyline v9.1.2
   Compiling sqlx-mysql v0.7.2
   Compiling clap v4.4.6
   Compiling cargo-platform v0.1.4
   Compiling instant v0.1.12
   Compiling iana-time-zone v0.1.57
   Compiling chrono v0.4.31
   Compiling backoff v0.4.0
   Compiling cargo_metadata v0.14.2
   Compiling clap_complete v4.4.3
   Compiling promptly v0.3.1
   Compiling console v0.15.7
   Compiling futures v0.3.28
   Compiling filetime v0.2.22
   Compiling glob v0.3.1
   Compiling sqlx-sqlite v0.7.2
   Compiling sqlx v0.7.2
   Compiling sqlx-cli v0.7.2
    Finished release [optimized] target(s) in 3m 19s
  Installing /home/behai/.cargo/bin/cargo-sqlx
  Installing /home/behai/.cargo/bin/sqlx
   Installed package `sqlx-cli v0.7.2` (executables `cargo-sqlx`, `sqlx`)
behai@hp-pavilion-15:~/rust/sqlx-mysql-migration$

Rust & MySQL: JSON serialisation of result sets retrieved using crate sqlx.

We run a MySQL stored procedure that returns a result set containing date columns. Using the serde and serde_json crates, we serialise this result set into a JSON array of objects, with the date columns in the Australian dd/mm/yyyy format.

This post is an extension of Rust & MySQL: connect, execute SQL statements and stored procs using crate sqlx. We’ll use the same Oracle Corporation MySQL test database, the same employees table and the same get_employees stored procedure.

To recap, the employees table has the following structure:

CREATE TABLE `employees` (
  `emp_no` int NOT NULL,
  `birth_date` date NOT NULL,
  `first_name` varchar(14) COLLATE utf8mb4_unicode_ci NOT NULL,
  `last_name` varchar(16) COLLATE utf8mb4_unicode_ci NOT NULL,
  `gender` enum('M','F') COLLATE utf8mb4_unicode_ci NOT NULL,
  `hire_date` date NOT NULL,
  PRIMARY KEY (`emp_no`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

And the get_employees stored procedure is:

DELIMITER $$
CREATE DEFINER=`root`@`%` PROCEDURE `get_employees`( pmLastName varchar(16), pmFirstName varchar(14) )
    READS SQL DATA
begin
  select * from employees e where (e.last_name like pmLastName)
    and (e.first_name like pmFirstName) order by e.emp_no;
end$$
DELIMITER ;

The code extends the final version of the code in the Select data via running a stored procedure section of the post mentioned above:

🚀 In this post, instead of manually formatting and printing each row of data, we serialise the entire result set into JSON and print the JSON data out in a single operation.
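The target date format can be sketched without any crates. The format_au_date helper below is hypothetical, purely to show the dd/mm/yyyy shape we are aiming for; in the real example the time crate's format_description! macro does this work.

```rust
// Hypothetical helper showing the Australian date format the post
// targets: zero-padded day and month, separated by slashes.
fn format_au_date(year: i32, month: u8, day: u8) -> String {
    format!("{:02}/{:02}/{}", day, month, year)
}

fn main() {
    // 24 November 1999 renders as 24/11/1999.
    println!("{}", format_au_date(1999, 11, 24));
}
```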

The updated dependencies of the Cargo.toml file used in this post:

...
[dependencies]
async-std = "1.12.0"
sqlx = {version = "0.7", default-features = false, features = ["runtime-async-std", "macros", "mysql", "time"]}
time = {version = "0.3.22", default-features = false, features = ["formatting", "macros", "serde"]}
serde = {version = "1.0.188", features = ["derive"]}
serde_json = "1.0.107"

โ— For time crate, we add serde crate feature, so that we can do date serialising.

โ— We add two more crates for serialising and deserialising: serde and serde_json.

The complete working example is presented below.

Content of src/main.rs:
use sqlx::{FromRow, Pool, MySql, Error, MySqlPool, Row};
use sqlx::types::time::Date;
use async_std::task;
use serde::Serialize;

#[derive(FromRow, Serialize)]
struct Employee {    
    emp_no: i32,
    #[serde(with = "my_date_format")]
    birth_date: Date,
    first_name: String,
    last_name: String,    
    gender: String,
    #[serde(with = "my_date_format")]
    hire_date: Date,
}

mod my_date_format {
    use sqlx::types::time::Date;
    use time::macros::format_description;
    use serde::{self, Serializer};

    pub fn serialize<S>(
        date: &Date,
        serializer: S,
    ) -> Result<S::Ok, S::Error>
    where
        S: Serializer,
    {
        let format = format_description!("[day]/[month]/[year]");
        let s = &date.format(&format).unwrap();
        serializer.serialize_str(&s)
    }    
}

async fn connect() -> Result<Pool<MySql>, Error> {
    return MySqlPool::connect("mysql://root:pcb.2176310315865259@localhost:3306/employees").await;
}

async fn do_run_stored_proc() {
    let result = task::block_on(connect());

    match result {
        Err(err) => {
            println!("Cannot connect to database [{}]", err.to_string());
        }        

        Ok(pool) => {
            let query_result = sqlx::query("call get_employees(?, ?)")
                .bind("%chi").bind("%ak")
                .map(|row: sqlx::mysql::MySqlRow| { 
                    Employee {
                        emp_no: row.get(0),
                        birth_date: row.get(1),
                        first_name: row.get(2),
                        last_name: row.get(3),
                        gender: row.get(4),
                        hire_date: row.get(5)
                    }
                })
                .fetch_all(&pool).await.unwrap();

            let json = serde_json::to_string_pretty(&query_result).unwrap();
            println!("{}", json);
        }
    }
}

fn main() {
    task::block_on(do_run_stored_proc());
}

Let’s walk through the code:

โ— Trait serde::Serialize — the struct Employee needs to implement this trait, so that it can be serialised.

โ— This documentation page Date in a custom format from crate serde provides an example for date time serialisation and deserialisation. The mod my_date_format above comes from this example, but I implement only the serialisation part.

The date format in pub fn serialize<S> has been discussed in the previously mentioned post, and in detail in Rust: baby step — some preliminary look at date.

โ— The derive macro helper attribute #[serde(with = "my_date_format")] also comes from Date in a custom format. Through trial and error, I’ve found out that it needs to be added to above all fields which need to be serialised.

โ— let json = serde_json::to_string_pretty(&query_result).unwrap(); is also from Date in a custom format. Although the use of the variable query_result is trial and error… and by sheer luck, I have it working the first time round. Originally I thought of extracting each row into a vector, then serialise the vector: but that is too much additional work. If you asked me why query_result works in this case, I would not be able to explain! Hopefully, I will come to understand this in the future.

Please note: the example code has been tested on both Windows 10 and Ubuntu 22.10.

The following screenshot shows the output of the above example:

One final point: we look at deleting and inserting data in Rust & MySQL: delete, insert data using crate sqlx, also using the employees table. The updated struct Employee in this post would still work in that post, because when we manually create an instance of struct Employee, the two date columns are already in the correct format, so no deserialisation is required:

                ...
                birth_date: date!(1999-11-24),
                hire_date: date!(2022-04-29)
                ...   

Thank you for reading. I hope you find this post useful and stay safe as always.

✿✿✿

Feature image source:

Rust & MySQL: executing MySQL stored procedures which return multiple result sets using crate sqlx.

MySQL stored procedures can return multiple result sets. In this post, we’re looking at calling such a stored procedure. For each returned record set, we extract the column values into a corresponding struct instance, and store this instance in a vector.

This post is a continuation of:

  1. Rust & MySQL: connect, execute SQL statements and stored procs using crate sqlx.
  2. Rust & MySQL: delete, insert data using crate sqlx.

We’ll use the same Oracle Corporation MySQL test database. This time, we’ll use the departments and the dept_manager tables.

The departments table:

CREATE TABLE `departments` (
  `dept_no` char(4) NOT NULL,
  `dept_name` varchar(40) NOT NULL,
  PRIMARY KEY (`dept_no`),
  UNIQUE KEY `dept_name` (`dept_name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

The dept_manager table:

CREATE TABLE `dept_manager` (
  `emp_no` int NOT NULL,
  `dept_no` char(4) NOT NULL,
  `from_date` date NOT NULL,
  `to_date` date NOT NULL,
  PRIMARY KEY (`emp_no`,`dept_no`),
  KEY `dept_no` (`dept_no`),
  CONSTRAINT `dept_manager_ibfk_1` FOREIGN KEY (`emp_no`) REFERENCES `employees` (`emp_no`) ON DELETE CASCADE,
  CONSTRAINT `dept_manager_ibfk_2` FOREIGN KEY (`dept_no`) REFERENCES `departments` (`dept_no`) ON DELETE CASCADE
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

The DemoStoredProc1 stored procedure:

DELIMITER $$
CREATE DEFINER=`root`@`%` PROCEDURE `DemoStoredProc1`( pm_dept_no varchar(4) )
    READS SQL DATA
begin
  select * from departments where dept_no = pm_dept_no;
  select * from dept_manager where dept_no = pm_dept_no;
end$$
DELIMITER ;

Stored procedure DemoStoredProc1(pm_dept_no varchar(4)) returns two result sets whose data come from tables departments and dept_manager respectively.

Please note: the example code has been tested on both Windows 10 and Ubuntu 22.10.

โ€โ€โ€

I could not find any example or documentation on this multiple result sets issue. The final example code is the result of experimentation with crate sqlx.

The final dependencies section of the Cargo.toml file used in this post:

...
[dependencies]
async-std = "1.12.0"
sqlx = {version = "0.7", default-features = false, features = ["runtime-async-std", "macros", "mysql", "time"]}

To keep the example simple, we won’t do date formatting, hence we don’t need the time crate as a direct dependency; sqlx’s time feature still provides the Date type we use.

As you can guess, for the example, we’ll call the stored procedure DemoStoredProc1 with a 4-character dept_no. The first resultant record set, from the departments table, will have at most one (1) record. The second resultant record set, from the dept_manager table, will have at least one (1) record.

The complete working example is presented below.

Content of src/main.rs:
use sqlx::{FromRow, Pool, MySql, Error, MySqlPool, Row};
use sqlx::types::time::Date;

use async_std::task;

#[derive(FromRow, Debug)]
pub struct Department {
    pub dept_no: String,
    pub dept_name: String,
}

#[derive(FromRow, Debug)]
pub struct DepartmentManager {
    pub emp_no: i32,
    pub dept_no: String,
    pub from_date: Date,
    pub to_date: Date,
}

async fn connect() -> Result<Pool<MySql>, Error> {
    return MySqlPool::connect("mysql://root:pcb.2176310315865259@localhost:3306/employees").await;
}

async fn do_run_stored_proc(pool: &sqlx::Pool<MySql>, dept_no: &str) {
    let result = sqlx::query("call DemoStoredProc1(?)")
    .bind(dept_no)
    .fetch_all(pool).await;

    match result {
        Err(e) => {
            println!("Error select data for department number: {}", dept_no);
            println!("Error message: [{}].\n", e.to_string());
        }

        Ok(query_result) => {
            let mut dept_vec = Vec::<Department>::new(); 
            let mut dept_mgr_vec = Vec::<DepartmentManager>::new();

            println!("dept_vec.len() = {}", dept_vec.len());
            println!("dept_mgr_vec.len() = {}", dept_mgr_vec.len());

            for row in query_result {
                if row.columns().len() == 2 {
                    dept_vec.push(Department{dept_no: row.get(0), dept_name: row.get(1)});
                }
                else {
                    dept_mgr_vec.push(DepartmentManager{
                        emp_no: row.get(0),
                        dept_no: row.get(1),
                        from_date: row.get(2),
                        to_date: row.get(3),
                    });
                }
            }

            println!("{:#?}", dept_vec);
            println!("{:#?}", dept_mgr_vec);
        }
    }
}

fn main() {
    let result: Result<sqlx::Pool<sqlx::MySql>, sqlx::Error> = task::block_on(connect());

    match result {
        Err(err) => {
            println!("Cannot connect to database [{}]", err.to_string());
        }        

        Ok(pool) => {
            task::block_on(do_run_stored_proc(&pool, "d009"));
        }
    }
}

The code is pretty much a refactored version of the code in the previous two mentioned posts. We declare two (2) new structs: Department and DepartmentManager to capture the two (2) result sets.

We’re interested in the for loop in async fn do_run_stored_proc(pool: &sqlx::Pool<MySql>, dept_no: &str):

            ...
            for row in query_result {
                if row.columns().len() == 2 {
                    dept_vec.push(Department{dept_no: row.get(0), dept_name: row.get(1)});
                }
                else {
                    dept_mgr_vec.push(DepartmentManager{
                        emp_no: row.get(0),
                        dept_no: row.get(1),
                        from_date: row.get(2),
                        to_date: row.get(3),
                    });
                }
            }
            ...

Basically, for each row, if there are two (2) columns, we extract the values into an instance of the struct Department and push this instance onto the vector dept_vec. Otherwise, the row values go into an instance of the struct DepartmentManager, and this instance goes onto the vector dept_mgr_vec. This logic is very weak: what should we do if the stored procedure returned three (3) result sets, two (2) of which had two (2) columns each? But, for the purpose of this post, we’ll ignore this issue.
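The routing logic above can be sketched with plain data. The MockRow type below is a made-up stand-in for sqlx::mysql::MySqlRow: each mock row carries only a column count, and the dispatch mirrors the if row.columns().len() == 2 test in the real loop.

```rust
// Hypothetical stand-in for sqlx::mysql::MySqlRow: we only need
// the column count to demonstrate the routing decision.
struct MockRow {
    column_count: usize,
}

// Route each row by its column count, exactly as the real loop
// does; returns how many rows went to each vector.
fn route_rows(rows: &[MockRow]) -> (usize, usize) {
    let mut dept_rows = 0; // would become Department instances
    let mut dept_mgr_rows = 0; // would become DepartmentManager instances
    for row in rows {
        if row.column_count == 2 {
            dept_rows += 1;
        } else {
            dept_mgr_rows += 1;
        }
    }
    (dept_rows, dept_mgr_rows)
}

fn main() {
    // One departments row (2 columns), four dept_manager rows (4 columns),
    // matching the five rows returned for department d009.
    let rows = vec![
        MockRow { column_count: 2 },
        MockRow { column_count: 4 },
        MockRow { column_count: 4 },
        MockRow { column_count: 4 },
        MockRow { column_count: 4 },
    ];
    let (depts, mgrs) = route_rows(&rows);
    println!("departments: {}, managers: {}", depts, mgrs);
}
```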

⓵ As mentioned previously, I went through several iterations before arriving at this final version of the code. My first attempt at looking at the returned result is:

            ...
            for (rindex, row) in query_result.iter().enumerate() {
                println!("* Row number: {}", rindex+1);
                println!("* Total columns: {}", row.columns().len());
                println!("{:#?}\n", row);
            }
            ...

In total, only five (5) rows are returned across the two (2) result sets, but the output is quite long. Please see section First Iteration Output for the full printout.

⓶ Based on the output, we can see that all result sets are returned, but not as separate result sets: there are just rows, with different column metadata. My first attempt at extracting the values was to get each column’s data type and then match on it: for each column data type, declare a variable of the corresponding type and extract the target column value into this variable. My attempt at getting the column data type:

use sqlx::{FromRow, Pool, MySql, Error, MySqlPool, Row, TypeInfo, Column};
...
            ...
            for (rindex, row) in query_result.iter().enumerate() {
                println!("* Row number: {}", rindex+1);
                println!("* Total columns: {}\n", row.columns().len());

                for (cindex, col) in row.columns().iter().enumerate() {
                    println!("{:#?}", col.type_info());
                    // println!("> {}. Name: {}. Value: {}.", cindex+1, col.name(), row.get(cindex));
                    println!("> {}. Name: {}. Type Name: {}.", cindex+1, col.name(), col.type_info().name());
                }
            }
            ...

— Note the two (2) additional traits added to the use declaration: trait sqlx::TypeInfo and trait sqlx::Column.

Please see section Second Iteration Output for the output of the above for loop. Trait sqlx::TypeInfo and struct sqlx_mysql::MySqlTypeInfo do give information on column metadata, but I could not get to pub enum ColumnType, and the documentation does not seem to have anything about it either.

In hindsight, checking individual columns’ data types, declaring variables, and so on is a bad idea: I can see in my mind how messy and error-prone that code would be. I’m not sure whether the column-count approach above is a correct or even an acceptable solution, but it is much cleaner.
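The abandoned per-column-type idea can still be sketched by matching on the type names that col.type_info().name() reports (CHAR, VARCHAR, INT, DATE in the output sections below). The ColumnKind enum is hypothetical; even this toy version needs one branch per type, which hints at how messy the full extraction code would become.

```rust
// Hypothetical classification of MySQL column type names, as
// reported by col.type_info().name() in the second iteration.
#[derive(Debug, PartialEq)]
enum ColumnKind {
    Int,
    Char,
    VarChar,
    Date,
    Unknown,
}

fn classify(type_name: &str) -> ColumnKind {
    match type_name {
        "INT" => ColumnKind::Int,
        "CHAR" => ColumnKind::Char,
        "VARCHAR" => ColumnKind::VarChar,
        "DATE" => ColumnKind::Date,
        // Every type not handled needs yet another branch above.
        _ => ColumnKind::Unknown,
    }
}

fn main() {
    for name in ["INT", "CHAR", "DATE", "ENUM"] {
        println!("{} -> {:?}", name, classify(name));
    }
}
```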

Again, this post is just a documentation of my learning progress. I do hope someone would find it useful. Thank you for reading and stay safe as always.

✿✿✿

Feature image source:

✿✿✿

First Iteration Output

* Row number: 1
* Total columns: 2
MySqlRow {
    row: Row {
        storage: b"\0\x04d009\x10Customer Service",
        values: [
            Some(
                2..6,
            ),
            Some(
                7..23,
            ),
        ],
    },
    format: Binary,
    columns: [
        MySqlColumn {
            ordinal: 0,
            name: dept_no,
            type_info: MySqlTypeInfo {
                type: String,
                flags: ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
                ),
                char_set: 224,
                max_size: Some(
                    16,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 1,
            name: dept_name,
            type_info: MySqlTypeInfo {
                type: VarString,
                flags: ColumnFlags(
                    NOT_NULL | UNIQUE_KEY | NO_DEFAULT_VALUE,
                ),
                char_set: 224,
                max_size: Some(
                    160,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | UNIQUE_KEY | NO_DEFAULT_VALUE,
                ),
            ),
        },
    ],
    column_names: {},
}

* Row number: 2
* Total columns: 4
MySqlRow {
    row: Row {
        storage: b"\0L\xb4\x01\0\x04d009\x04\xc1\x07\x01\x01\x04\xc4\x07\n\x11",
        values: [
            Some(
                1..5,
            ),
            Some(
                6..10,
            ),
            Some(
                10..15,
            ),
            Some(
                15..20,
            ),
        ],
    },
    format: Binary,
    columns: [
        MySqlColumn {
            ordinal: 0,
            name: emp_no,
            type_info: MySqlTypeInfo {
                type: Long,
                flags: ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    11,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 1,
            name: dept_no,
            type_info: MySqlTypeInfo {
                type: String,
                flags: ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
                ),
                char_set: 224,
                max_size: Some(
                    16,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 2,
            name: from_date,
            type_info: MySqlTypeInfo {
                type: Date,
                flags: ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    10,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 3,
            name: to_date,
            type_info: MySqlTypeInfo {
                type: Date,
                flags: ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    10,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
            ),
        },
    ],
    column_names: {
        emp_no: 0,
        dept_no: 1,
        from_date: 2,
        to_date: 3,
    },
}

* Row number: 3
* Total columns: 4
MySqlRow {
    row: Row {
        storage: b"\0\xa8\xb4\x01\0\x04d009\x04\xc4\x07\n\x11\x04\xc8\x07\t\x08",
        values: [
            Some(
                1..5,
            ),
            Some(
                6..10,
            ),
            Some(
                10..15,
            ),
            Some(
                15..20,
            ),
        ],
    },
    format: Binary,
    columns: [
        MySqlColumn {
            ordinal: 0,
            name: emp_no,
            type_info: MySqlTypeInfo {
                type: Long,
                flags: ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    11,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 1,
            name: dept_no,
            type_info: MySqlTypeInfo {
                type: String,
                flags: ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
                ),
                char_set: 224,
                max_size: Some(
                    16,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 2,
            name: from_date,
            type_info: MySqlTypeInfo {
                type: Date,
                flags: ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    10,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 3,
            name: to_date,
            type_info: MySqlTypeInfo {
                type: Date,
                flags: ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    10,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
            ),
        },
    ],
    column_names: {
        emp_no: 0,
        dept_no: 1,
        from_date: 2,
        to_date: 3,
    },
}

* Row number: 4
* Total columns: 4
MySqlRow {
    row: Row {
        storage: b"\0\x05\xb5\x01\0\x04d009\x04\xc8\x07\t\x08\x04\xcc\x07\x01\x03",
        values: [
            Some(
                1..5,
            ),
            Some(
                6..10,
            ),
            Some(
                10..15,
            ),
            Some(
                15..20,
            ),
        ],
    },
    format: Binary,
    columns: [
        MySqlColumn {
            ordinal: 0,
            name: emp_no,
            type_info: MySqlTypeInfo {
                type: Long,
                flags: ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    11,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 1,
            name: dept_no,
            type_info: MySqlTypeInfo {
                type: String,
                flags: ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
                ),
                char_set: 224,
                max_size: Some(
                    16,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 2,
            name: from_date,
            type_info: MySqlTypeInfo {
                type: Date,
                flags: ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    10,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 3,
            name: to_date,
            type_info: MySqlTypeInfo {
                type: Date,
                flags: ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    10,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
            ),
        },
    ],
    column_names: {
        emp_no: 0,
        dept_no: 1,
        from_date: 2,
        to_date: 3,
    },
}

* Row number: 5
* Total columns: 4
MySqlRow {
    row: Row {
        storage: b"\0C\xb5\x01\0\x04d009\x04\xcc\x07\x01\x03\x04\x0f'\x01\x01",
        values: [
            Some(
                1..5,
            ),
            Some(
                6..10,
            ),
            Some(
                10..15,
            ),
            Some(
                15..20,
            ),
        ],
    },
    format: Binary,
    columns: [
        MySqlColumn {
            ordinal: 0,
            name: emp_no,
            type_info: MySqlTypeInfo {
                type: Long,
                flags: ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    11,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 1,
            name: dept_no,
            type_info: MySqlTypeInfo {
                type: String,
                flags: ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
                ),
                char_set: 224,
                max_size: Some(
                    16,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 2,
            name: from_date,
            type_info: MySqlTypeInfo {
                type: Date,
                flags: ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    10,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
            ),
        },
        MySqlColumn {
            ordinal: 3,
            name: to_date,
            type_info: MySqlTypeInfo {
                type: Date,
                flags: ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
                char_set: 63,
                max_size: Some(
                    10,
                ),
            },
            flags: Some(
                ColumnFlags(
                    NOT_NULL | BINARY | NO_DEFAULT_VALUE,
                ),
            ),
        },
    ],
    column_names: {
        emp_no: 0,
        dept_no: 1,
        from_date: 2,
        to_date: 3,
    },
}

Second Iteration Output

* Row number: 1
* Total columns: 2

MySqlTypeInfo {
    type: String,
    flags: ColumnFlags(
        NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
    ),
    char_set: 224,
    max_size: Some(
        16,
    ),
}
> 1. Name: dept_no. Type Name: CHAR.
MySqlTypeInfo {
    type: VarString,
    flags: ColumnFlags(
        NOT_NULL | UNIQUE_KEY | NO_DEFAULT_VALUE,
    ),
    char_set: 224,
    max_size: Some(
        160,
    ),
}
> 2. Name: dept_name. Type Name: VARCHAR.
* Row number: 2
* Total columns: 4

MySqlTypeInfo {
    type: Long,
    flags: ColumnFlags(
        NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        11,
    ),
}
> 1. Name: emp_no. Type Name: INT.
MySqlTypeInfo {
    type: String,
    flags: ColumnFlags(
        NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
    ),
    char_set: 224,
    max_size: Some(
        16,
    ),
}
> 2. Name: dept_no. Type Name: CHAR.
MySqlTypeInfo {
    type: Date,
    flags: ColumnFlags(
        NOT_NULL | BINARY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        10,
    ),
}
> 3. Name: from_date. Type Name: DATE.
MySqlTypeInfo {
    type: Date,
    flags: ColumnFlags(
        NOT_NULL | BINARY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        10,
    ),
}
> 4. Name: to_date. Type Name: DATE.
* Row number: 3
* Total columns: 4

MySqlTypeInfo {
    type: Long,
    flags: ColumnFlags(
        NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        11,
    ),
}
> 1. Name: emp_no. Type Name: INT.
MySqlTypeInfo {
    type: String,
    flags: ColumnFlags(
        NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
    ),
    char_set: 224,
    max_size: Some(
        16,
    ),
}
> 2. Name: dept_no. Type Name: CHAR.
MySqlTypeInfo {
    type: Date,
    flags: ColumnFlags(
        NOT_NULL | BINARY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        10,
    ),
}
> 3. Name: from_date. Type Name: DATE.
MySqlTypeInfo {
    type: Date,
    flags: ColumnFlags(
        NOT_NULL | BINARY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        10,
    ),
}
> 4. Name: to_date. Type Name: DATE.
* Row number: 4
* Total columns: 4

MySqlTypeInfo {
    type: Long,
    flags: ColumnFlags(
        NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        11,
    ),
}
> 1. Name: emp_no. Type Name: INT.
MySqlTypeInfo {
    type: String,
    flags: ColumnFlags(
        NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
    ),
    char_set: 224,
    max_size: Some(
        16,
    ),
}
> 2. Name: dept_no. Type Name: CHAR.
MySqlTypeInfo {
    type: Date,
    flags: ColumnFlags(
        NOT_NULL | BINARY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        10,
    ),
}
> 3. Name: from_date. Type Name: DATE.
MySqlTypeInfo {
    type: Date,
    flags: ColumnFlags(
        NOT_NULL | BINARY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        10,
    ),
}
> 4. Name: to_date. Type Name: DATE.
* Row number: 5
* Total columns: 4

MySqlTypeInfo {
    type: Long,
    flags: ColumnFlags(
        NOT_NULL | PRIMARY_KEY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        11,
    ),
}
> 1. Name: emp_no. Type Name: INT.
MySqlTypeInfo {
    type: String,
    flags: ColumnFlags(
        NOT_NULL | PRIMARY_KEY | MULTIPLE_KEY | NO_DEFAULT_VALUE,
    ),
    char_set: 224,
    max_size: Some(
        16,
    ),
}
> 2. Name: dept_no. Type Name: CHAR.
MySqlTypeInfo {
    type: Date,
    flags: ColumnFlags(
        NOT_NULL | BINARY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        10,
    ),
}
> 3. Name: from_date. Type Name: DATE.
MySqlTypeInfo {
    type: Date,
    flags: ColumnFlags(
        NOT_NULL | BINARY | NO_DEFAULT_VALUE,
    ),
    char_set: 63,
    max_size: Some(
        10,
    ),
}
> 4. Name: to_date. Type Name: DATE.

Rust & MySQL: delete, insert data using crate sqlx.

We’ll look at how to delete data from, and insert data into, MySQL tables using crate sqlx.

This post is a continuation of Rust & MySQL: connect, execute SQL statements and stored procs using crate sqlx. We’ll use the same Oracle Corporation MySQL test data database. We’ll also use the employees table. To recap, its structure is:

CREATE TABLE `employees` (
  `emp_no` int NOT NULL,
  `birth_date` date NOT NULL,
  `first_name` varchar(14) COLLATE utf8mb4_unicode_ci NOT NULL,
  `last_name` varchar(16) COLLATE utf8mb4_unicode_ci NOT NULL,
  `gender` enum('M','F') COLLATE utf8mb4_unicode_ci NOT NULL,
  `hire_date` date NOT NULL,
  PRIMARY KEY (`emp_no`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

And also the stored procedure:

DELIMITER $$
CREATE DEFINER=`root`@`%` PROCEDURE `get_employees`( pmLastName varchar(16), pmFirstName varchar(14) )
    READS SQL DATA
begin
  select * from employees e where (e.last_name like pmLastName)
    and (e.first_name like pmFirstName) order by e.emp_no;
end$$
DELIMITER ;

Please note: the example code has been tested on both Windows 10 and Ubuntu 22.10.

โ€โ€โ€

The following references were used to develop the code for this post:

Cargo.toml is also identical to the one used in the previously quoted post. Its dependencies section is as follows:

...
[dependencies]
async-std = "1.12.0"
sqlx = {version = "0.7", default-features = false, features = ["runtime-async-std", "macros", "mysql", "time"]}
time = {version = "0.3.22", default-features = false, features = ["formatting", "macros"]}

The example code is simple. We delete the employee whose number is 600000 from the database. If the deletion succeeds, we insert a new employee whose number is 600000. Finally, if the insertion succeeds, we retrieve the just-inserted employee by calling the stored procedure get_employees(...) with a partial last name and a partial first name of the just-inserted employee.

The complete working example is presented below.

Content of src/main.rs:
use sqlx::{FromRow, Pool, MySql, Row, Error, MySqlPool};
use sqlx::types::time::Date;
use time::macros::{date, format_description};

use async_std::task;

#[derive(FromRow, Debug)]
pub struct Employee {
    pub emp_no: i32,
    pub birth_date: Date,
    pub first_name: String,
    pub last_name: String,    
    pub gender: String,
    pub hire_date: Date,
}

const TEST_EMP_NO: i32 = 600000; // Last emp_no in database is 500113.

async fn connect() -> Result<Pool<MySql>, Error> {
    return MySqlPool::connect("mysql://root:pcb.2176310315865259@localhost:3306/employees").await;
}

async fn do_delete(pool: &sqlx::Pool<MySql>, emp_no: i32) -> bool {
    let result = sqlx::query("delete from employees where emp_no = ?")
        .bind(emp_no)
        .execute(pool).await;

    match result {
        Err(e) => {
            println!("Error deleting employee: {}\n", e.to_string());
            return false;
        }

        Ok(res) => {
            println!("Employee number: {} has been deleted.", emp_no);
            println!("Number of Employees deleted: {}", res.rows_affected());            
        }
    }

    true
}

async fn do_insert(pool: &sqlx::Pool<MySql>, emp: &Employee) -> bool {
    let result = sqlx::query(
        "insert into employees (
            emp_no, 
            birth_date, 
            first_name, 
            last_name, 
            gender, 
            hire_date) 
        values (?, ?, ?, ?, ?, ?)")
        .bind(&emp.emp_no)
        .bind(&emp.birth_date)
        .bind(&emp.first_name)
        .bind(&emp.last_name)
        .bind(&emp.gender)
        .bind(&emp.hire_date)
        .execute(pool).await;

    match result {
        Err(e) => {
            println!("Error inserting employee: {:#?}", emp);
            println!("Error message: [{}].\n", e.to_string());
            return false;
        }

        Ok(res) => {
            println!("Employee has been inserted.");
            println!("Number of employees inserted: {}", res.rows_affected());
        }
    }

    true
}

async fn do_query(pool: &sqlx::Pool<MySql>, last_name: &str, first_name: &str) {
    let result = sqlx::query("call get_employees(?, ?)")
    .bind(last_name)
    .bind(first_name)
    .map(|row: sqlx::mysql::MySqlRow| { 
        Employee {
            emp_no: row.get(0),
            birth_date: row.get(1),
            first_name: row.get(2),
            last_name: row.get(3),
            gender: row.get(4),
            hire_date: row.get(5)
        }
    })
    .fetch_all(pool).await;

    match result {
        Err(e) => {
            println!("Error select employee with last name: {}, first name: {}", last_name, first_name);
            println!("Error message: [{}].\n", e.to_string());
        }

        Ok(query_result) => {
            println!("Number of Employees selected: {}", query_result.len());

            let format = format_description!("[day]/[month]/[year]");

            for (rindex, employee) in query_result.iter().enumerate() {
                println!("{}. No.: {}, Birth Date: {}, First Name: {}, Last Name: {}, Gender: {}, Hire Date: {}", 
                    rindex+1,
                    &employee.emp_no,
                    &employee.birth_date.format(&format).unwrap(),
                    &employee.first_name,
                    &employee.last_name,
                    &employee.gender,
                    &employee.hire_date.format(&format).unwrap());
            }
        }
    }
}

async fn do_delete_insert_data() {
    let result: Result<sqlx::Pool<sqlx::MySql>, sqlx::Error> = task::block_on(connect());

    match result {
        Err(err) => {
            println!("Cannot connect to database [{}]", err.to_string());
        }        

        Ok(pool) => {
            if !task::block_on(do_delete(&pool, TEST_EMP_NO)) {
                panic!("Failed to delete test employee.");
            }

            if !task::block_on(do_insert(&pool, &Employee {
                emp_no: TEST_EMP_NO,
                birth_date: date!(1999-11-24),
                first_name: String::from("Bé Hai"),
                last_name: String::from("Nguyễn"),
                gender: String::from("M"),
                hire_date: date!(2022-04-29)
            })) {
                panic!("Failed to insert test employee.");
            }

            task::block_on(do_query(&pool, "%uyễn", "%é H%"));
        }
    }
}

fn main() {
    task::block_on(do_delete_insert_data());
}

Some of the code should be familiar from the previously mentioned post. We’ll go over the new code.

  • Method do_delete(pool: &sqlx::Pool<MySql>, emp_no: i32) -> bool is where test record deletion takes place. We call sqlx::query with a parameterised delete SQL statement; this call returns a struct sqlx::query::Query. We then call its bind(…) method to pass the value of do_delete(...)‘s emp_no parameter to the SQL statement parameter, and chain a call to the execute(…) method to run the delete SQL statement. If the deletion fails, we return false; otherwise, true.
  • Method do_insert(pool: &sqlx::Pool<MySql>, emp: &Employee) -> bool is where test record insertion takes place. Its internal working is pretty much identical to do_delete(...).
  • Method do_query(pool: &sqlx::Pool<MySql>, last_name: &str, first_name: &str) is just a refactored version of the last example in the above-mentioned post.
  • Method do_delete_insert_data() should be self-explanatory.

I wrote this example code for my own understanding, and this post so that I will have something to go back to if I forget how to do this 😂. It has been easier than the last one. I do hope it’s useful for somebody. Thank you for reading, and stay safe as always.

✿✿✿

Feature image source:

Rust & MySQL: connect, execute SQL statements and stored procs using crate sqlx.

We’ll discuss: ⓵ how to connect to a MySQL server, ⓶ run queries to select some data and display returned data, ⓷ finally, execute stored procedures which return a single dataset.

First, please let me state that I’m aware of at least three (3) different crates for MySQL: ⓵ mysql, ⓶ sqlx, and ⓷ Diesel.

I did look at mysql initially, then started checking the other crates. Diesel is an object-relational mapper (ORM); I’m not yet keen on taking on the complexity of learning an ORM, so I’m giving this crate a pass for now.

According to the documentation, crate sqlx is implemented in Rust, and it’s database agnostic: it supports PostgreSQL, MySQL, SQLite, and MSSQL.

— It sounds enticing 😂… we learn one crate for several database servers. The learning process has been tough for me: the Rust standard library documentation gives examples, while this crate’s documentation largely lacks them… It took me a long time to be able to write the example code in this post, with help from the Rust Users Community Forum.

The database used in this post is the Oracle Corporation MySQL test data; it’s also been used in the following posts:

  1. Python: ReportLab — a Master Detail Report.
  2. Python: executing MySQL stored procedures which return multiple result sets.
  3. pgloader Docker: migrating from Docker & localhost MySQL to localhost PostgreSQL.
  4. Python: executing PostgreSQL stored functions which return multiple result sets.
  5. Python: SQLAlchemy — understanding sessions and associated queries.
  6. Python: SQLAlchemy — user-defined query classes for scoped_session.query_property(query_cls=None).

In this post, we use the employees table, whose structure is:

CREATE TABLE `employees` (
  `emp_no` int NOT NULL,
  `birth_date` date NOT NULL,
  `first_name` varchar(14) COLLATE utf8mb4_unicode_ci NOT NULL,
  `last_name` varchar(16) COLLATE utf8mb4_unicode_ci NOT NULL,
  `gender` enum('M','F') COLLATE utf8mb4_unicode_ci NOT NULL,
  `hire_date` date NOT NULL,
  PRIMARY KEY (`emp_no`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;

Cargo.toml is common to all examples. Its dependencies section is as follows:

...
[dependencies]
async-std = "1.12.0"
sqlx = {version = "0.7", default-features = false, features = ["runtime-async-std", "macros", "mysql", "time"]}
time = {version = "0.3.22", default-features = false, features = ["formatting", "macros"]}

Crate async-std is required as stated in crate sqlx‘s documentation.

Regarding the sqlx crate features macros and time: the macros feature is required to make FromRow available to the derive attribute. It took me a while to figure this out; the documentation does not seem to mention it. The time feature must be enabled, otherwise sqlx::types::time::Date will not work: I think sqlx uses crate time internally, although I could not find any documentation to back this up.

We’ve discussed crate time in this post Rust: baby step — some preliminary look at date.

— Without explicitly including crate time and enabling its crate features formatting and macros, I can’t use the date formatting functions. I’m unsure how this relates to sqlx::types::time::Date, so please keep this point in mind; there might be a better alternative.

Please note: I only tested these examples on Windows 10.

โถ Establishing a MySQL server database connection. Based on Struct sqlx::MySqlConnection, and examples given in async-std and launchbadge/sqlx, I came up with the following example:

Content of src/main.rs:
use sqlx::{Pool, MySql, Error, MySqlPool};

use async_std::task;

async fn connect() -> Result<Pool<MySql>, Error> {
    return MySqlPool::connect("mysql://root:pcb.2176310315865259@localhost:3306/employees").await;
}

async fn do_test_connection() {
    let result = task::block_on(connect());

    match result {
        Err(err) => {
            println!("Cannot connect to database [{}]", err.to_string());
        }        

        Ok(_) => {
            println!("Connected to database successfully.");
        }
    }
}

fn main() {
    task::block_on(do_test_connection());
}

โท Select data using a SQL statement. In addition to the references quoted in Establishing a MySQL server database connection above, the following posts help me write this example:

Content of src/main.rs:
use sqlx::{FromRow, Pool, MySql, Error, MySqlPool};
use sqlx::types::time::Date;
use time::macros::format_description;

use async_std::task;

#[derive(FromRow)]
struct Employee {
    emp_no: i32,
    birth_date: Date,
    first_name: String,
    last_name: String,    
    gender: String,
    hire_date: Date,
}

async fn connect() -> Result<Pool<MySql>, Error> {
    return MySqlPool::connect("mysql://root:pcb.2176310315865259@localhost:3306/employees").await;
}

async fn do_run_query() {
    let result = task::block_on(connect());

    match result {
        Err(err) => {
            println!("Cannot connect to database [{}]", err.to_string());
        }        

        Ok(pool) => {
            let query_result = sqlx::query_as::<_, Employee>("select * from employees where emp_no <= 10010")
                .fetch_all(&pool).await.unwrap();

            println!("Number of Employees selected: {}", query_result.len());

            let format = format_description!("[day]/[month]/[year]");

            for (rindex, employee) in query_result.iter().enumerate() {
                println!("{}. No.: {}, Birth Date: {}, First Name: {}, Last Name: {}, Gender: {}, Hire Date: {}", 
                    rindex+1,
                    &employee.emp_no,
                    &employee.birth_date.format(&format).unwrap(),
                    &employee.first_name,
                    &employee.last_name,
                    &employee.gender,
                    &employee.hire_date.format(&format).unwrap());
            }
        }
    }
}

fn main() {
    task::block_on(do_run_query());
}

The fields in struct Employee match the employees table’s columns exactly. I think the code is self-explanatory; please see the relevant documentation for details, as I can’t explain it better than the official crates’ documentation.

โธ Select data via running a stored procedure. The stored procedure is simple:

DELIMITER $$
CREATE DEFINER=`root`@`%` PROCEDURE `get_employees`( pmLastName varchar(16), pmFirstName varchar(14) )
    READS SQL DATA
begin
  select * from employees e where (e.last_name like pmLastName)
    and (e.first_name like pmFirstName) order by e.emp_no;
end$$
DELIMITER ;

Within MySQL Workbench, it can be called with:

call get_employees('%chi', '%ak'); 
call get_employees('%CHI', '%AK'); 

I could not find any example or documentation on how to call stored procedures. So, willy-nilly, instead of a query as seen in Select data using a SQL statement above, I passed in the stored procedure call:

            ...
            let query_result = sqlx::query_as::<_, Employee>("call get_employees('%chi', '%ak')")
                //.bind("%chi").bind("%ak")
                .fetch_all(&pool).await.unwrap();

            println!("Number of Employees selected: {}", query_result.len());
            ...			

It panics with:

F:\rust\sqlx>set RUST_BACKTRACE=1

F:\rust\sqlx>target\debug\learn_sqlx.exe
thread 'main' panicked at 'called `Result::unwrap()` on an `Err` value: ColumnNotFound("emp_no")', src\main.rs:32:41
stack backtrace:
...

I asked for help: How to call a MySQL stored proc using crate sqlx? It seems that crate sqlx has an unresolved bug: accessing a result set column by name causes a panic.

My first attempt, based on the suggested solution: access the result set columns by index, copy the column values into individual variables, then (process and) display these variables:

Content of src/main.rs:
use sqlx::{Pool, MySql, Error, MySqlPool, Row};
use sqlx::types::time::Date;
use time::macros::format_description;

use async_std::task;

async fn connect() -> Result<Pool<MySql>, Error> {
    return MySqlPool::connect("mysql://root:pcb.2176310315865259@localhost:3306/employees").await;
}

async fn do_run_stored_proc() {
    let result = task::block_on(connect());

    match result {
        Err(err) => {
            println!("Cannot connect to database [{}]", err.to_string());
        }        

        Ok(pool) => {
            let query_result = sqlx::query("call get_employees(?, ?)")
                .bind("%chi").bind("%ak")
                .fetch_all(&pool).await.unwrap();

            println!("Number of Employees selected: {}", query_result.len());

            let format = format_description!("[day]/[month]/[year]");

            for (rindex, row) in query_result.iter().enumerate() {
                let emp_no: i32 = row.get(0);
                let birth_date: Date = row.get(1);
                let first_name: String = row.get(2);
                let last_name: String = row.get(3);
                let gender: String = row.get(4);
                let hire_date: Date = row.get(5);

                println!("{}. No.: {}, Birth Date: {}, First Name: {}, Last Name: {}, Gender: {}, Hire Date: {}", 
                    rindex+1,
                    emp_no,
                    birth_date.format(&format).unwrap(),
                    first_name,
                    last_name,
                    gender,
                    hire_date.format(&format).unwrap());
            }
        }
    }
}

fn main() {
    task::block_on(do_run_stored_proc());
}

Based on the following posts:

My second attempt calls the stored procedure and manually maps each row to struct Employee:

Content of src/main.rs:
use sqlx::{FromRow, Pool, MySql, Error, MySqlPool, Row};
use sqlx::types::time::Date;
use time::macros::format_description;

use async_std::task;

#[derive(FromRow)]
struct Employee {
    emp_no: i32,
    birth_date: Date,
    first_name: String,
    last_name: String,    
    gender: String,
    hire_date: Date,
}

async fn connect() -> Result<Pool<MySql>, Error> {
    return MySqlPool::connect("mysql://root:pcb.2176310315865259@localhost:3306/employees").await;
}

async fn do_run_stored_proc() {
    let result = task::block_on(connect());

    match result {
        Err(err) => {
            println!("Cannot connect to database [{}]", err.to_string());
        }        

        Ok(pool) => {
            let query_result = sqlx::query("call get_employees(?, ?)")
                .bind("%chi").bind("%ak")
                .map(|row: sqlx::mysql::MySqlRow| { 
                    Employee {
                        emp_no: row.get(0),
                        birth_date: row.get(1),
                        first_name: row.get(2),
                        last_name: row.get(3),
                        gender: row.get(4),
                        hire_date: row.get(5)
                    }
                })
                .fetch_all(&pool).await.unwrap();

            println!("Number of Employees selected: {}", query_result.len());

            let format = format_description!("[day]/[month]/[year]");

            for (rindex, employee) in query_result.iter().enumerate() {
                println!("{}. No.: {}, Birth Date: {}, First Name: {}, Last Name: {}, Gender: {}, Hire Date: {}", 
                    rindex+1,
                    &employee.emp_no,
                    &employee.birth_date.format(&format).unwrap(),
                    &employee.first_name,
                    &employee.last_name,
                    &employee.gender,
                    &employee.hire_date.format(&format).unwrap());
            }
        }
    }
}

fn main() {
    task::block_on(do_run_stored_proc());
}

Crate sqlx is very large; it’d take a lot of time to become fluent in it. This is my first step, and it has taken a lot of time to write even these simple examples: Rust is certainly tougher to learn than Python!

I’m writing this post to have a record of my progress, but I do hope some newcomers find it helpful and useful. Thank you for reading, and stay safe as always.

✿✿✿

Feature image source:

Python: local date time and UTC date time.

Understanding the relationship between local date time and UTC date time. We will look at the following issues — ⓵ local time, time zone, UTC offset, local date time and UTC date time; ⓶ assigning the UTC time zone to MySQL and PostgreSQL UTC date time values which are stored with no time zone information.

I’ve done some studies on this subject, and have written some investigative code. I will just start off at the point most relevant to what we are going to look at.

We will be using only the Python standard date time library datetime — Basic date and time types; no third-party packages are involved.

โถ First, let’s have a look at local time, time zone, UTC offset, local date time and UTC date time.

import time

local_time = time.localtime()

print("Time Zone: ", time.tzname)
print("Time Zone: ", time.strftime("%Z", local_time))
print("Date and Time Zone: ", time.strftime("%Y-%m-%d %H:%M:%S %Z", local_time) )
print("UTC Offset: ", time.strftime("%z", local_time))

Output:

Time Zone:  ('AUS Eastern Standard Time', 'AUS Eastern Summer Time')
Time Zone:  AUS Eastern Summer Time
Date and Time Zone:  2023-02-10 15:19:18 AUS Eastern Summer Time
UTC Offset:  +1100

The most important piece of information is UTC Offset: +1100: the first two (2) digits, positive 11, give the number of hours; the second two (2) digits, 00, give the number of minutes. I am in the beautiful state of Victoria, Australia; and at the time of this writing, we are 11 (eleven) hours ahead of UTC. Of course, depending on where we are, this UTC offset could be negative, which would indicate that we are behind UTC.
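As a quick check, Python itself can parse an offset string in this ±HHMM form via strptime's %z directive; the "+1100" literal below is simply copied from the output above.

```python
from datetime import datetime, timedelta

# Parse the "+1100" UTC offset string with the %z directive; the resulting
# (placeholder) datetime's tzinfo carries the parsed offset.
offset = datetime.strptime("+1100", "%z").utcoffset()

print(offset)                          # 11:00:00
print(offset == timedelta(hours=11))   # True
```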

Let’s look at this 11 (eleven) hours ahead of the UTC date time:

from datetime import datetime, timezone

local_datetime = datetime.now()
utc_datetime = datetime.now(timezone.utc)

local_iso_str = datetime.strftime(local_datetime, "%Y-%m-%dT%H:%M:%S.%f")[:-3]
utc_iso_str = datetime.strftime(utc_datetime, "%Y-%m-%dT%H:%M:%S.%f")[:-3]

print(f"local dt: {local_iso_str}, tzname: {local_datetime.tzname()}")
print(f"  utc dt: {utc_iso_str}, tzname: {utc_datetime.tzname()}")

print("\n")

print(f"local dt: {local_datetime.isoformat()}")
print(f"  utc dt: {utc_datetime.isoformat()}")

Output:

local dt: 2023-02-10T15:46:08.407, tzname: None
  utc dt: 2023-02-10T04:46:08.407, tzname: UTC

local dt: 2023-02-10T15:46:08.407281
  utc dt: 2023-02-10T04:46:08.407281+00:00

We can see that my local date time is 11 hours ahead of UTC. The UTC offset for UTC date time is 00:00 — which is understandable.

The time zone name for the local date time is None, and UTC for the UTC date time. These are in conformance with datetime.tzname(). However, the first time I wrote this code, I was expecting either AUS Eastern Standard Time or AUS Eastern Summer Time for the local date time! 😂 This leads us to datetime.astimezone(tz=None).

Add the following 5 (five) lines to the end of the last example:

print("\n")

utc_to_local_datetime = utc_datetime.astimezone()
utc_2_local_iso_str = datetime.strftime(utc_to_local_datetime, "%Y-%m-%dT%H:%M:%S.%f")[:-3]

print( f"utc to local dt: {utc_2_local_iso_str}, tzname: {utc_to_local_datetime.tzname()}" )
print( f"utc to local dt: {utc_to_local_datetime.isoformat()}" )

The last two (2) output lines are from the new code:

local dt: 2023-02-10T16:24:40.089, tzname: None
  utc dt: 2023-02-10T05:24:40.089, tzname: UTC

local dt: 2023-02-10T16:24:40.089415
  utc dt: 2023-02-10T05:24:40.089415+00:00

utc to local dt: 2023-02-10T16:24:40.089, tzname: AUS Eastern Summer Time
utc to local dt: 2023-02-10T16:24:40.089415+11:00

We can see that datetime.astimezone(tz=None) converts a UTC date time into a local date time correctly, and the converted value now carries the local time zone name and offset (+11:00); it still represents the same instant as the original UTC value.

We can also calculate the UTC offset from our local date time:

from datetime import datetime, timezone

local_now = datetime.now()
utc_now = local_now.astimezone(timezone.utc)
local_as_utc = local_now.replace(tzinfo=timezone.utc)

print( f"{local_now}, tzname: {local_now.tzname()}" )
print( f"{utc_now}, tzname: {utc_now.tzname()}" )
print( f"{local_as_utc}, tzname: {local_as_utc.tzname()}" )
print( f"{local_as_utc - utc_now}" )

The last output line, 11:00:00, is the UTC offset:

2023-02-11 10:08:22.023929, tzname: None
2023-02-10 23:08:22.023929+00:00, tzname: UTC
2023-02-11 10:08:22.023929+00:00, tzname: UTC
11:00:00

โท Consider cases where date time data are UTC date time, but stored with no time zone information. The expiry column of the following MySQL table:

CREATE TABLE `sessions` (
  `id` int NOT NULL AUTO_INCREMENT,
  `session_id` varchar(255) DEFAULT NULL,
  `data` blob,
  `expiry` datetime DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `session_id` (`session_id`)
) ENGINE=InnoDB AUTO_INCREMENT=51 DEFAULT CHARSET=utf8mb3;

If you are familiar with the Python Flask Web Development Framework, you might recognise that the above sessions table is the server-side session table implemented by the package Flask-Session.

And the PostgreSQL equivalent:

CREATE TABLE IF NOT EXISTS ompdev1.sessions
(
    id integer NOT NULL GENERATED BY DEFAULT AS IDENTITY 
        ( CYCLE INCREMENT 1 START 1 MINVALUE 1 MAXVALUE 
        2147483647 CACHE 1 ),
    session_id character varying(255) COLLATE 
        pg_catalog."default",
    data bytea,
    expiry timestamp without time zone,
    CONSTRAINT sessions_pkey PRIMARY KEY (id),
    CONSTRAINT sessions_session_id_key UNIQUE (session_id)
)

Values for sessions.expiry are stored as:

โ— MySQL: 2023-02-09 05:00:45, 2023-02-09 03:58:36, etc.

โ— PostgreSQL: 2023-01-07 11:18:32.442136, 2023-02-06 21:33:06.190584, etc.

We know these are UTC date times because that’s how they are written by the Flask-Session package code; when checking whether a particular server-side session has expired, the package code also compares UTC date times.

Suppose we want to check when a particular abandoned session has expired in our own local date time. How do we do the conversion?

Let’s have a look at two (2) methods which will help to accomplish the conversion.

⓵ Method datetime.timetuple() breaks a date time value into individual components and returns class time.struct_time, which we can access as a tuple:

time.struct_time((d.year, d.month, d.day,
                  d.hour, d.minute, d.second,
                  d.weekday(), yday, dst))

In the following example, we get the local date time, call the above method, then write out individual elements:

from datetime import datetime

local_datetime = datetime.now()

print(f"local dt: {local_datetime.isoformat()}")

dt_tuple = local_datetime.timetuple()

print("\n")

print(dt_tuple[0], dt_tuple[1], dt_tuple[2])
print(dt_tuple[3], dt_tuple[4], dt_tuple[5])
print(dt_tuple[6], dt_tuple[7], dt_tuple[8])

Output:

local dt: 2023-02-10T23:34:00.062678

2023 2 10
23 34 0
4 41 -1

We’re interested in the first 6 (six) elements, which are year, month, day, hour, minute and second.

⓶ Next, the datetime constructor reads:

class datetime.datetime(year, month, day, hour=0, minute=0, 
    second=0, microsecond=0, tzinfo=None, *, fold=0)

What that means is: if we have year, month, day, hour, minute, second, microsecond and a time zone, we can create a time zone-aware date time. We’ll ignore microsecond and default it to 0 from here onward.

Let’s pick one of the sessions.expiry values from above, 2023-02-06 21:33:06.190584, and see how this constructor works in the following example:

from datetime import datetime, timezone

utc_datetime = datetime(2023, 2, 6, 21, 33, 6, 0, timezone.utc)

utc_iso_str = datetime.strftime(utc_datetime, "%Y-%m-%dT%H:%M:%S.%f")[:-3]
utc_2_local_iso_str = datetime.strftime(utc_datetime.astimezone(), "%Y-%m-%dT%H:%M:%S.%f")[:-3]

print( "         utc dt: ", utc_iso_str, "tzname: ", utc_datetime.tzname() )
print( "utc to local dt: ", utc_2_local_iso_str, "tzname: ", utc_datetime.astimezone().tzname() )

Output:

         utc dt:  2023-02-06T21:33:06.000 tzname:  UTC
utc to local dt:  2023-02-07T08:33:06.000 tzname:  AUS Eastern Summer Time

In short, to convert the sessions.expiry date times to UTC date times, or more precisely, to assign the UTC time zone to the sessions.expiry values, there are two (2) steps involved:

I am not sure if this is the most effective way of doing this; please keep an eye out for a better approach.
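Putting the two methods above together, the conversion can be sketched as follows; the sample value is one of the sessions.expiry values quoted earlier, with the microseconds dropped.

```python
from datetime import datetime, timezone

# Hypothetical sessions.expiry value, read back from the database as a
# naive datetime (no time zone information attached).
stored_expiry = datetime(2023, 2, 6, 21, 33, 6)

# Step 1: break the naive value into components with timetuple().
t = stored_expiry.timetuple()

# Step 2: rebuild it with the UTC time zone attached via the datetime
# constructor, then convert it to local time with astimezone().
utc_aware = datetime(t[0], t[1], t[2], t[3], t[4], t[5], 0, timezone.utc)
local_expiry = utc_aware.astimezone()

print(utc_aware.isoformat())    # 2023-02-06T21:33:06+00:00
print(local_expiry.isoformat())
```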

I have tried this successfully with MySQL and PostgreSQL:

✿✿✿

I hope you find the information in this post useful and helpful. Thank you for reading and stay safe as always.

Python: SQLAlchemy — understanding sessions and associated queries.

In this post, we look at some of the basics of sessionmaker and scoped_session, and their associated query methods.

Please note that this post is not a tutorial. I sought to answer some of my own questions, and I’m merely writing down my answers.

The source database used in this post is the MySQL test data released by Oracle Corporation. Downloadable from https://github.com/datacharmer/test_db.

Let’s start with the Session class. SQLAlchemy official documentation Using the Session.

โถ We can use instances of sessionmaker to run full text queries:

from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from sqlalchemy import text

SQLALCHEMY_DATABASE_URL = "mysql+mysqlconnector://behai:pcb.2176310315865259@localhost/employees"

engine = create_engine(SQLALCHEMY_DATABASE_URL, echo=False, future=True)

session_factory = sessionmaker(autocommit=False, autoflush=False, bind=engine, future=True)

session = session_factory()

sql = "select * from employees where last_name like '%treh%' limit 0, 10"

dataset = session.execute(text(sql))
for r in dataset:
    print(r)

dataset.close()
session.close()

For future=True, please see SQLAlchemy 2.0 Future (Core).

โท Next comes the scoped_session. Basically, it is the session that we should use in web applications: each scoped session is โ€œlocalโ€ to the context a web request. Please see Contextual/Thread-local Sessions.

For the purpose of this discussion, we will not be doing any web applications. Simple command line Python scripts would suffice for illustration purposes:

from threading import get_ident
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker, scoped_session
from sqlalchemy import text

SQLALCHEMY_DATABASE_URL = "mysql+mysqlconnector://behai:pcb.2176310315865259@localhost/employees"

engine = create_engine(SQLALCHEMY_DATABASE_URL, echo=False, future=True)

session_factory = sessionmaker(autocommit=False, autoflush=False, bind=engine, future=True)

database_sesssion = scoped_session(session_factory, scopefunc=get_ident)

session = database_sesssion(future=True)

sql = "select * from employees where last_name like '%treh%' limit 0, 10"

dataset = session.execute(text(sql))
for r in dataset:
    print(r)

dataset.close()
session.close()

This script does the same thing as the previous one, barring one addition and one modification:

The addition is line 12:

database_sesssion = scoped_session(session_factory, scopefunc=get_ident)

For scopefunc=get_ident, please see sqlalchemy.orm.scoping.scoped_session; this Stack Overflow post, flask _app_ctx_stack.__ident_func__ error due to __ident_func__ deprecated in werkzeug 2.1, should make it clearer.

The modification is line 14:

session = database_sesssion(future=True)

In this second script, instead of getting a session directly from sessionmaker, we get one indirectly from scoped_session. We are running the same query as the previous script, so the final output is also the same.

From here on, the scripts all have lines 1-12 (one to twelve) identical; I will list only the relevant new code for the current discussion.

โธ According to Contextual/Thread-local Sessions, after:

session = database_sesssion(future=True)

If we make repeated calls to database_sesssion(), we get back the same session:

...
session = database_sesssion(future=True)
session1 = database_sesssion()

print(f"1. database_sesssion: {id(database_sesssion)}")
print(f"1. session: {id(session)}")
print(f"1. session1: {id(session1)}")
print(f"1. session is session1: {session is session1}")

Output:

1. database_sesssion: 1724058633408
1. session: 1724061992896
1. session1: 1724061992896
1. session is session1: True

We can call database_sesssion(future=True) only once; subsequent calls must be made with no parameters, otherwise they result in the following exception:

(venv) F:\my_project>venv\Scripts\python.exe src\my_project\my_python_script.py
Traceback (most recent call last):
  File "F:\my_project\src\my_project\my_python_script.py", line 15, in <module>
    session1 = database_sesssion(future=True)
  File "F:\my_project\venv\lib\site-packages\sqlalchemy\orm\scoping.py", line 39, in __call__
    raise sa_exc.InvalidRequestError(
sqlalchemy.exc.InvalidRequestError: Scoped session is already present; no new arguments may be specified.

โน According to method sqlalchemy.orm.scoping.scoped_session.remove():

Dispose of the current Session, if present.

This will first call Session.close() method on the current Session, which releases any existing transactional/connection resources still being held; transactions specifically are rolled back. The Session is then discarded. Upon next usage within the same scope, the scoped_session will produce a new Session object.

Let’s see what that means. The following script produces the same results as the first script:
...
session = database_sesssion(future=True)
session.close()

database_sesssion.remove()

session = database_sesssion(future=True)

sql = "select * from employees where last_name like '%treh%' limit 0, 10"

dataset = session.execute(text(sql))
for r in dataset:
    print(r)

dataset.close()
session.close()

After calling database_sesssion.remove(), a subsequent call with a parameter, session = database_sesssion(future=True), actually works: it does not raise an exception.

This makes sense in the context of the above statement: the internal registry is now empty, and there is no active Session present, so we can create a new one with whatever configuration we see fit.

This would suggest that the database_sesssion itself is still the same object after calling database_sesssion.remove(). Let’s test it out with the following script:

...
session = database_sesssion(future=True)
session1 = database_sesssion()

print(f"1. database_sesssion: {id(database_sesssion)}")
print(f"1. session: {id(session)}")
print(f"1. session1: {id(session1)}")
print(f"1. session is session1: {session is session1}")

session1.close()
session.close()

database_sesssion.remove()

session = database_sesssion(future=True)
session1 = database_sesssion()

print(f"2. database_sesssion: {id(database_sesssion)}")
print(f"2. session: {id(session)}")
print(f"2. session1: {id(session1)}")
The output confirms that it is:
1. database_sesssion: 2102627796160
1. session: 2102631155648
1. session1: 2102631155648
1. session is session1: True
2. database_sesssion: 2102627796160
2. session: 2102631155792
2. session1: 2102631155792
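The whole remove() lifecycle, including the no-new-arguments rule, can be confirmed in one self-contained sketch. Again, the in-memory SQLite engine is my assumption, so the snippet runs without the MySQL employees database:

```python
from threading import get_ident

from sqlalchemy import create_engine
from sqlalchemy.exc import InvalidRequestError
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")  # stand-in for the MySQL database
database_sesssion = scoped_session(sessionmaker(bind=engine), scopefunc=get_ident)

# The first call with keyword arguments is fine: the registry is empty.
s1 = database_sesssion(autoflush=False)

# A second call with keyword arguments: the registry already holds a Session.
raised = False
try:
    database_sesssion(autoflush=False)
except InvalidRequestError:
    raised = True
print(raised)  # True

# remove() closes and discards the current Session...
database_sesssion.remove()

# ...so keyword arguments are accepted again, and a new Session is produced.
s2 = database_sesssion(autoflush=False)
print(s1 is s2)  # False

database_sesssion.remove()
```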

โบ We will introduce a model, which basically is a SQLAlchemy class representation of a database table. As per documentation, we will descend our model from function sqlalchemy.orm.declarative_base(…), then we can use method sqlalchemy.orm.scoping.scoped_session.query_property(query_cls=None) with this model. We will do a model for the employees table.

โบโ“ต Despite the official document example, I wanted to try using database_sesssion.query_property() directly, this is my first attempt ( I’m listing a complete new script ):

from threading import get_ident
from sqlalchemy import (
    create_engine,
    Column,
    Integer,
    Date,
    String,
)
from sqlalchemy.orm import sessionmaker, scoped_session
"""
Any one of these import of declarative_base will work.
"""
# from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import declarative_base

SQLALCHEMY_DATABASE_URL = "mysql+mysqlconnector://behai:pcb.2176310315865259@localhost/employees"

engine = create_engine(SQLALCHEMY_DATABASE_URL, echo=False, future=True)

session_factory = sessionmaker(autocommit=False, autoflush=False, bind=engine, future=True)

database_sesssion = scoped_session(session_factory, scopefunc=get_ident)

class Employees(declarative_base()):
    __tablename__ = 'employees'

    emp_no = Column(Integer, primary_key=True)
    birth_date = Column(Date, nullable=False)
    first_name = Column(String(14), nullable=False)
    last_name = Column(String(16), nullable=False)
    gender = Column(String(1), nullable=False )
    hire_date = Column(Date, nullable=False )

query = database_sesssion.query_property()
result = query.filter(Employees.emp_no==16621).first()

I did expect it to work, but it did not:

(venv) F:\my_project>venv\Scripts\python.exe src\my_project\my_python_script.py
Traceback (most recent call last):
  File "F:\my_project\src\my_project\my_python_script.py", line 35, in <module>
    result = query.filter(Employees.emp_no==16621).first()
AttributeError: 'query' object has no attribute 'filter'

From here on, scripts will have lines 1-22 ( one to twenty-two ) identical to ❺⓵; I will only list the relevant additions and changes for the current discussion.

โบโ“ถ The example snippet listed under method sqlalchemy.orm.scoping.scoped_session.query_property(query_cls=None) is:

Session = scoped_session(sessionmaker())

class MyClass(object):
    query = Session.query_property()

# after mappers are defined
result = MyClass.query.filter(MyClass.name=='foo').all()

Accordingly, I modified ❺⓵ as follows:

...
class BaseModel(object):
    query = database_sesssion.query_property()

Base = declarative_base(cls=BaseModel)

class Employees(Base):
    __tablename__ = 'employees'

    ...

result = Employees.query.filter(Employees.emp_no==16621).first()
print(result.__dict__)

And it does work as expected:

(venv) F:\my_project>venv\Scripts\python.exe src\my_project\my_python_script.py
{'_sa_instance_state': <sqlalchemy.orm.state.InstanceState object at 0x0000026A680079A0>, 'last_name': 'Strehl', 'emp_no': 16621, 'hire_date': datetime.date(1992, 6, 11), 'first_name': 'Parviz', 'gender': 'M', 'birth_date': datetime.date(1962, 5, 30)}

I don’t know why it has to be like this, but the official document states that it must be, and my experimentation confirms it. I’m not going to dig into the SQLAlchemy code to see why! I have a feeling that it would be futile, too 😂…
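My guess, expressed as a plain-Python analogy rather than SQLAlchemy’s actual code, is that query_property() returns a descriptor: its __get__ only fires when it is accessed as a class attribute, which would explain why a bare module-level variable never produces a usable query. Both classes below are invented for the analogy:

```python
class BoundQuery:
    """Stand-in for sqlalchemy.orm.Query, invented for this analogy."""
    def __init__(self, owner):
        self.owner = owner

    def filter(self, *criteria):
        return f"SELECT ... FROM {self.owner.__name__}"


class QueryProperty:
    """Analogy for what query_property() might return: a descriptor."""
    def __get__(self, instance, owner):
        # Only runs when accessed through a class attribute lookup.
        return BoundQuery(owner)


# Assigned to a plain variable, __get__ never fires: we are holding the
# descriptor object itself, which has no filter() method.
query = QueryProperty()
print(hasattr(query, "filter"))  # False

# As a class attribute, attribute access triggers __get__ and yields a
# query object bound to the owning class.
class Employees:
    query = QueryProperty()

print(Employees.query.filter())  # SELECT ... FROM Employees
```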

We’ve seen previously that scoped_session(…)([…]) and sessionmaker(…)() both produce an instance of class sqlalchemy.orm.Session(…), and this class has the method sqlalchemy.orm.Session.query(*entities, **kwargs). This method can be used to do what scoped_session(…).query_property() does, barring some differences. The first is that the model can descend directly from declarative_base(). Secondly, the syntax is slightly different.

โบโ“ทThe modified script which uses scoped_session(…)([…])’s query() method:

...
class Employees(declarative_base()):
    __tablename__ = 'employees'

    ...

session = database_sesssion(future=True)
result = session.query(Employees).filter(Employees.emp_no==10545).first()
print( result.__dict__ )

โบโ“ธ And similarly for sessionmaker(…)():

...
class Employees(declarative_base()):
    __tablename__ = 'employees'

    ...

session = session_factory()
result = session.query(Employees).filter(Employees.emp_no==11000).first()
print( result.__dict__ )
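As an aside, scopefunc=get_ident is what makes the scoped session suit one-session-per-request usage: the registry is keyed by thread id. The sketch below uses two worker threads standing in for two web requests, and an in-memory SQLite engine as an assumption so it runs without the MySQL employees database:

```python
import threading
from threading import get_ident

from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

engine = create_engine("sqlite://")  # stand-in for the MySQL database
database_sesssion = scoped_session(sessionmaker(bind=engine), scopefunc=get_ident)

sessions = {}

def handle_request(name):
    # Each thread gets its own entry in the scoped registry.
    sessions[name] = database_sesssion()
    # What a web framework does at request teardown:
    database_sesssion.remove()

for name in ("request-1", "request-2"):
    worker = threading.Thread(target=handle_request, args=(name,))
    worker.start()
    worker.join()

# Two "requests", two distinct Session objects.
print(sessions["request-1"] is sessions["request-2"])  # False
```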

Let’s reiterate that we use scoped_session with web applications, as per Contextual/Thread-local Sessions. We include sessionmaker for the sake of comparison and completeness. Back to scoped_session’s query() method and query property:

โ— โบโ“ถ: result = Employees.query.filter(Employees.emp_no==16621).first()

โ— โบโ“ท: result = session.query(Employees).filter(Employees.emp_no==10545).first()

I choose the ❺⓶ approach, despite needing an extra base class. That is:

class BaseModel(object):
    query = database_sesssion.query_property()

This BaseModel can be the parent class for all of the database table models, so in the long run it sort of pays for itself.
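To see the chosen pattern end to end, here is a minimal self-contained version. The in-memory SQLite engine, the cut-down employees columns, and the seeded row are assumptions, purely so the snippet runs without the MySQL employees database:

```python
from threading import get_ident

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

engine = create_engine("sqlite://")  # stand-in for the MySQL database
database_sesssion = scoped_session(sessionmaker(bind=engine), scopefunc=get_ident)

class BaseModel(object):
    # Every model inheriting BaseModel gets the query property for free.
    query = database_sesssion.query_property()

Base = declarative_base(cls=BaseModel)

class Employees(Base):
    __tablename__ = "employees"

    emp_no = Column(Integer, primary_key=True)
    last_name = Column(String(16), nullable=False)

Base.metadata.create_all(engine)

# Seed one row so the query property has something to find.
session = database_sesssion()
session.add(Employees(emp_no=16621, last_name="Strehl"))
session.commit()

result = Employees.query.filter(Employees.emp_no == 16621).first()
print(result.last_name)  # Strehl

database_sesssion.remove()
```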

✿✿✿

As I have stated up front, this post is not a tutorial. I find SQLAlchemy to be a tough row to hoe… I set out to answer my own questions. I hope the content of this post will be useful for others who are learning SQLAlchemy. I do hope I have not made any mistakes in this post. Thank you for reading and stay safe as always.
