Rust: Actix-web — Async Functions as Middlewares

In the tenth post of our actix-web learning application, we added an ad hoc middleware. In this post, with the assistance of the actix-web-lab crate, we will refactor this ad hoc middleware into a standalone async function to enhance the overall code readability.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.13.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the twelve previous posts. The index of the complete series can be found here.

The code we’re developing in this post is a continuation of the code from the twelfth post. 🚀 To get the code of this twelfth post, please use the following command:

git clone -b v0.12.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.12.0.

While this post continues from previous posts in this series, it can be read in conjunction with only the tenth post, focusing particularly on the section titled Code Updated in the src/lib.rs Module.

❶ For this post, no new modules are introduced. Instead, we will update existing modules and files. The layout chart below displays the updated files and modules, with those marked with ★ indicating the ones that have been updated.

.
├── Cargo.toml ★
├── ...
├── README.md ★
└── src
    ├── lib.rs ★
    └── ...

❷ An update to the Cargo.toml file:

...
[dependencies]
...
actix-web-lab = "0.20.2"

We added the new crate actix-web-lab. This crate is:

In-progress extractors and middleware for Actix Web.

This crate provides mechanisms for implementing middlewares as standalone async functions, rather than using actix-web's wrap_fn.

According to the documentation, the actix-web-lab crate is essentially experimental. Functionalities implemented in this crate might eventually be integrated into the actix-web crate. In such a case, we would need to update our code.

❸ Refactor an existing ad hoc middleware out of wrap_fn.

As mentioned at the beginning, this post should be read in conjunction with the tenth post, where we introduced this ad hoc middleware. The description of this simple middleware functionality is found in the section Code Updated in the src/lib.rs Module of the tenth post.

Below, we reprint the code of this ad hoc middleware:

            //
            // This ad hoc middleware looks for the updated access token String attachment in 
            // the request extension, if there is one, extracts it and sends it to the client 
            // via both the ``authorization`` header and cookie.
            //
            .wrap_fn(|req, srv| {
                let mut updated_access_token: Option<String> = None;

                // Get set in src/auth_middleware.rs's 
                // fn update_and_set_updated_token(request: &ServiceRequest, token_status: TokenStatus).
                if let Some(token) = req.extensions_mut().get::<String>() {
                    updated_access_token = Some(token.to_string());
                }

                srv.call(req).map(move |mut res| {

                    if updated_access_token.is_some() {
                        let token = updated_access_token.unwrap();
                        res.as_mut().unwrap().headers_mut().append(
                            header::AUTHORIZATION, 
                            header::HeaderValue::from_str(token.as_str()).expect(TOKEN_STR_JWT_MSG)
                        );

                        let _ = res.as_mut().unwrap().response_mut().add_cookie(
                            &build_authorization_cookie(&token));
                    };

                    res
                })
            })

It’s not particularly lengthy, but its inclusion in the application instance construction process makes it difficult to read. While closures can call functions, refactoring this implementation into a standalone function isn’t feasible. This is because the function would require access to the parameter srv, which in this case refers to the AppRouting struct. Please refer to the screenshot below for clarification:

The AppRouting struct is located in the private module actix-web/src/app_service.rs, which means we don’t have direct access to it. I attempted to refactor it into a standalone function but encountered difficulties. Someone else had also attempted it before me and faced similar issues.

Please refer to the GitHub issue titled wrap_fn &AppRouting should use Arc<AppRouting> #2681 for more details. This reply suggests using the actix-web-lab crate.

I believe I’ve come across this crate before, particularly the function actix_web_lab::middleware::from_fn, but it didn’t register with me at the time.

Drawing from the official example actix-web-lab/actix-web-lab/examples/from_fn.rs and compiler suggestions, I’ve successfully refactored the ad hoc middleware mentioned above into the standalone async function async fn update_return_jwt<B>(req: ServiceRequest, next: Next<B>) -> Result<ServiceResponse<B>, Error>. The screenshot below, taken from Visual Studio Code with the Rust-Analyzer plugin, displays the full source code and variable types:

Compared to the original ad hoc middleware, the code is virtually unchanged. It’s worth noting that this final version is the result of my sixth or seventh attempt; without the compiler suggestions, I would not have been able to complete it. We register it with the application instance using only a single line, as per the documentation:

            .wrap(from_fn(update_return_jwt))
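
The full source of update_return_jwt appears in the post only as a screenshot. Below is a reconstruction of it: a sketch of mine, pieced together from the reprinted closure and the signature quoted above, so details may differ from the repository. TOKEN_STR_JWT_MSG and build_authorization_cookie come from the existing code base.

```rust
use actix_web::{
    dev::{ServiceRequest, ServiceResponse},
    http::header,
    Error, HttpMessage,
};
use actix_web_lab::middleware::Next;

// Sketch: same logic as the ad hoc closure, but as a standalone async function.
async fn update_return_jwt<B>(
    req: ServiceRequest,
    next: Next<B>,
) -> Result<ServiceResponse<B>, Error> {
    let mut updated_access_token: Option<String> = None;

    // Set in src/auth_middleware.rs's fn update_and_set_updated_token(...).
    if let Some(token) = req.extensions_mut().get::<String>() {
        updated_access_token = Some(token.to_string());
    }

    let mut res = next.call(req).await?;

    if let Some(token) = updated_access_token {
        res.headers_mut().append(
            header::AUTHORIZATION,
            header::HeaderValue::from_str(token.as_str()).expect(TOKEN_STR_JWT_MSG),
        );

        let _ = res.response_mut().add_cookie(&build_authorization_cookie(&token));
    }

    Ok(res)
}
```

Note how next.call(req).await replaces the srv.call(req).map(...) construct, removing the need to access the private AppRouting struct.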

❹ Other minor refactorings include optimising the application instance builder code for brevity. Specifically, I’ve moved the code to create the CORS instance to the standalone function fn cors_config(config: &config::Config) -> Cors, and the code to create the session store to the standalone async function async fn config_session_store() -> (actix_web::cookie::Key, RedisSessionStore).

Currently, the src/lib.rs module is less than 250 lines long, housing 7 helper functions that are completely unrelated. I find it still very manageable. The code responsible for creating the server instance and the application instance, encapsulated in the function pub async fn run(listener: TcpListener) -> Result<Server, std::io::Error>, remains around 60 lines. Although I anticipate it will grow a bit more as we add more functionalities, I don’t foresee it becoming overly lengthy.

❺ I am happy to have learned something new about actix-web. And I hope you find the information useful. Thank you for reading. And stay safe, as always.

✿✿✿

Feature image source:

🦀 Index of the Complete Series.

Rust: Actix-web Daily Logging — Fix Local Offset, Apply Event Filtering

In the last post of our actix-web learning application, we identified two problems. First, there is an issue with calculating the UTC time offset on Ubuntu 22.10, as described in the section 💥 Issue with calculating UTC time offset on Ubuntu 22.10. Second, logging events from other crates that match the logging configuration are also written to the log files, as mentioned in the Concluding Remarks section; we should be able to configure what gets logged. We will address both of these issues in this post.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.12.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the eleven previous posts. The index of the complete series can be found here.

The code we’re developing in this post is a continuation of the code from the eleventh post. 🚀 To get the code of this eleventh post, please use the following command:

git clone -b v0.11.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.11.0.

While this post continues from previous posts in this series, it can be read in conjunction with only the eleventh post.

❶ For this post, no new modules are introduced. Instead, we will update some existing modules and files. The layout chart below shows the updated files and modules, with those marked with ★ indicating the updated ones.

.
├── .env ★
├── Cargo.toml ★
├── ...
├── README.md ★
├── src
│   ├── helper
│   │   ├── app_logger.rs ★
│   │   └── ...
│   ├── main.rs ★
│   └── ...
└── tests
    ├── common.rs ★
    └── ...

🦀 In the context of this post, our focus is solely on the RUST_LOG entry in the .env file, which we will discuss in a later section.

❷ Updates to the Cargo.toml file:

...
[dependencies]
...
time-tz = {version = "2.0", features = ["system"]}
tracing-subscriber = {version = "0.3", features = ["fmt", "std", "local-time", "time", "env-filter"]}

We added the new crate time-tz, and for tracing-subscriber, we added the crate feature env-filter. We will discuss these additions in later sections.

❸ Resolve the issue with calculating the UTC time offset to ensure reliable functionality on both Ubuntu 22.10 and Windows 10.

⓵ As mentioned in the last post:

After extensive searching, I came across this GitHub issue, Document #293 in local-offset feature description #297. It appears that even after three years, this issue remains unresolved.

Document #293 is dated December 19, 2020. Additionally, there are other relevant documents that I did not come across during my previous “extensive searching”:

  1. November 25, 2020 — Time 0.2.23 fails to determine local offset #296.
  2. November 2, 2021 — Better solution for getting local offset on unix #380.
  3. December 5, 2019 — tzdb support #193.

This reply posted on February 13, 2022 mentions the crate time-tz, which resolves the previously mentioned issue.

The parameter utc_offset: time::UtcOffset was removed from the function pub fn init_app_logger() -> WorkerGuard, and the offset calculation is now carried out internally:

    let timer = OffsetTime::new(
        localtime.offset(),
        format_description!("[year]-[month]-[day] [hour]:[minute]:[second]"),
    );
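
The snippet above refers to a localtime variable that is not reprinted here. Based on the time-tz crate's documented API (with the system feature enabled, as declared in Cargo.toml), it is presumably obtained along these lines; treat this as an assumption on my part rather than the repository's exact code:

```rust
use time::OffsetDateTime;
use time_tz::{system::get_timezone, OffsetDateTimeExt};

// Resolve the system time zone via time-tz, which works where
// UtcOffset::current_local_offset() raises IndeterminateOffset.
let timezone = get_timezone().expect("Failed to get the system time zone");
let localtime = OffsetDateTime::now_utc().to_timezone(timezone);
```

localtime.offset() then yields the local UTC offset internally, removing the need for the utc_offset parameter.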

⓶ Offset calculations were accordingly removed from both the function async fn main() -> Result<(), std::io::Error> in the module src/main.rs, and the function pub async fn spawn_app() -> TestApp in the module tests/common.rs.

❹ Configuration to determine what gets logged.

We use tracing_subscriber's filter::EnvFilter struct to filter which events are logged. This functionality requires the crate feature env-filter, as described above.

Event filtering is configured via the environment variable RUST_LOG. Its value can be much more sophisticated than simply trace, debug, info, warn and error. The documentation in the section Enabling logging of the env_logger crate describes the syntax of RUST_LOG with plenty of informative examples.

⓵ Implementing event filtering for the function pub fn init_app_logger() -> WorkerGuard:

    let filter_layer = EnvFilter::try_from_default_env()
        .or_else(|_| EnvFilter::try_new("debug"))
        .unwrap();

    let subscriber = tracing_subscriber::registry()
        .with(
            ...
                .with_filter(filter_layer)
        );

Please note that the code to convert the value of RUST_LOG to tracing::Level has also been removed.

For further documentation, please refer to Filtering Events with Environment Variables. The code above is from the section Composing Layers of the mentioned documentation page. As for the two functions being called, please see:

pub fn try_from_default_env() -> Result<Self, FromEnvError>.

pub fn try_new<S: AsRef<str>>(dirs: S) -> Result<Self, ParseError>.

And finally, the line:

            ...
                .with_filter(filter_layer)

is from the trait tracing_subscriber::layer::Layer, which is “a composable handler for tracing events.”

⓶ As for the value of RUST_LOG, there are three cases that do not behave as I initially assumed:

The first two cases are RUST_LOG=xxxx and RUST_LOG=, where nothing gets logged. I had assumed that this error handling would default the logging event to debug:

        .or_else(|_| EnvFilter::try_new("debug"))

I attempted several times to default them to debug, but unfortunately, I was unsuccessful.

The third case is RUST_LOG, where only the RUST_LOG variable name is present in the .env file without any value assigned to it. Based on the above two instances, I expected that nothing would be logged. However, it defaults to debug!
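
The fallback behaviour I was after can, as one illustrative approach, be obtained by normalising the variable before it ever reaches EnvFilter::try_new. This is a sketch of mine, not code from the repository; effective_directive is a hypothetical helper:

```rust
use std::env;

// Hypothetical helper: default an absent or empty RUST_LOG to "debug"
// explicitly, instead of relying on EnvFilter's error path.
fn effective_directive(raw: Option<&str>) -> String {
    match raw {
        Some(val) if !val.trim().is_empty() => val.to_string(),
        _ => "debug".to_string(),
    }
}

fn main() {
    // The surprising cases discussed above:
    assert_eq!(effective_directive(None), "debug"); // RUST_LOG absent.
    assert_eq!(effective_directive(Some("")), "debug"); // RUST_LOG=
    // RUST_LOG=xxxx is passed through unchanged; whether the directive
    // matches anything is then up to EnvFilter.
    assert_eq!(effective_directive(Some("xxxx")), "xxxx");

    // In real code, the raw value would come from the environment:
    let from_env = effective_directive(env::var("RUST_LOG").ok().as_deref());
    println!("effective directive: {from_env}");
}
```

The normalised string would then be handed to EnvFilter::try_new, bypassing the or_else branch entirely.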

Please note that for the next example discussion, it’s important to keep in mind that the Cargo.toml file contains the following declaration, where learn_actix_web is defined:

[[bin]]
path = "src/main.rs"
name = "learn_actix_web"

Examples of some valid values:

  1. RUST_LOG=off,learn_actix_web=debug — Only debug logging events from the learn_actix_web crate are logged. All logging events from other crates are ignored.
  2. RUST_LOG=off,learn_actix_web=info — Only info logging events from the application are logged. If there are no info events in the application, nothing gets logged and the log files remain empty.
  3. RUST_LOG=off,learn_actix_web=debug,actix_server=info — Only debug events from the application and info events from the actix_server crate are logged.
  4. RUST_LOG=off,learn_actix_web::middleware=debug — Only debug events from the src/middleware.rs module of the application are logged. This middleware is triggered when accessing the GET route http://0.0.0.0:5000/helloemployee/{partial last name}/{partial first name} from an authenticated session.

A further illustration for example 4 above: Log in and click on the last button as shown in the screenshot below:

The current log file should contain the following three new lines:

2024-03-18 00:51:15 DEBUG learn_actix_web::middleware: Hi from start. You requested: /helloemployee/%chi/%ak
2024-03-18 00:51:15 DEBUG learn_actix_web::middleware: Middleware. last name: %chi, first name: %ak.
2024-03-18 00:51:15 DEBUG learn_actix_web::middleware: Hi from response -- some employees found.

This finer-grained control demonstrates how powerful and helpful event filtering can be when tracking an intermittent bug that is not reproducible in staging and development environments. By enabling debug and trace logging for specific modules only, we can effectively troubleshoot such issues.

❺ Logging events should be grouped within the unique ID of the authenticated session.

For each authenticated session, there is a third-party session ID. I have conducted some studies on this cookie, and its value seems to change after each request. For further discussion of this ID under HTTPS, please refer to this discussion.

My initial plan is to group logging for each request under the value of this ID. For example:

** 2024-03-18 00:51:15 DEBUG learn_actix_web::middleware: {value of id} entered.
2024-03-18 00:51:15 DEBUG learn_actix_web::middleware: Hi from start. You requested: /helloemployee/%chi/%ak
...
** 2024-03-18 00:51:15 DEBUG learn_actix_web::middleware: {value of id} exited.

I have not yet determined how to achieve this; further study is required.
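
One possible direction, an assumption on my part rather than something implemented in this post, is tracing's span mechanism: fields recorded on a span are attached to every event emitted while the span's guard is alive, which would effectively group a request's log lines under one ID.

```rust
use tracing::{debug, info_span};

// Sketch under assumptions: `session_id` is a hypothetical variable
// holding the authenticated session's ID.
fn handle_request(session_id: &str) {
    let span = info_span!("request", id = %session_id);
    let _guard = span.enter();

    debug!("entered"); // Emitted with the span's id field attached.
    // ... handler work; all events here carry id={session_id} ...
    debug!("exited");
}
```

Whether this composes cleanly with the actix-web request lifecycle is exactly the further study mentioned above.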

❻ We have concluded this post. I’m pleased to have resolved the offset issue and to have implemented logging in a more effective manner.

I hope you find the information useful. Thank you for reading. And stay safe, as always.

✿✿✿

Feature image source:

🦀 Index of the Complete Series.

Rust: Actix-web and Daily Logging

Currently, our actix-web learning application simply prints debug information to the console using the println! macro. In this post, we will implement proper non-blocking daily logging to files. Daily logging entails rotating to a new log file each day. Non-blocking refers to having a dedicated thread for file writing operations. We will utilise the tracing, tracing-appender, and tracing-subscriber crates for our logging implementation.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.11.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the following ten previous posts:

  1. Rust web application: MySQL server, sqlx, actix-web and tera.
  2. Rust: learning actix-web middleware 01.
  3. Rust: retrofit integration tests to an existing actix-web application.
  4. Rust: adding actix-session and actix-identity to an existing actix-web application.
  5. Rust: actix-web endpoints which accept both application/x-www-form-urlencoded and application/json content types.
  6. Rust: simple actix-web email-password login and request authentication using middleware.
  7. Rust: actix-web get SSL/HTTPS for localhost.
  8. Rust: actix-web CORS, Cookies and AJAX calls.
  9. Rust: actix-web global extractor error handlers.
  10. Rust: actix-web JSON Web Token authentication.

The code we’re developing in this post is a continuation of the code from the tenth post above. 🚀 To get the code of this tenth post, please use the following command:

git clone -b v0.10.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.10.0.

While this post continues from previous posts in this series, it can also be read independently. The logging module developed herein can be used in other projects without modification.

❶ For this post, we introduce a new module src/helper/app_logger.rs, and some other modules and files are updated. The project layout remains the same as in the last post. The layout chart below shows the affected files and modules:

— Please note that files marked with ★ are updated, and src/helper/app_logger.rs is marked with ☆, as it is the only new module.

.
├── .env ★
├── Cargo.toml ★
├── ...
├── README.md ★
├── src
│   ├── auth_middleware.rs ★
│   ├── database.rs ★
│   ├── helper
│   │   ├── app_logger.rs ☆
│   │   └── ...
│   ├── helper.rs ★
│   ├── main.rs ★
│   ├── middleware.rs ★
│   └── ...
└── tests
    ├── common.rs ★
    └── ...

❷ An update to the .env file: a new entry has been added:

RUST_LOG=debug

The value of RUST_LOG is translated into tracing::Level. Valid values include trace, debug, info, warn and error. Any other values are invalid and will default to Level::DEBUG.

❸ Updates to the Cargo.toml file: as expected, the new crates are added to the [dependencies] section.

...
[dependencies]
...
tracing = "0.1"
tracing-appender = "0.2"
tracing-subscriber = {version = "0.3", features = ["fmt", "std", "local-time", "time"]}

❹ 💥 Issue with calculating UTC time offset on Ubuntu 22.10.

In the new code added for this post, we need to calculate the UTC time offset to obtain local time. The following code works on Windows 10:

use time::UtcOffset;

let offset = UtcOffset::current_local_offset().unwrap();

However, on Ubuntu 22.10, it doesn’t always function as expected. Sometimes, it raises the error IndeterminateOffset. The inconsistency in its behavior makes it challenging to identify a clear pattern of when it works and when it doesn’t.

After extensive searching, I came across this GitHub issue, Document #293 in local-offset feature description #297. It appears that even after three years, this issue remains unresolved.

This complication adds an extra layer of difficulty in ensuring both the code and integration tests function properly. In the subsequent sections of this post, when discussing the code, we’ll refer back to this issue when relevant. Please keep this in mind.

❺ The src/helper/app_logger.rs module has been designed to be easily copied into other projects, provided that the Cargo.toml file includes the required crates discussed earlier.

This module contains only a single public function, pub fn init_app_logger(utc_offset: time::UtcOffset) -> WorkerGuard, which the application calls to set up the log. Please refer to the notes and documentation within this module while reading the code.

Originally, the utc_offset: time::UtcOffset parameter was not present. However, due to the issue mentioned in 💥 Issue with calculating UTC time offset on Ubuntu 22.10, the code was refactored to include this parameter, offering a bit more flexibility.

⓵ Setting up the daily log files.

    let log_appender = RollingFileAppender::builder()
        .rotation(Rotation::DAILY) // Daily log file rotation.
        .filename_suffix("log") // Log file names will be suffixed with `.log`.
        .build("./log") // Build an appender that stores log files in `./log`.
        .expect("Initialising rolling file appender failed");

To set up the daily log files, we begin by calling the pub fn builder() -> Builder function.

We specify DAILY rotation to generate daily log files. However, it’s important to note that according to the documentation, the file names are appended with the current date in UTC. Since I’m in the Australian Eastern Standard Time (AEST) zone, which is 10-11 hours ahead of UTC, there were instances where my log file names were created with dates from the previous day.

To give log files the .log extension, we use the method pub fn filename_suffix(self, suffix: impl Into<String>) -> Self.

The format of the daily log file names follows the pattern YYYY-MM-DD.log, for example, 2024-03-10.log.

We then invoke the method pub fn build(&self, directory: impl AsRef<Path>) -> Result<RollingFileAppender, InitError> to specify the location of the log files within the log/ sub-directory relative to where the application is executed. For instance:

▶️Windows 10: F:\rust\actix_web>target\debug\learn_actix_web.exe
▶️Ubuntu 22.10: behai@hp-pavilion-15:~/rust/actix_web$ /home/behai/rust/actix_web/target/debug/learn_actix_web

This results in the log files being stored in F:\rust\actix_web\log\ and /home/behai/rust/actix_web/log/ respectively, since the log/ sub-directory is resolved relative to the directory from which the application is launched.

⓶ We create a non-blocking writer thread using the following code:

    let (non_blocking_appender, log_guard) = tracing_appender::non_blocking(log_appender);

This is the documentation section for the function tracing_appender::non_blocking. For more detailed documentation, refer to the tracing_appender::non_blocking module. Please note the following:

This function returns a tuple of NonBlocking and WorkerGuard. NonBlocking implements MakeWriter which integrates with tracing_subscriber. WorkerGuard is a drop guard that is responsible for flushing any remaining logs when the program terminates.

Note that the WorkerGuard returned by non_blocking must be assigned to a binding that is not _, as _ will result in the WorkerGuard being dropped immediately. Unintentional drops of WorkerGuard remove the guarantee that logs will be flushed during a program’s termination, in a panic or otherwise.

What this means is that we must keep log_guard alive for the application to continue logging. log_guard is an instance of the WorkerGuard struct and is also the returned value of the public function pub fn init_app_logger(utc_offset: time::UtcOffset) -> WorkerGuard. We will revisit this returned value in a later section.

⓷ Next, we specify the date and time format for each log line. Each line begins with a local date and time. For instance, 2024-03-12-08:19:13:

    // Each log line starts with a local date and time token.
    // 
    // On Ubuntu 22.10, calling UtcOffset::current_local_offset().unwrap() after non_blocking()
    // causes IndeterminateOffset error!!
    // 
    // See also https://github.com/time-rs/time/pull/297.
    //
    let timer = OffsetTime::new(
        //UtcOffset::current_local_offset().unwrap(),
        utc_offset,
        format_description!("[year]-[month]-[day]-[hour]:[minute]:[second]"),
    );

We’ve discussed local dates in some detail in this post.

🚀 Please note that this is a local date and time. In my time zone, Australian Eastern Standard Time (AEST), which is 10-11 hours ahead of UTC, the log file name for a log line that starts with 2024-03-12-08:19:13 would actually be log/2024-03-11.log.

⓸ Next, we attempt to define the tracing::Level based on the environment variable RUST_LOG discussed previously:

    // Extracts tracing::Level from .env RUST_LOG, if there is any problem, 
    // defaults to Level::DEBUG.
    //
    let level: Level = match std::env::var_os("RUST_LOG") {
        None => Level::DEBUG,

        Some(text) => {
            match Level::from_str(text.to_str().unwrap()) {
                Ok(val) => val,
                Err(_) => Level::DEBUG
            }
        }
    };

💥 I initially assumed that having RUST_LOG defined in the environment file .env would suffice. However, it turns out that we need to explicitly set it in the code.

⓹ We then proceed to “create a subscriber” (I hope I’m using the correct terminology):

    let subscriber = tracing_subscriber::registry()
        .with(
            Layer::new()
                .with_timer(timer)
                .with_ansi(false)
                .with_writer(non_blocking_appender.with_max_level(level)
                    .and(std::io::stdout.with_max_level(level)))
        );

The function tracing_subscriber::registry() returns a tracing_subscriber::registry::Registry struct. This struct implements the trait tracing_subscriber::layer::SubscriberExt. The method fn with<L>(self, layer: L) -> Layered<L, Self> from this trait returns a tracing_subscriber::layer::Layered struct, which is a:

A Subscriber composed of a Subscriber wrapped by one or more Layers.

We create the new Layer using tracing_subscriber::fmt::Layer implementation.

Note that non_blocking_appender is an instance of tracing_appender::non_blocking::NonBlocking struct. This struct implements the trait tracing_subscriber::fmt::writer::MakeWriterExt, where the method fn with_max_level(self, level: Level) -> WithMaxLevel<Self> is defined.

🚀 .and(std::io::stdout.with_max_level(level)) means that anything logged to the log file will also be printed to the console.

⓺ Next, the new Subscriber is set as the global default for the duration of the entire program:

    // tracing::subscriber::set_global_default(subscriber) can only be called once. 
    // Subsequent calls raise SetGlobalDefaultError, ignore these errors.
    //
    // There are integration test methods which call this init_app_logger(...) repeatedly!!
    //
    match tracing::subscriber::set_global_default(subscriber) {
        Err(err) => tracing::error!("Logger set_global_default, ignored: {}", err),
        _ => (),
    }

The documentation for the function tracing::subscriber::set_global_default states:

Can only be set once; subsequent attempts to set the global default will fail. Returns whether the initialization was successful.

Since some integration test methods call the pub fn init_app_logger(utc_offset: time::UtcOffset) -> WorkerGuard more than once, we catch potential errors and ignore them.

⓻ Finally, pub fn init_app_logger(utc_offset: time::UtcOffset) -> WorkerGuard returns log_guard, as discussed above.

❻ Updates to the src/main.rs module.

⓵ Coming back to pub fn init_app_logger(utc_offset: time::UtcOffset) -> WorkerGuard, specifically the returned value discussed previously: I read and understood the quoted documentation, and I believe the code was correct. However, it did not write to the log files as expected, so I sought help. As per my help request post, I initially called init_app_logger in the src/lib.rs module’s pub async fn run(listener: TcpListener) -> Result<Server, std::io::Error>. Consequently, as soon as run went out of scope, the returned WorkerGuard was dropped, and the writer thread terminated.

Simply moving it to src/main.rs’s async fn main() -> Result<(), std::io::Error> fixed this problem:

    // Call this to load RUST_LOG.
    dotenv().ok(); 

    // Calling UtcOffset::current_local_offset().unwrap() here works in Ubuntu 22.10, i.e.,
    // it does not raise the IndeterminateOffset error.
    //
    // TO_DO. But this does not guarantee that it will always work! 
    //
    let _guards = init_app_logger(UtcOffset::current_local_offset().unwrap());

Please note the call UtcOffset::current_local_offset().unwrap(). This is due to the problem discussed in the section 💥 Issue with calculating UTC time offset on Ubuntu 22.10.

⓶ The function pub fn init_app_logger(utc_offset: time::UtcOffset) -> WorkerGuard requires the environment variable RUST_LOG as discussed previously. That’s why dotenv().ok() is called in async fn main() -> Result<(), std::io::Error>.

Recall that dotenv().ok() is also called in the src/lib.rs module’s pub async fn run(listener: TcpListener) -> Result<Server, std::io::Error> to load other environment variables. This setup might seem clunky, but I haven’t found a better solution yet!

❼ Updating integration tests. We want integration tests to be able to log as well. These updates are made solely in the tests/common.rs module.

The function pub async fn spawn_app() -> TestApp in tests/common.rs calls the src/lib.rs module’s function pub async fn run(listener: TcpListener) -> Result<Server, std::io::Error> to create application server instances.

This means that spawn_app() must be refactored to call pub fn init_app_logger(utc_offset: time::UtcOffset) -> WorkerGuard and somehow keep the writer thread alive after spawn_app() goes out of scope. We manage this by:

⓵ Update TestApp struct by adding pub guard: WorkerGuard.

⓶ Update the function pub async fn spawn_app() -> TestApp with additional calls:

    // To load RUST_LOG from .env file.
    dotenv().ok(); 

    /*
    On Ubuntu 22.10, calling UtcOffset's offset methods causes IndeterminateOffset error!!

    See also https://github.com/time-rs/time/pull/297

    ...
    */

    // TO_DO: 11 is the current number of hours the Australian Eastern Standard Time (AEST)
    // zone is ahead of UTC. This value needs to be worked out dynamically -- if that is at
    // all possible on Linux!!
    // 
    let guard = init_app_logger(UtcOffset::from_hms(11, 0, 0).unwrap());

Note the call UtcOffset::from_hms(11, 0, 0).unwrap(). This is due to the problem discussed in section 💥 Issue with calculating UTC time offset on Ubuntu 22.10:

— 👎 Unlike src/main.rs, where UtcOffset::current_local_offset().unwrap() works, calling it here consistently results in the IndeterminateOffset error! UtcOffset::from_hms(11, 0, 0).unwrap() works, but again, this is not a guarantee it will keep working.

👎 The value 11 is hardcoded. Presently, the Australian Eastern Standard Time (AEST) zone is 11 hours ahead of UTC. To get the AEST date and time, we need to offset UTC by 11 hours. However, 11 is not a constant value; due to daylight saving time, it changes to 10 hours in the Southern Hemisphere winter (I think). When that happens, this code will no longer be correct.

❽ We’ve reached the conclusion of this post. I’d like to mention that the ecosystem surrounding tracing and logging is incredibly vast. While this post only scratches the surface, it provides a complete working example nonetheless. We can build upon this foundation as needed.

The UTC offset issue on Ubuntu 22.10, as described, must be addressed definitively. However, that task is for another day.

I’m not entirely satisfied with the numerous debug logging events from other crates. These can be filtered out, but that’s a topic for another post, perhaps.

I hope you find the information useful. Thank you for reading. And stay safe, as always.

✿✿✿

Feature image source:

🦀 Index of the Complete Series.

Rust: actix-web JSON Web Token authentication.

In the sixth post of our actix-web learning application, we implemented a basic email-password login process with a placeholder for a token. In this post, we will implement a comprehensive JSON Web Token (JWT)-based authentication system. We will utilise the jsonwebtoken crate, which we have previously studied.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.10.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the following nine previous posts:

  1. Rust web application: MySQL server, sqlx, actix-web and tera.
  2. Rust: learning actix-web middleware 01.
  3. Rust: retrofit integration tests to an existing actix-web application.
  4. Rust: adding actix-session and actix-identity to an existing actix-web application.
  5. Rust: actix-web endpoints which accept both application/x-www-form-urlencoded and application/json content types.
  6. Rust: simple actix-web email-password login and request authentication using middleware.
  7. Rust: actix-web get SSL/HTTPS for localhost.
  8. Rust: actix-web CORS, Cookies and AJAX calls.
  9. Rust: actix-web global extractor error handlers.

The code we’re developing in this post is a continuation of the code from the ninth post above. 🚀 To get the code of this ninth post, please use the following command:

git clone -b v0.9.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.9.0.

Previous Studies on JSON Web Token (JWT)

As mentioned earlier, we conducted studies on the jsonwebtoken crate, as detailed in the post titled Rust: JSON Web Token — some investigative studies on crate jsonwebtoken. The JWT implementation in this post is based on the specifications discussed in the second example of the aforementioned post, particularly focusing on this specification:

🚀 It should be obvious that: this implementation implies SECONDS_VALID_FOR is the duration the token stays valid since last active. It does not mean that after this duration, the token becomes invalid or expired. So long as the client keeps sending requests while the token is valid, it will never expire!

We will provide further details on this specification later in the post. Additionally, before studying the jsonwebtoken crate, we conducted research on the jwt-simple crate, as discussed in the post titled Rust: JSON Web Token — some investigative studies on crate jwt-simple. It would be beneficial to review this post as well, as it covers background information on JWT.

Proposed JWT Implementations: Problems and Solutions

Proposed JWT Implementations

Let’s revisit the specifications outlined in the previous section:

🚀 It should be obvious that: this implementation implies SECONDS_VALID_FOR is the duration the token stays valid since last active. It does not mean that after this duration, the token becomes invalid or expired. So long as the client keeps sending requests while the token is valid, it will never expire!

This concept involves extending the expiry time of a valid token every time a request is made. This functionality was demonstrated in the original discussion, specifically in the second example section mentioned earlier.

🦀 Since the expiry time is updated, we generate a new access token. Here’s what we do with the new token:

  1. Replace the current actix-identity::Identity login with the new access token.
  2. Always send the new access token to clients via both the response header and the response cookie authorization, as in the login process.

We generate a new access token based on this logic, but that doesn't necessarily mean the previous ones have expired.

Problems with the Proposed Implementations

The proposed implementations outlined above present some practical challenges, which we will discuss next.

However, for the sake of learning in this project, we will proceed with the proposed implementations despite the identified issues.

Problems when Used as an API-Server or Service

In an API-like server or a service, users are required to include a valid access token in the request authorization header. Therefore, if a new token is generated, users should have access to this latest token.

What happens if users simply ignore the new tokens and continue using a previous one that has not yet expired? In such a scenario, request authentication would still be successful, and the requests would potentially succeed until the old token expires. However, a more serious concern arises if we implement blacklisting. In that case, we would need to blacklist all previous tokens. This would necessitate writing the current access token to a blacklist table for every request, which is impractical.
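To make that cost concrete, here is a std-only sketch of what refreshing the token on every request would imply once blacklisting exists; the types are stand-ins for illustration, not the application's real code, and the blacklist set plays the role of the blacklist table:

```rust
use std::collections::HashSet;
use std::mem;

// Stand-in store: `current` is the latest access token and `blacklist`
// stands in for the blacklist table discussed above.
struct TokenStore {
    current: String,
    blacklist: HashSet<String>,
}

impl TokenStore {
    fn new(initial: String) -> Self {
        TokenStore { current: initial, blacklist: HashSet::new() }
    }

    // Called once per authenticated request: the previous token must be
    // blacklisted, otherwise clients could keep using it until it expires.
    fn refresh(&mut self, new_token: String) {
        let old = mem::replace(&mut self.current, new_token);
        self.blacklist.insert(old); // one blacklist write per request
    }
}

fn main() {
    let mut store = TokenStore::new("token-0".to_string());
    store.refresh("token-1".to_string());
    store.refresh("token-2".to_string());
    println!("blacklist writes: {}", store.blacklist.len());
}
```

Two requests force two blacklist writes; against a real database table, that is the per-request cost described above.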

Problems when Used as an Application Server

When used as an application server, we simply replace the current actix-identity::Identity login with the new access token. If we implement blacklisting, we only need to blacklist the last token.

🚀 This process makes sense, as we cannot expire a session while a user is still actively using it.

However, we still encounter similar problems as described in the previous section for API-like servers or services. Since clients always have access to the authorization response header and cookie, they can use this token with different client tools to send requests, effectively treating the application as an API-like server or a service.

Proposed Solutions

The above problems would disappear, and the actual implementations would be simpler if we adjust the logic slightly:

  1. Only send the access token to clients once if the content type of the login request is application/json.
  2. Then users of an API-like server or a service will only have one access token until it expires. They will need to log in again to obtain a new token.
  3. Still replace the current actix-identity::Identity login with the new access token. The application server continues to function as usual. However, since users no longer have access to the token, we only need to manage the one stored in the actix-identity::Identity login.

But as mentioned at the start of this section, we will ignore the problems and, therefore, the solutions for this revision of the code.

The “Bearer” Token Scheme

We adhere to the “Bearer” token scheme as specified in RFC 6750, section 2.1. Authorization Request Header Field:

    For example:
GET /resource HTTP/1.1
Host: server.example.com
Authorization: Bearer mF_9.B5f-4.1JqM

That is, the access token used during request authentication is in the format:

Bearer. + the proper JSON Web Token

For example:

Bearer.eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJlbWFpbCI6ImNoaXJzdGlhbi5rb2JsaWNrLjEwMDA0QGdtYWlsLmNvbSIsImlhdCI6MTcwODU1OTcwNywiZXhwIjoxNzA4NTYxNTA3LCJsYXN0X2FjdGl2ZSI6MTcwODU1OTcwN30.CN-whQ0rWW8IuLPVTF7qprk4-GgtK1JSJqp3C8X-ytE

❶ The access token included in the request authorization header must adhere to the “Bearer” token format.

❷ Similarly, the access token set for the actix-identity::Identity login is also a “Bearer” token.

🦀 However, the access token sent to clients via the response header and the response cookie authorization is always a pure JSON Web Token.
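As a rough std-only sketch of the two wrapper functions described later in this post, following the "Bearer." prefix shown in the example above; the real implementations live in src/helper/jwt_utils.rs and may differ in detail:

```rust
// The prefix follows the "Bearer." format shown in the example above;
// this is an illustrative sketch, not the application's actual code.
const BEARER_PREFIX: &str = "Bearer.";

fn make_bearer_token(token: &str) -> String {
    format!("{}{}", BEARER_PREFIX, token)
}

// Returns the pure JSON Web Token if the value is a well-formed
// "Bearer" token, None otherwise.
fn strip_bearer_token(bearer: &str) -> Option<&str> {
    bearer.strip_prefix(BEARER_PREFIX)
}

fn main() {
    let bearer = make_bearer_token("eyJhbGciOi...");
    println!("{}", bearer);
    println!("{:?}", strip_bearer_token(&bearer));
}
```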

Project Layout

Below is the complete project layout.

— Please note, those marked with ★ are updated, and those marked with ☆ are new.

.
├── .env ★
├── Cargo.toml ★
├── cert
│   ├── cert-pass.pem
│   ├── key-pass-decrypted.pem
│   └── key-pass.pem
├── migrations
│   ├── mysql
│   │   └── migrations
│   │       ├── 20231128234321_emp_email_pwd.down.sql
│   │       └── 20231128234321_emp_email_pwd.up.sql
│   └── postgres
│       └── migrations
│           ├── 20231130023147_emp_email_pwd.down.sql
│           └── 20231130023147_emp_email_pwd.up.sql
├── README.md ★
├── src
│   ├── auth_handlers.rs ★
│   ├── auth_middleware.rs ★
│   ├── bh_libs
│   │   ├── api_status.rs
│   │   └── australian_date.rs
│   ├── bh_libs.rs
│   ├── config.rs ★
│   ├── database.rs
│   ├── handlers.rs
│   ├── helper
│   │   ├── app_utils.rs ★
│   │   ├── constants.rs ★
│   │   ├── endpoint.rs ★
│   │   ├── jwt_utils.rs ☆
│   │   └── messages.rs ★
│   ├── helper.rs ★
│   ├── lib.rs ★
│   ├── main.rs
│   ├── middleware.rs
│   └── models.rs ★
├── templates
│   ├── auth
│   │   ├── home.html
│   │   └── login.html
│   └── employees.html
└── tests
    ├── common.rs ★
    ├── test_auth_handlers.rs ★
    ├── test_handlers.rs ★
    └── test_jsonwebtoken.rs ☆

The Token Utility jwt_utils.rs and Test test_jsonwebtoken.rs Modules

The Token Utility src/helper/jwt_utils.rs Module

In the module src/helper/jwt_utils.rs, we implement all the JWT management code. The core of it largely repeats the code already discussed in the second example:

  • struct JWTPayload — represents the JWT payload, where the email field uniquely identifies the logged-in user.
  • JWTPayload implementation — implements some of the required functions and methods:
    • A function to create a new instance.
    • Methods to update the expiry field (exp) and the last_active field using seconds, minutes, and hours.
    • Four getter methods which return the values of the iat, email, exp, and last_active fields.

Additionally, there are two main functions:

  1. pub fn make_token — creates a new JWT from an email. The parameter secs_valid_for indicates how many seconds the token is valid for, and the parameter secret_key is used by the jsonwebtoken crate to encode the token. It creates an instance of struct JWTPayload, and then creates a token using this instance.
  2. pub fn decode_token — decodes a given token. If the token is valid and successfully decoded, it returns the token’s struct JWTPayload. Otherwise, it returns an ApiStatus which describes the error.

The other functions are convenience or wrapper functions:

  1. pub fn make_token_from_payload — creates a JWT from an instance of struct JWTPayload. It is a convenience function: we decode the current token, update the extracted payload, then call this function to create an updated token.
  2. pub fn make_bearer_token — a wrapper function that creates a “Bearer” token from a given token.
  3. pub fn decode_bearer_token — a wrapper function that decodes a “Bearer” token.

Please note also the unit test section in this module. There are sufficient tests to cover all functions and methods.

The documentation in the source code should be sufficient to aid in the reading of the code.
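The payload handling described above can be sketched with the standard library alone. The field names (email, iat, exp, last_active) follow this post; everything else is an assumption for illustration, and the real struct also derives the serde traits required by the jsonwebtoken crate:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

fn unix_now() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs()
}

// Illustrative stand-in for the struct in src/helper/jwt_utils.rs.
#[derive(Debug, Clone)]
struct JWTPayload {
    email: String,
    iat: u64,
    exp: u64,
    last_active: u64,
}

impl JWTPayload {
    fn new(email: &str, secs_valid_for: u64) -> Self {
        let now = unix_now();
        JWTPayload {
            email: email.to_string(),
            iat: now,
            exp: now + secs_valid_for,
            last_active: now,
        }
    }

    // Sliding expiry: each request refreshes last_active and pushes exp forward.
    fn update_expiry_secs(&mut self, secs: u64) -> &mut Self {
        let now = unix_now();
        self.last_active = now;
        self.exp = now + secs;
        self
    }

    fn email(&self) -> &str { &self.email }
    fn exp(&self) -> u64 { self.exp }
}

fn main() {
    let mut payload = JWTPayload::new("chirstian.koblick.10004@gmail.com", 1800);
    payload.update_expiry_secs(1800);
    println!("valid until {}", payload.exp());
}
```

The sliding-expiry behaviour is visible here: as long as update_expiry_secs keeps being called before exp passes, the payload never expires.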

The Test tests/test_jsonwebtoken.rs Module

We implement some integration tests for JWT management code. These tests are self-explanatory.

The Updated Login Process

In the current login process, at step 4, we note:

...
    // TO_DO: Work in progress -- future implementations will formalise access token.
    let access_token = &selected_login.email;

    // https://docs.rs/actix-identity/latest/actix_identity/
    // Attach a verified user identity to the active session
    Identity::login(&request.extensions(), String::from(access_token)).unwrap();
...	

This part of the login process handler pub async fn login(request: HttpRequest, app_state: web::Data<super::AppState>, body: Bytes) -> Either<impl Responder, HttpResponse> is updated to:

...
    let access_token = make_token(&selected_login.email, 
        app_state.cfg.jwt_secret_key.as_ref(), app_state.cfg.jwt_mins_valid_for * 60);

    // https://docs.rs/actix-identity/latest/actix_identity/
    // Attach a verified user identity to the active session
    Identity::login(&request.extensions(), String::from( make_bearer_token(&access_token) )).unwrap();
...	

Please note the call to make_bearer_token, which adheres to The “Bearer” Token Scheme.

This update would take care of the application server case. In the case of an API-like server or a service, users are required to include a valid access token in the request authorization header, as mentioned, so we don’t need to do anything.

The next task is to update the request authentication process. This update occurs in the src/auth_middleware.rs and the src/lib.rs modules.

The Updated Request Authentication Process

The updated request authentication process involves changes to both the src/auth_middleware.rs and src/lib.rs modules.

This section, How the Request Authentication Process Works, describes the current process.

Code Updated in the src/auth_middleware.rs Module

Please recall that the src/auth_middleware.rs module serves as the request authentication middleware. We will make some substantial updates within this module.

Although the code has sufficient documentation, we will discuss the updates in the following sections.

⓵ The module documentation has been updated to describe how the request authentication process works with JWT. Please refer to the documentation section How This Middleware Works for more details.

⓶ New struct TokenStatus:

struct TokenStatus {
    is_logged_in: bool,
    payload: Option<JWTPayload>,
    api_status: Option<ApiStatus>
}

The struct TokenStatus represents the status of the access token for the current request.

⓷ The function fn verify_valid_access_token(request: &ServiceRequest) -> TokenStatus has been completely rewritten, although its purpose remains the same. It checks if the token is present and, if so, decodes it.

The return value of this function is struct TokenStatus, whose fields are set based on the rules discussed previously.
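A std-only sketch of those rules, with plain Strings standing in for JWTPayload and ApiStatus (assumptions for illustration; the real function takes a &ServiceRequest and calls decode_bearer_token from src/helper/jwt_utils.rs):

```rust
#[derive(Debug)]
struct TokenStatus {
    is_logged_in: bool,
    payload: Option<String>,    // stand-in for JWTPayload
    api_status: Option<String>, // stand-in for ApiStatus
}

// `decode` stands in for decoding the "Bearer" token.
fn token_status(
    token: Option<&str>,
    decode: impl Fn(&str) -> Result<String, String>,
) -> TokenStatus {
    match token {
        // No token: an unauthenticated session, which is not an error in itself.
        None => TokenStatus { is_logged_in: false, payload: None, api_status: None },
        Some(t) => match decode(t) {
            // Token present and successfully decoded.
            Ok(payload) => TokenStatus {
                is_logged_in: true,
                payload: Some(payload),
                api_status: None,
            },
            // Token present but invalid or expired.
            Err(err) => TokenStatus {
                is_logged_in: true,
                payload: None,
                api_status: Some(err),
            },
        },
    }
}

fn main() {
    let ok = |t: &str| -> Result<String, String> { Ok(t.to_string()) };
    println!("{:?}", token_status(Some("abc"), ok));
}
```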

⓸ The new helper function fn update_and_set_updated_token(request: &ServiceRequest, token_status: TokenStatus) is called when there is a token and the token is successfully decoded.

It uses the JWTPayload instance in the token_status parameter to create the updated access token. Then, it:

  1. Replaces the current actix-identity::Identity login with the new updated token, as discussed earlier.
  2. Attaches the updated token to dev::ServiceRequest’s dev::Extensions by calling fn extensions_mut(&self) -> RefMut<'_, Extensions>.

    The next ad hoc middleware, discussed in the next section, consumes this extension.

⓹ The new closure, let unauthorised_token = |req: ServiceRequest, api_status: ApiStatus| -> Self::Future, calls the Unauthorized() method on HttpResponse to return a JSON serialisation of ApiStatus.

Note the calls to remove the server-side per-request cookies redirect-message and original-content-type.

⓺ Update the fn call(&self, request: ServiceRequest) -> Self::Future function. All groundwork has been completed. The updates to this method are fairly straightforward:

  1. Update the call to fn verify_valid_access_token(request: &ServiceRequest) -> TokenStatus; the return value is now struct TokenStatus.
  2. If the token is in error, call the closure unauthorised_token() to return the error response. The request is then completed.
  3. If the request is from an authenticated session, meaning we have a token, and the token has been decoded successfully, we make an additional call to the new helper function fn update_and_set_updated_token(request: &ServiceRequest, token_status: TokenStatus), which has been described in the previous section.

The core logic of this method remains unchanged.

Code Updated in the src/lib.rs Module

As mentioned previously, if a valid token is present, an updated token is generated from the current token’s payload every time a request occurs. This updated access token is then sent to the client via both the response header and the response cookie authorization.

This section describes how the updated token is attached to the request extension so that the next ad hoc middleware can pick it up and send it to the clients.

Here is the updated "next ad hoc middleware" in src/lib.rs. Its functionality is straightforward: it queries the current dev::ServiceRequest’s dev::Extensions for a String representing the updated token. If found, it sets the ServiceResponse authorization header and cookie with this updated token.

Afterward, it forwards the response. Since it is currently the last middleware in the call stack, the response will be sent directly to the client, completing the request.
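Handing a value from one middleware to the next relies on dev::Extensions, which is essentially a type-keyed map. A minimal std-only analogue of the idea (a sketch, not actix-web's actual implementation):

```rust
use std::any::{Any, TypeId};
use std::collections::HashMap;

// A toy type-keyed map: one value per Rust type, retrieved by type.
struct Extensions(HashMap<TypeId, Box<dyn Any>>);

impl Extensions {
    fn new() -> Self {
        Extensions(HashMap::new())
    }

    // The auth middleware would insert the updated token String here.
    fn insert<T: 'static>(&mut self, value: T) {
        self.0.insert(TypeId::of::<T>(), Box::new(value));
    }

    // The next ad hoc middleware queries for a String and, if present,
    // sets the authorization response header and cookie.
    fn get<T: 'static>(&self) -> Option<&T> {
        self.0
            .get(&TypeId::of::<T>())
            .and_then(|boxed| boxed.downcast_ref::<T>())
    }
}

fn main() {
    let mut ext = Extensions::new();
    ext.insert(String::from("updated-token"));
    println!("{:?}", ext.get::<String>());
}
```

Because the key is the type itself, the two middlewares only need to agree on the type (here String) to exchange the token.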

JWT and Logout

Due to the issues outlined in the two problem sections above, we were unable to effectively implement the logout functionality in the application. This will remain unresolved until we implement the proposed solutions and integrate blacklisting.

— For the time being, we will retain the current logout process unchanged.

Once blacklisting is implemented, the request authentication process will need to validate the access token against the blacklist table. If the token is found in the blacklist, it will be considered invalid.

Updating Integration Tests

There is a new integration test module, as discussed in the section The Test tests/test_jsonwebtoken.rs Module. No new integration tests have been added to the existing modules.

Some common test code has been updated as a result of implementing JSON Web Token.

⓵ There are several updates in module tests/common.rs:

  1. Function pub fn mock_access_token(&self, secs_valid_for: u64) -> String now returns a correctly formatted “Bearer” token. Please note the new parameter secs_valid_for.
  2. New function pub fn jwt_secret_key() -> String
  3. New function pub fn assert_token_email(token: &str, email: &str). It decodes the parameter token, which is expected to always succeed, then asserts that the token JWTPayload’s email value equals the parameter email.
  4. Rewrote pub fn assert_access_token_in_header(response: &reqwest::Response, email: &str) and pub fn assert_access_token_in_cookie(response: &reqwest::Response, email: &str).
  5. Updated pub async fn assert_json_successful_login(response: reqwest::Response, email: &str).

⓶ Some minor changes in both the tests/test_handlers.rs and the tests/test_auth_handlers.rs modules:

  1. Call the function pub fn mock_access_token(&self, secs_valid_for: u64) -> String with the new parameter secs_valid_for.
  2. Other updates as a result of the updates in the tests/common.rs module.

Concluding Remarks

It has been an interesting process for me as I delved into the world of actix-web ad hoc middleware. While the code may seem simple at first glance, I encountered some problems along the way and sought assistance to overcome them.

I anticipated the problems, as described in the two problem sections above, before diving into the actual coding process. Despite the hurdles, I proceeded with the implementation because I wanted to learn how to set a custom header for all routes before their final response is sent to clients: that is the essence of ad hoc middleware.

In a future post, I plan to implement the proposed solutions and explore the concept of blacklisting.

I hope you find this post informative and helpful. Thank you for reading. And stay safe, as always.

✿✿✿

🦀 Index of the Complete Series.

Rust: actix-web global extractor error handlers.

Continuing with our actix-web learning application, we implement global extractor error handlers for both application/json and application/x-www-form-urlencoded data. This enhances the robustness of the code. Subsequently, we refactor the login data extraction process to leverage the global extractor error handlers.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.9.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the following eight previous posts:

  1. Rust web application: MySQL server, sqlx, actix-web and tera.
  2. Rust: learning actix-web middleware 01.
  3. Rust: retrofit integration tests to an existing actix-web application.
  4. Rust: adding actix-session and actix-identity to an existing actix-web application.
  5. Rust: actix-web endpoints which accept both application/x-www-form-urlencoded and application/json content types.
  6. Rust: simple actix-web email-password login and request authentication using middleware.
  7. Rust: actix-web get SSL/HTTPS for localhost.
  8. Rust: actix-web CORS, Cookies and AJAX calls.

The code we’re developing in this post is a continuation of the code from the eighth post above. 🚀 To get the code of this eighth post, please use the following command:

git clone -b v0.8.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.8.0.

❶ We are not adding any new files to the project; it remains the same as in the seventh post. We are only making changes to some modules.

.
├── Cargo.toml ★
├── README.md ★
├── src
│   ├── auth_handlers.rs ★
│   ├── handlers.rs ★
│   ├── helper
│   │   ├── app_utils.rs ★
│   │   ├── endpoint.rs ★
│   │   └── messages.rs ★
│   ├── helper.rs ★
│   └── lib.rs ★
└── tests
    ├── test_auth_handlers.rs ★
    └── test_handlers.rs ★

— Please note, those marked with ★ are updated, and those marked with ☆ are new.

❷ Currently, the application does not handle extraction errors for both application/json and application/x-www-form-urlencoded data in data-related routes.

🚀 As a reminder, we have the following existing data-related routes. Briefly:

  • Route https://0.0.0.0:5000/data/employees accepts application/json. For example {"last_name": "%chi", "first_name": "%ak"}.
  • Route https://0.0.0.0:5000/ui/employees accepts application/x-www-form-urlencoded. For example last_name=%chi&first_name=%ak.

Unlike the data-related routes, the login route https://0.0.0.0:5000/api/login currently implements a custom extractor that also handles extraction errors. Please refer to the sections Implementations of Routes /ui/login and /api/login and How the Email-Password Login Process Works in previous posts for more details. 💥 We will refactor this implementation to eliminate the custom extractor and fully leverage the global extractor error handlers that we are going to implement.

Let’s demonstrate some unhandled extraction errors for both content types.

🚀 Please note that the ajax_test.html page is used in the examples below.

application/json content type. First, we make an invalid submission with empty data. Then, we submit data with an invalid field name:

The above screenshots indicate that there is some implicit default extraction error handling in place: the response status code is 400 for BAD REQUEST, and the response text contains the actual extraction error message.

💥 However, this behavior is not consistent with the existing implementation for the https://0.0.0.0:5000/api/login route, where an extraction error always results in a JSON serialisation of ApiStatus with a code of 400 for BAD REQUEST, and the message containing the exact extraction error. For more details, refer to the current implementation of pub fn extract_employee_login(body: &Bytes, content_type: &str) -> Result<EmployeeLogin, ApiStatus>. It’s worth noting that, as mentioned earlier, we are also refactoring this custom extractor while retaining its current handling of extraction errors.

application/x-www-form-urlencoded content type. Similar to the previous example, we also submit two invalid requests: one with empty data and another with data containing an invalid field name:

❸ Implementing “global extractor error handlers” for application/json and application/x-www-form-urlencoded data.

This involves configuring extractor configurations provided by the actix-web crate, namely JsonConfig and FormConfig, respectively. We can define custom error handlers for each content type using their error_handler(...) method.

In our context, we refer to these custom error handlers as “global extractor error handlers”.

Based on the documentation, we implement the functions fn json_config() -> web::JsonConfig and fn form_config() -> web::FormConfig, and then register them according to the official example.

The key part is the error_handler(...) function within both extractor configurations:

...
        .error_handler(|err, _req| {
            let err_str: String = err.to_string();
            error::InternalError::from_response(err, 
                make_api_status_response(StatusCode::BAD_REQUEST, &err_str, None)).into()
        })
...

Here, err_str represents the actual extraction error message.

We utilise the function pub fn make_api_status_response( status_code: StatusCode, message: &str, session_id: Option<String>) -> HttpResponse to construct a response, which is a JSON serialisation of ApiStatus.

We can verify the effectiveness of the global extractor error handlers by repeating the previous two examples.

application/json content type:

The screenshots confirm that we receive the expected response, in contrast to the example prior to refactoring.

application/x-www-form-urlencoded content type:

We get the expected response. This is the example before refactoring.

⓷ Let’s try another example via Postman:

When an extraction error occurs, the response is a JSON serialisation of ApiStatus. When a request to route https://0.0.0.0:5000/ui/employees is successful, the response is HTML. (As a reminder, we need to set the request authorization header to something, for example, chirstian.koblick.10004@gmail.com.)

❹ Integration tests for data-related routes.

To ensure that the global extractor error handlers function correctly, we need tests to verify their behavior.

In tests/test_handlers.rs, we’ve implemented four failed extraction tests, each ending with _error_empty and _error_missing_field.

These tests closely resemble the examples shown previously. The code for the new tests is similar to existing ones, so we won’t walk through it as they are self-explanatory.

💥 In the new tests, take note of the error messages: "Content type error" and "Content type error."!

❺ Refactoring the login data extraction process.

In the fifth post, Rust: actix-web endpoints which accept both application/x-www-form-urlencoded and application/json content types, we implemented the custom extractor function pub fn extract_employee_login(body: &Bytes, content_type: &str) -> Result<EmployeeLogin, ApiStatus> which accepts both application/x-www-form-urlencoded and application/json content types, and deserialises the byte stream to the EmployeeLogin struct.

This function is currently functional. As mentioned previously, we intend to refactor the code while retaining its extraction error handling behaviors, which are now available automatically due to the introduction of global extractor error handlers.

We are eliminating this helper function and instead using the enum Either, which provides a mechanism for trying two extractors: a primary and a fallback.

In src/auth_handlers.rs, the login function, the endpoint handler for route /api/login, is updated as follows:

#[post("/login")]
pub async fn login(
    request: HttpRequest,
    app_state: web::Data<super::AppState>,
    body: Either<web::Json<EmployeeLogin>, web::Form<EmployeeLogin>>
) -> HttpResponse {
    let submitted_login = match body {
        Either::Left(json) => json.into_inner(),
        Either::Right(form) => form.into_inner(),
    };
...	

The last parameter and the return type have changed. The parameter body is now an enum Either, which is the focal point of this refactoring. The extraction process is more elegant, and we are taking advantage of a built-in feature, which should be well-tested.

The global extractor error handlers enforce the same validations on the submitted data as the previous custom extractor helper function.

Please note the previous return type of this function:

#[post("/login")]
pub async fn login(
    request: HttpRequest,
    app_state: web::Data<super::AppState>,
    body: Bytes
) -> Either<impl Responder, HttpResponse> {
...

There are other minor changes throughout the function, but they are self-explanatory.

Let’s observe the refactored login code in action.

application/json content type. Two invalid requests and one valid request:

application/x-www-form-urlencoded content type. Two invalid requests and one valid request:

application/x-www-form-urlencoded content type. Using Postman. Two invalid requests and one valid request:

application/x-www-form-urlencoded content type. Using the application’s login page, first log in with an invalid email, then log in again with a valid email and password.

❻ Integration tests for invalid login data.

These tests should have been written earlier, immediately after completing the login functionalities.

In the test module, tests/test_auth_handlers.rs, we’ve added four failed extraction tests, denoted by functions ending with _error_empty and _error_missing_field.

❼ We have reached the conclusion of this post. I don’t feel that implementing the function extract_employee_login was a waste of time. Through this process, I’ve gained valuable insights into Rust.

As for the next post for this project, I’m not yet sure what it will entail 😂… There are still several functionalities I would like to implement. I’ll let my intuition guide me in deciding the topic for the next post.

Thank you for reading, and I hope you find the information in this post useful. Stay safe, as always.

✿✿✿

🦀 Index of the Complete Series.

Rust: actix-web CORS, Cookies and AJAX calls.

Continuing with our actix-web learning application, we will discuss proper AJAX calls to ensure reliable functionality of CORS and session cookies. This also addresses issue ❷ raised in a previous post.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.8.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the following seven previous posts:

  1. Rust web application: MySQL server, sqlx, actix-web and tera.
  2. Rust: learning actix-web middleware 01.
  3. Rust: retrofit integration tests to an existing actix-web application.
  4. Rust: adding actix-session and actix-identity to an existing actix-web application.
  5. Rust: actix-web endpoints which accept both application/x-www-form-urlencoded and application/json content types.
  6. Rust: simple actix-web email-password login and request authentication using middleware.
  7. Rust: actix-web get SSL/HTTPS for localhost.

The code we’re developing in this post is a continuation of the code from the seventh post above. 🚀 To get the code of this seventh post, please use the following command:

git clone -b v0.7.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.7.0.

❶ We are not adding any new files to the project; it remains the same as in the seventh post. We are only making changes to a few modules.

.
├── ...
├── README.md ★
├── src
│   ├── auth_handlers.rs ★
│   ├── auth_middleware.rs ★
│   └── lib.rs ★
├── ...

— Please note, those marked with ★ are updated, and those marked with ☆ are new.

❷ Session cookies.

I was working on CORS, session cookies, and AJAX calls when I realised that we couldn’t get session cookies to work consistently across domains for Firefox and other Chromium browsers without HTTPS. This realisation prompted the focus on enabling the application to run under HTTPS, as discussed in the seventh post: Rust: actix-web get SSL/HTTPS for localhost.

💥 However, despite running the application under HTTPS, we later discovered that it still didn’t fully resolve the cookie issue. This is because Chromium browsers are in the process of phasing out third-party cookies.

❸ Cross-Origin Resource Sharing (CORS) allowed origin.

While studying and experimenting with this issue, I made an observation regarding the application’s allowed origin. The allowed origin is set to http://localhost, as per configuration.

During my experiments, I removed the http:// scheme, leaving only localhost as the allowed origin:

ALLOWED_ORIGIN=localhost

CORS simply rejects the requests:

● Please refer to the following MDN Web Docs page for explanations regarding CORS header ‘Access-Control-Allow-Origin’ missing.

● Additionally, I found the Wikipedia article on Cross-origin resource sharing to be informative.

As for why I made that change, I can’t recall the exact reason. It may have been due to some confusion while reading the documentation and examining examples from other sources.

According to the same-origin policy, an origin is defined by its scheme, host, and port. You can find detailed rules for origin determination in the Wikipedia article on the Same-origin policy.

Two resources are considered to be of the same origin if and only if all these values are exactly the same.

https://en.wikipedia.org/wiki/Same-origin_policy#Origin_determination_rules

This would likely explain why dropping the scheme resulted in all requests being rejected.
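The rule quoted above can be stated as a tuple comparison. A small illustrative sketch; the types are hypothetical, not from the application:

```rust
// Two origins match only if scheme, host and port are all equal.
#[derive(Debug, PartialEq, Eq)]
struct Origin<'a> {
    scheme: &'a str,
    host: &'a str,
    port: u16,
}

fn same_origin(a: &Origin, b: &Origin) -> bool {
    a == b
}

fn main() {
    let allowed = Origin { scheme: "http", host: "localhost", port: 80 };
    let https = Origin { scheme: "https", host: "localhost", port: 80 };
    // A different scheme alone is enough for a mismatch; a bare
    // "localhost" with no scheme can never form a valid origin at all.
    println!("{}", same_origin(&allowed, &https));
}
```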

❹ Cookies and AJAX calls.

In the sixth post, we raised issue ❷. I further noted in that section:

It seems logical, but it does not work when we log in using JSON with either an invalid email or password. The client tools simply report that the request could not be completed. I haven’t been able to work out why yet.

By “...log in using JSON...” I mean AJAX calls. I do apologise for not being clear earlier.

— Recall that in this scenario, the application acts as an API-like server or a service.

After some study and experimentation, I have been able to determine the reasons:

  1. AJAX requests must have both xhrFields.withCredentials and crossDomain set to true.
  2. How session cookies are created.

We will discuss these in detail in the following sections.

⓵ AJAX and cross domain.

I use the HTML page ajax_test.html to test the application with AJAX calls. In the sixth post, I used the function runAjaxEx(…), which caused session cookies not to work properly when calls were cross-domain. Now, I am using the function runAjaxCrossDomain(…):

...
            // https://stackoverflow.com/questions/76956593/how-to-persist-data-across-routes-using-actix-session-and-redisactorsessionstore
            xhrFields: {
                withCredentials: true
            },
            crossDomain: true,
...

Refer to the following MDN Web Docs page on XMLHttpRequest: withCredentials property for explanations of xhrFields.withCredentials.

💥 Please note that I am still unclear why this is considered a cross-domain case. I am accessing ajax_test.html via localhost, while the application is hosted at localhost:5000. In the correct response screenshot below, without the cross-domain setting, the response would be the login HTML page without the Please check login detail. message because cookies are simply rejected:

We have successfully refactored the fn first_stage_login_error_response(request: &HttpRequest, message: &str) -> HttpResponse function, as discussed in the aforementioned issue ❷. Additionally, we’ve included a call to create the server-side per-request original-content-type cookie:

...
        .cookie(build_original_content_type_cookie(&request, request.content_type()))
...

⓶ How session cookies are created.

As mentioned earlier, without HTTPS, cookies do not function properly; they are rejected by Chromium browsers and Firefox.

In this post, the cookie implementations are as follows:

  1. Scheme: HTTPS://
  2. Secure: true
  3. SameSite: None

Let’s examine some examples where cookies are rejected.

1. Scheme: HTTP://, Secure: false, and SameSite: Strict.

When accessing ajax_test.html on localhost, and the application is hosted on the Ubuntu 22.10 machine at 192.168.0.16:5000, the server-side per-request cookies are rejected:

The expected output is:

{
	"code": 401,
	"message": "Please check login detail.",
	"session_id": null
}

The warnings in the above screenshot are:

Some cookies are misusing the recommended “SameSite“ attribute

Cookie “original-content-type” has been rejected because it is in a cross-site context and its “SameSite” is “Lax” or “Strict”.

Cookie “redirect-message” has been rejected because it is in a cross-site context and its “SameSite” is “Lax” or “Strict”.

(These two warnings repeat once per affected request.)

The warnings indicate that both Lax and Strict would result in these cookies being rejected. The only remaining option is None. For more information, please refer to the following MDN Web Docs article on Set-Cookie SameSite.

2. Scheme: HTTP://, Secure: false, and SameSite: None.

The cookies are accepted, but there are still warnings regarding the server-side per-request cookies:

The warnings are:

Some cookies are misusing the recommended “SameSite“ attribute

Cookie “original-content-type” will be soon rejected because it has the “SameSite” attribute set to “None” without the “secure” attribute. To know more about the “SameSite“ attribute, read https://developer.mozilla.org/docs/Web/HTTP/Headers/Set-Cookie/SameSite

Cookie “redirect-message” will be soon rejected because it has the “SameSite” attribute set to “None” without the “secure” attribute. To know more about the “SameSite“ attribute, read https://developer.mozilla.org/docs/Web/HTTP/Headers/Set-Cookie/SameSite

(These two warnings repeat once per affected request.)

The application also maintains an application-wide publicly available cookie named authorization, discussed toward the end of this section. This cookie stores the access token after a successful login. Based on the warnings above, we would expect to receive the same warning for this cookie. And indeed, we do:

Generally, this is not a problem: the access token is also included in the response’s authorization header, so clients can obtain it from there instead.

3. Scheme: HTTP://, Secure: false, and SameSite: None — the same settings as in 2.

Chromium browsers, including Opera, appear to reject cookies even when not accessed cross-domain. For instance, when logging in with an invalid email, the login page is displayed without the Please check login detail. message.

Now that the application can run under HTTPS://, let’s set Secure to true and SameSite to None and observe how browsers handle cookies.

Scheme: HTTPS://, Secure: true, and SameSite: None. We need to make two changes to the cookie creation code.

First, for server-side per-request cookies, we’ve already made changes since the seventh post. Please refer to src/helper/app_utils.rs:

...
pub fn build_cookie<'a>(
    _request: &'a HttpRequest,
    name: &'a str,
    value: &'a str,
    server_only: bool,
    removal: bool
) -> Cookie<'a> {
    ...

    let mut cookie = Cookie::build(name, value)
        // .domain(String::from(parts.collect::<Vec<&str>>()[0]))
        .path("/")
        .secure(true)
        .http_only(server_only)
        .same_site(SameSite::None)
        .finish();
...

I’ve also removed the domain setting, as it doesn’t seem to make any difference; we just let it use the default value.

These changes will also apply to the authorization cookie, discussed toward the end of this section.

Secondly, for the third-party secured cookie id, we update the src/lib.rs module as follows:

...
            .wrap(SessionMiddleware::builder(
                    redis_store.clone(),
                    secret_key.clone()
                )
                .cookie_secure(true)
                .cookie_same_site(SameSite::None)
                .build(),
            )
...

To recap, all cookies now have Secure set to true, and SameSite set to None. 💥 While this currently satisfies Chromium browsers, it comes with a new warning. There’s no assurance that these cookies will continue to be accepted in the future, as illustrated by the Chrome example below.
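The constraint that drove these settings can be illustrated with a tiny hypothetical helper (not the application's cookie code): rendering a Set-Cookie header value and rejecting the one attribute combination modern browsers disallow, SameSite=None without Secure.

```rust
#[derive(PartialEq)]
enum SameSite {
    Strict,
    Lax,
    None,
}

/// Render a Set-Cookie header value. Returns an error for the combination
/// browsers reject: SameSite=None without Secure. A simplified sketch of
/// the browser rule, not the application's actual cookie helpers.
fn set_cookie(name: &str, value: &str, secure: bool, same_site: SameSite) -> Result<String, String> {
    if same_site == SameSite::None && !secure {
        return Err("SameSite=None requires the Secure attribute".to_string());
    }
    let ss = match same_site {
        SameSite::Strict => "Strict",
        SameSite::Lax => "Lax",
        SameSite::None => "None",
    };
    let mut header = format!("{name}={value}; Path=/; SameSite={ss}");
    if secure {
        header.push_str("; Secure");
    }
    Ok(header)
}

fn main() {
    // The combination this post settles on: Secure plus SameSite=None.
    let ok = set_cookie("original-content-type", "application/json", true, SameSite::None).unwrap();
    assert_eq!(ok, "original-content-type=application/json; Path=/; SameSite=None; Secure");
    // The combination browsers warn about and are phasing out.
    assert!(set_cookie("redirect-message", "msg", false, SameSite::None).is_err());
    println!("ok");
}
```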

Firefox does not show any cookie warnings, so we will not include any screenshots for it.

Opera accepts the server-side per-request cookies. Logging in with an invalid email, we get the expected response:

Chrome also accepts the cookies, but shows a new warning:

The warning is:

Setting cookie in cross-site context will be blocked in future Chrome versions

Cookies with the SameSite=None; Secure and not Partitioned attributes that operate in cross-site contexts are third-party cookies. In future Chrome versions, setting third-party cookies will be blocked. This behavior protects user data from cross-site tracking.

Please refer to the article linked to learn more about preparing your site to avoid potential breakage.

We will briefly discuss this warning in the next section.

Chromium is in the process of phasing out third-party cookies!

The article Chrome links to in the warning is:

🚀 Prepare for third-party cookie restrictions

I have not read everything yet, but it does look very comprehensive, listing a lot of alternatives to third-party cookies.

I can’t remember how, but I came across this MDN Web Docs article titled Cookies Having Independent Partitioned State (CHIPS) before encountering the Chrome article mentioned above. It explains the Partitioned cookie attribute. Subsequently, I reached out to the authors of the relevant crates to inquire about this topic:

It appears that they are going to support this Partitioned cookie. We’ll just have to wait and see how it pans out.

I haven’t delved into cookies for a while, and there have been changes. I feel up-to-date with cookies now! 😂 It has been an interesting issue to study. I hope you find the information in this post useful. Thank you for reading. And stay safe, as always.

✿✿✿


🦀 Index of the Complete Series.

Rust: actix-web get SSL/HTTPS for localhost.

We are going to enable our actix-web learning application to run under HTTPS. As a result, we need to do some minor refactoring to existing integration tests. We also move and rename an existing module for better code organisation.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.7.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the following six previous posts:

  1. Rust web application: MySQL server, sqlx, actix-web and tera.
  2. Rust: learning actix-web middleware 01.
  3. Rust: retrofit integration tests to an existing actix-web application.
  4. Rust: adding actix-session and actix-identity to an existing actix-web application.
  5. Rust: actix-web endpoints which accept both application/x-www-form-urlencoded and application/json content types.
  6. Rust: simple actix-web email-password login and request authentication using middleware.

The code we’re developing in this post is a continuation of the code from the sixth post above. 🚀 To get the code of this sixth post, please use the following command:

git clone -b v0.6.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.6.0.

Table of contents

❶ To run under HTTPS. That is:

https://localhost:5000/ui/login
https://192.168.0.16:5000/ui/login

❷ Project Layout.

This post introduces a self-signed encrypted private key file and a certificate file. The updated directory layout for the project is listed below.

— Please note, those marked with ★ are updated, and those marked with ☆ are new.

.
├── Cargo.toml ★
├── cert
│ ├── cert-pass.pem ☆ -- Certificate
│ └── key-pass.pem ☆ -- Self-signed encrypted private key
├── .env
├── migrations
│ ├── mysql
│ │ └── migrations
│ │ ├── 20231128234321_emp_email_pwd.down.sql
│ │ └── 20231128234321_emp_email_pwd.up.sql
│ └── postgres
│ └── migrations
│ ├── 20231130023147_emp_email_pwd.down.sql
│ └── 20231130023147_emp_email_pwd.up.sql
├── README.md ★
├── src
│ ├── auth_handlers.rs
│ ├── auth_middleware.rs
│ ├── bh_libs
│ │ ├── api_status.rs
│ │ └── australian_date.rs ★
│ ├── bh_libs.rs ★
│ ├── config.rs
│ ├── database.rs
│ ├── handlers.rs
│ ├── helper
│ │ ├── app_utils.rs ★
│ │ ├── constants.rs
│ │ ├── endpoint.rs
│ │ └── messages.rs
│ ├── helper.rs
│ ├── lib.rs ★
│ ├── main.rs
│ ├── middleware.rs
│ └── models.rs ★
├── templates
│ ├── auth
│ │ ├── home.html
│ │ └── login.html
│ └── employees.html
└── tests
├── common.rs ★
├── test_auth_handlers.rs ★
└── test_handlers.rs ★

❸ In this post, we are using the OpenSSL Cryptography and SSL/TLS Toolkit to generate the self-signed encrypted private key and the certificate files.

⓵ We have previously discussed its installation on both Windows 10 Pro and Ubuntu 22.10 in this section of another post.

⓶ 💥 On Windows 10 Pro, I have observed that, once we include the openssl crate, we should set the environment variable OPENSSL_DIR at the system level, otherwise the Rust Analyzer Visual Studio Code plug-in would run into trouble.

The environment variable OPENSSL_DIR indicates where OpenSSL has been installed, for example, C:\Program Files\OpenSSL-Win64. Following are the steps to access the Windows 10 Pro environment variables dialog.

Right click on This PC ➜ Properties ➜ Advanced system settings (right hand side) ➜ Advanced tab ➜ Environment Variables… button ➜ under System variables ➜ New… ➜ enter variable name and value in the dialog ➜ OK button.

The screenshot below is a brief visual representation of the above steps, including the environment variable OPENSSL_DIR in place:

We might need to restart Visual Studio Code to get the new setting recognised.

❹ Generate the self-signed encrypted private key and the certificate files using the OpenSSL Toolkit.

The OpenSSL command to generate the files will prompt a series of questions. One important question is the Common Name, which is the server name or FQDN where the certificate is going to be used. If we are not yet familiar with this process, the FQDN (Fully Qualified Domain Name): What It Is, Examples, and More article would be essential reading, in my humble opinion.

I did seek help working on this issue; please see Actix-web OpenSSL SSL/HTTPS for Localhost, is it possible please? I was pointed to examples/https-tls/openssl/, which is my primary source of reference for getting this learning application to run under HTTPS.

The command I chose to use is:

$ openssl req -x509 -newkey rsa:4096 -keyout key-pass.pem -out cert-pass.pem -sha256 -days 365

Be prepared, we will be asked the following questions:

Enter PEM pass phrase: 

Country Name (2 letter code) [AU]:
State or Province Name (full name) [Some-State]:
Locality Name (eg, city) []: Melbourne
Organization Name (eg, company) [Internet Widgits Pty Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (e.g. server FQDN or YOUR name) []:
Email Address []:

Please enter the following 'extra' attributes
to be sent with your certificate request

A challenge password []:
An optional company name []:

Both key-pass.pem and cert-pass.pem are in the cert/ sub-directory as seen in the Project Layout section.

💥 Please note I also use these two files on Windows 10 Pro to run the application. It works; I am not sure why yet. I need to keep an eye out for this.

❺ Code refactoring to enable HTTPS.

We are also taking the code from examples/https-tls/openssl/. In src/lib.rs, we add two private functions, fn load_encrypted_private_key() -> PKey<Private> and fn ssl_builder() -> SslAcceptorBuilder. They are basically the code copied from the above example, rearranged into two separate functions.

And for the actual HttpServer object, we call method listen_openssl(…) instead of method listen(…):

...
    .listen_openssl(listener, ssl_builder())?
...	

I have tested with the latest version of the following browsers: Firefox, Chrome, Edge, Opera, Brave and Vivaldi, for both:

https://localhost:5000/ui/login
https://192.168.0.16:5000/ui/login

We might get a warning of potential security risk... For example, see the Firefox warning in the below screenshot:

I just ignore the warning and choose to go ahead. Even though https:// works, all mentioned browsers state that the connection is not secure. Please see Firefox, Chrome and Opera sample screenshots below:

❻ We have to make changes to both the integration tests’ common code and the actual test code.

⓵ It’s quite obvious that we should access the routes via HTTPS. The first change would be pub async fn spawn_app() -> TestApp in module tests/common.rs. We should set the scheme of app_url to https://:

...
    TestApp {
        app_url: format!("https://127.0.0.1:{}", port)
    }
}	

I did run the integration tests after making this change. They failed. Based on the error messages, it seems that reqwest::Client should “have” the certificate as well (?).

Looking through the reqwest crate documentation, pub fn add_root_certificate(self, cert: Certificate) -> ClientBuilder seems like a good candidate…

Based on the example given in reqwest::tls::Certificate, I came up with pub fn load_certificate() -> Certificate and pub fn reqwest_client() -> reqwest::Client.

I have tried to document all my observations while developing these two helper functions. They are short and simple; I think the inline documentation explains the code sufficiently.

— Initially, reqwest_client() did not include .danger_accept_invalid_certs(true), resulting in a certificate error. The solution, provided in the Stack Overflow thread titled How to resolve a Rust Reqwest Error: Invalid Certificate, is to add .danger_accept_invalid_certs(true), which appears to resolve the issue.

💥 Based on all the evidence presented so far, including the connection not secure warnings reported by browsers and the need to call .danger_accept_invalid_certs(true) when creating a reqwest::Client instance, there may still be an issue with this implementation. Or is it common for a self-signed certificate, which is not issued by a trusted certificate authority, to encounter such problems? However, having the application run under https:// addresses the issues I have had with cookies. For now, I will leave it as is. We will discuss cookies in another post.

⓶ In both integration test modules, tests/test_handlers.rs and tests/test_auth_handlers.rs, we use the pub fn reqwest_client() -> reqwest::Client function to create instances of reqwest::Client for testing purposes, instead of creating instances directly in each test method.

❼ The final task of this post involves moving src/utils.rs to src/bh_libs/australian_date.rs, as it is a generic module, even though it depends on other third-party crates. It is possible that this module will be moved elsewhere again.

The module src/bh_libs/australian_date.rs is generic enough to be used as-is in other projects.

As a result, the src/models.rs module is updated.

❽ We’ve reached the end of this post. I’d like to mention that I also followed the tutorial How to Get SSL/HTTPS for Localhost. I completed it successfully on Ubuntu 22.10, but browsers still warn about the connection not being secure. Perhaps this is to be expected with a self-signed certificate?

Overall, it’s been an interesting exercise. I hope you find the information in this post useful. Thank you for reading. And stay safe, as always.

✿✿✿


🦀 Index of the Complete Series.

Rust: simple actix-web email-password login and request authentication using middleware.

For our learning actix-web application, we are now adding two new features. ⓵ A simple email-password login with no session expiry. ⓶ A new middleware that manages request authentication using an access token “generated” by the login process. All five existing routes are now protected by this middleware: they can only be accessed if the request has a valid access token. With these two new features added, this application acts as both an application server and an API-like server or a service.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.6.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the following five previous posts:

  1. Rust web application: MySQL server, sqlx, actix-web and tera.
  2. Rust: learning actix-web middleware 01.
  3. Rust: retrofit integration tests to an existing actix-web application.
  4. Rust: adding actix-session and actix-identity to an existing actix-web application.
  5. Rust: actix-web endpoints which accept both application/x-www-form-urlencoded and application/json content types.

The code we’re developing in this post is a continuation of the code from the fifth post above. 🚀 To get the code of this fifth post, please use the following command:

git clone -b v0.5.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.5.0.

Table of contents

Some Terms and Phrases Definition

Let’s clarify the meanings of some glossary terms to facilitate the understanding of this post.

● An application server — the application functions as a website server, serving interactive HTML pages and managing states associated with client web sessions.

● An API-like server or a service — the application operates as a data provider, verifying the validity of client requests. Specifically, it checks for a valid access token included in the request authorization header. If the requests are valid, it proceeds to serve them.

● An access token — in this revision of the code, any non-blank string is considered a valid access token! Please note that this is a work in progress; currently, login emails are used as access tokens.

As such, we acknowledge that this so-called access token is relatively ineffective as a security measure. The primary focus of this post is on the login and request authentication processes. Consider it a placeholder, as we plan to refactor it into a more formal authentication method.

The response from the login process always includes the access token, implicitly in the authorization header and explicitly in JSON responses. Clients should store this access token for future use.

To utilise this application as an API-like server or a service, client requests must include the previously provided access token in the authorization header.

● An authenticated session — a client web session that has previously logged in, or authenticated; that is, one that has been given an access token by the login process.

● Request authentication — the process of verifying that the access token is present and valid. If a request passes the request authentication process, it indicates that the request comes from an authenticated session.

● Request authentication middleware — this is the new middleware mentioned in the introduction, fully responsible for the request authentication process.

● An authenticated request — a request which has passed the request authentication process.
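The access-token handling in the definitions above can be sketched as a plain function. This is a hypothetical helper for illustration, not code from the repository: extracting the token from an authorization header value boils down to trimming an optional Bearer prefix and rejecting blank values, since any non-blank string is treated as valid in this revision.

```rust
/// Extract the access token from an "authorization" header value.
/// In this revision of the application the token is sent as-is (the login
/// email), so the sketch simply trims an optional "Bearer " prefix.
/// A hypothetical helper for illustration, not code from the repository.
fn token_from_authorization(header: &str) -> Option<String> {
    let value = header.trim();
    if value.is_empty() {
        return None; // a blank token is not valid
    }
    Some(value.strip_prefix("Bearer ").unwrap_or(value).to_string())
}

fn main() {
    // A login email used as the access token passes through unchanged.
    assert_eq!(
        token_from_authorization("chirstian.koblick.10004@gmail.com").as_deref(),
        Some("chirstian.koblick.10004@gmail.com")
    );
    // Whitespace-only headers carry no token.
    assert_eq!(token_from_authorization("   ").as_deref(), None);
    println!("ok");
}
```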

Project Layout

This post introduces several new modules and a new HTML home page, with some modules receiving updates. The updated directory layout for the project is listed below.

— Please note, those marked with are updated, and those marked with are new.

.
├── Cargo.toml ★
├── .env
├── migrations
│ ├── mysql
│ │ └── migrations
│ │ ├── 20231128234321_emp_email_pwd.down.sql
│ │ └── 20231128234321_emp_email_pwd.up.sql
│ └── postgres
│ └── migrations
│ ├── 20231130023147_emp_email_pwd.down.sql
│ └── 20231130023147_emp_email_pwd.up.sql
├── README.md ★
├── src
│ ├── auth_handlers.rs ★
│ ├── auth_middleware.rs ☆
│ ├── bh_libs
│ │ └── api_status.rs ★
│ ├── bh_libs.rs ★
│ ├── config.rs
│ ├── database.rs
│ ├── handlers.rs
│ ├── helper
│ │ ├── app_utils.rs ☆
│ │ ├── constants.rs ☆
│ │ ├── endpoint.rs ★
│ │ └── messages.rs ★
│ ├── helper.rs ★
│ ├── lib.rs ★
│ ├── main.rs
│ ├── middleware.rs
│ ├── models.rs ★
│ └── utils.rs
├── templates
│ ├── auth
│ │ ├── home.html ☆
│ │ └── login.html
│ └── employees.html ★
└── tests
├── common.rs ★
├── test_auth_handlers.rs ☆
└── test_handlers.rs ★

Code Documentation

The code has extensive documentation. It probably has more detail than in this post, as documentation is specific to functionalities and implementation.

To view the code documentation, change to the project directory (where Cargo.toml is located) and run the following command:

▶️Windows 10: cargo doc --open
▶️Ubuntu 22.10: $ cargo doc --open

Issues Covered In This Post

❶ “Complete” the login function.

In the fifth post, we introduced two new login-related routes, /ui/login and /api/login, and used them to demonstrate accepting request data in both application/x-www-form-urlencoded and application/json formats.

In this post, we’ll fully implement a simple email and password login process with no session expiry. In other words, if we can identify an employee by email, and the submitted password matches the database password, then the session is considered logged in or authenticated. The session remains valid indefinitely, until the browser is shut down.

🚀 The handlers for /ui/login and /api/login can conditionally return either HTML or JSON, depending on the content type of the original request.

❷ Protect all existing and new /data/xxx and /ui/xxx routes (except /ui/login) using the new request authentication middleware as mentioned in the introduction.

This means only authenticated requests can access these routes. Recall that we have the following five routes, which query the database and return data in some form:

  1. JSON response route http://0.0.0.0:5000/data/employees — method: POST; content type: application/json; request body: {"last_name": "%chi", "first_name": "%ak"}.
  2. JSON response route http://0.0.0.0:5000/data/employees/%chi/%ak — method GET.
  3. HTML response route http://0.0.0.0:5000/ui/employees — method: POST; content type: application/x-www-form-urlencoded; charset=UTF-8; request body: last_name=%chi&first_name=%ak.
  4. HTML response route http://0.0.0.0:5000/ui/employees/%chi/%ak — method: GET.
  5. HTML response route http://0.0.0.0:5000/helloemployee/%chi/%ak — method: GET.

We implement protection, or request authentication, around these routes, allowing only authenticated sessions to access them. When a request is not authenticated, it gets redirected to the /ui/login route. The handler for this route uses the content type of the original request to determine whether it returns the HTML login page with a user-friendly error message or an appropriate JSON error response.
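The content-type branching described above can be sketched as a plain function. This is an illustration of the described behaviour only; the real handler works with actix-web request and response types.

```rust
/// Decide the response format for a redirected login failure, based on the
/// original request's content type, mirroring the behaviour described above:
/// JSON clients get a JSON error body, browser form posts get the HTML
/// login page. A simplified sketch, not the application's handler code.
fn response_format(content_type: &str) -> &'static str {
    if content_type.starts_with("application/json") {
        "json"
    } else {
        "html"
    }
}

fn main() {
    assert_eq!(response_format("application/json"), "json");
    assert_eq!(response_format("application/x-www-form-urlencoded; charset=UTF-8"), "html");
    println!("ok");
}
```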

The new middleware we’re using to manage the request authentication process is based on the official redirect example. We rename it to src/auth_middleware.rs.

❸ We implement two additional authentication-related routes: /ui/home and /api/logout.

The /ui/home route is protected, and if requests are successful, its handler always returns the HTML home page.

The /api/logout handler always returns the HTML login page.

To recap, we have the following four new authentication-related routes:

  1. HTML/JSON response route http://0.0.0.0:5000/ui/login — method: GET.
  2. HTML/JSON response route http://0.0.0.0:5000/api/login — method: POST.
    content type: application/x-www-form-urlencoded; charset=UTF-8; request body: email=chirstian.koblick.10004@gmail.com&password=password.
    content type: application/json; request body: {"email": "chirstian.koblick.10004@gmail.com", "password": "password"}.
  3. HTML response route http://0.0.0.0:5000/ui/home — method: GET.
  4. HTML response route http://0.0.0.0:5000/api/logout — method: POST.

❹ Updating existing integration tests and creating new ones for new functionalities.

On the Hard-Coded Value for employees.password

In the section Add new fields email and password to the employees table of the fifth post, in the migration script, we hard-coded the string $argon2id$v=19$m=16,t=2,p=1$cTJhazRqRWRHR3NYbEJ2Zg$z7pMnKzV0eU5eJkdq+hycQ for all passwords. It is the hashed version of password.

It was generated using Argon2 Online by Esse.Tools, which is compatible with the argon2 crate. Thus, we can use this crate to verify a plain text password against a hashed one.

Notes On Cookies

❶ In the fourth post, Rust: adding actix-session and actix-identity to an existing actix-web application, we introduced the crate actix-identity, which requires the crate actix-session. However we didn’t make use of them. Now, they are used in the code of this post.

The crate actix-session will create a secured cookie named id. However, since we’re only testing the application with HTTP (not HTTPS), some browsers reject such a secured cookie.

Since this is only a learning application, we’ll make all cookies non-secured. Module src/lib.rs gets updated as follows:

...
            .wrap(SessionMiddleware::builder(
                    redis_store.clone(),
                    secret_key.clone()
                )
                .cookie_secure(false)
                .build(),
            )
...

We call the builder(…) method so that we can access the cookie_secure(…) method and set the id cookie to non-secured.

❷ To handle potential request redirections during the login and the request authentication processes, the application utilises the following server-side per-request cookies: redirect-message and original-content-type.

💥 Request redirection occurs when a request is redirected to /ui/login due to some failure condition. When a request gets redirected elsewhere, request redirection does not apply.

These cookies help persist necessary information between requests. Between requests refers to the original request that gets redirected, resulting in a second and final independent request. Hence, per-request pertains to the original request.

We implement a helper function to create these cookies in the module src/helper/app_utils.rs:

pub fn build_cookie<'a>(
...
    let mut cookie = Cookie::build(name, value)
        .domain(String::from(parts.collect::<Vec<&str>>()[0]))
        .path("/")
        .secure(false)
        .http_only(server_only)
        .same_site(SameSite::Strict)
        .finish();

    if removal {
        cookie.make_removal();
    }
...		

Refer to the following MDN Web Docs article on Set-Cookie for explanations of the settings used in the above function.

Take note of the call to the method make_removal(…) — it’s necessary to remove the server-side per-request cookies when the request completes.
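In spirit, a removal cookie is just the same cookie sent back expired. A minimal sketch of the idea (illustrative only; actix-web’s make_removal(…) produces the real header):

```rust
/// Render a removal cookie: an expired cookie the browser will delete.
/// This mirrors what a removal cookie amounts to (empty value plus an
/// immediate expiry); a sketch for illustration, not actix-web's own output.
fn removal_cookie(name: &str) -> String {
    format!("{name}=; Path=/; Max-Age=0")
}

fn main() {
    // The per-request cookies are removed once the redirected request completes.
    assert_eq!(removal_cookie("redirect-message"), "redirect-message=; Path=/; Max-Age=0");
    assert_eq!(removal_cookie("original-content-type"), "original-content-type=; Path=/; Max-Age=0");
    println!("ok");
}
```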

In addition to the aforementioned temporary cookies, the application also maintains an application-wide publicly available cookie named authorization. This cookie stores the access token after a successful login.

To recap, the application maintains three cookies. In the module src/helper/app_utils.rs, we also implement three pairs of helper methods, build_xxx_cookie(...) and remove_xxx_cookie(...), to help manage the lifetime of these cookies.

HTTP Response Status Code

All HTTP responses — successful and failure, HTML and JSON — have their HTTP response status code set to an appropriate value. In addition, if a response is in JSON format, the field ApiStatus.code also has its value set to the HTTP response status code.

— We’ve introduced ApiStatus in the fifth post. Basically, it’s a generic API status response that gets included in all JSON responses.

We set the HTTP response status code based on “The OAuth 2.0 Authorization Framework”: https://datatracker.ietf.org/doc/html/rfc6749; sections Successful Response and Error Response.
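As a sketch of that mapping (the outcome labels below are hypothetical names for illustration, not values from the code):

```rust
/// Map login outcomes to HTTP status codes, following RFC 6749's
/// Successful Response and Error Response sections as described above.
/// The outcome labels are hypothetical; a sketch, not the application's code.
fn login_status_code(outcome: &str) -> u16 {
    match outcome {
        "success" => 200,           // Successful Response
        "malformed-request" => 400, // extraction failure: BAD REQUEST
        "bad-credentials" => 401,   // unknown email or wrong password: UNAUTHORIZED
        _ => 500,
    }
}

fn main() {
    assert_eq!(login_status_code("success"), 200);
    assert_eq!(login_status_code("malformed-request"), 400);
    assert_eq!(login_status_code("bad-credentials"), 401);
    println!("ok");
}
```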

How the Email-Password Login Process Works

👎 This is the area where I encountered the most difficulties while learning actix-web and actix-web middleware. Initially, I thought both the login and the request authentication processes should be in the same middleware. I attempted that approach, but it was unsuccessful. Eventually, I realised that login should be handled by an endpoint handler function. And request authentication should be managed by the middleware. In this context, the middleware is much like a Python decorator.

The email-password login process exclusively occurs in module src/auth_handlers.rs. In broad terms, this process involves two routes /api/login and /ui/login.

❶ The login process, /api/login handler.

The login process handler is pub async fn login(request: HttpRequest, app_state: web::Data<super::AppState>, body: Bytes) -> Either<impl Responder, HttpResponse>. It works as follows:

⓵ Attempt to extract the submitted login information, a step discussed in the fifth post above. If the extraction fails, it always returns a JSON response of ApiStatus with a code of 400 for BAD REQUEST. And that’s the end of the request.

⓶ Next, we use the submitted email to retrieve the target employee from the database. If there is no match, we call the helper function fn first_stage_login_error_response(request: &HttpRequest, message: &str) -> HttpResponse to handle the failure:

● If the request content type is application/json, we return a JSON response of ApiStatus with a code of 401 for UNAUTHORIZED. The value for the message field is set to the value of the parameter message.

● For the application/x-www-form-urlencoded content type, we set the server-side per-request cookie redirect-message and redirect to route /ui/login:

...
        HttpResponse::Ok()
            .status(StatusCode::SEE_OTHER)
            .append_header((header::LOCATION, "/ui/login"))
            // Note this per-request server-side only cookie.
            .cookie(build_login_redirect_cookie(&request, message))
            .finish()
...

We’ve previously described redirect-message. In the following section, we’ll cover the /ui/login handler.

● An appropriate failure response has been provided, and the request is completed.

⓷ An employee’s been found using an exact email match. The next step is to compare passwords.

The function fn match_password_response(request: &HttpRequest, submitted_login: &EmployeeLogin, selected_login: &EmployeeLogin) -> Result<(), HttpResponse> handles password comparison. It uses the argon2 crate to verify the submitted password against the hashed password stored in the database. We’ve briefly discussed this process in the section On the Hard-Coded Value for employees.password.

● If the passwords don’t match, similar to step ⓶ above, we call the function fn first_stage_login_error_response(request: &HttpRequest, message: &str) -> HttpResponse to return an appropriate HTTP response.

● The passwords don’t match: an appropriate failure response has been provided, and the request is completed.

⓸ Email-password login has been successful. Now, we’re back in the endpoint handler for /api/login, pub async fn login(request: HttpRequest, app_state: web::Data<super::AppState>, body: Bytes) -> Either<impl Responder, HttpResponse>.

...
    // TO_DO: Work in progress -- future implementations will formalise access token.
    let access_token = &selected_login.email;

    // https://docs.rs/actix-identity/latest/actix_identity/
    // Attach a verified user identity to the active session
    Identity::login(&request.extensions(), String::from(access_token)).unwrap();

    // The request content type is "application/x-www-form-urlencoded", returns the home page.
    if request.content_type() == ContentType::form_url_encoded().to_string() {
        Either::Right( HttpResponse::Ok()
            // Note this header.
            .append_header((header::AUTHORIZATION, String::from(access_token)))
            // Note this client-side cookie.
            .cookie(build_authorization_cookie(&request, access_token))
            .content_type(ContentType::html())
            .body(render_home_page(&request))
        )
    }
    else {
        // The request content type is "application/json", returns a JSON content of
        // LoginSuccessResponse.
        // 
        // Token field is the access token which the users need to include in the future 
        // requests to get authenticated and hence access to protected resources.		
        Either::Right( HttpResponse::Ok()
            // Note this header.
            .append_header((header::AUTHORIZATION, String::from(access_token)))
            // Note this client-side cookie.
            .cookie(build_authorization_cookie(&request, access_token))
            .content_type(ContentType::json())
            .body(login_success_json_response(&selected_login.email, &access_token))
        )
    }
...

● The access_token is a work in progress. The main focus of this post is on the login and the request authentication processes. Setting the access_token to just the email is sufficient to get the entire process working, helping us understand how everything comes together better. We’ll refactor this to a more formal type of authentication later.

● The line Identity::login(&request.extensions(), String::from(access_token)).unwrap(); is taken directly from the actix-identity crate. I believe this allows the application to operate as an application server.

● 🚀 Note that for all responses, the access_token is set in both the authorization header and the authorization cookie. This is intended for client usage, for example, in JavaScript. Clients have the option to extract and store this access_token for later use.

● 💥 Take note of this authorization header. It is only available to clients, for example, in JavaScript. The request authentication middleware also attempts to extract the access_token from this header, as explained earlier. Clients set this header explicitly when making requests; at this point it is only a response header, and therefore it will not be present in later requests unless clients set it explicitly.

And, finally:

● If the content type is application/x-www-form-urlencoded, we return the HTML home page as is.

● If the content type is application/json, we return a JSON serialisation of LoginSuccessResponse.

❷ The login page, /ui/login handler.

The login page handler is pub async fn login_page(request: HttpRequest) -> Either<impl Responder, HttpResponse>.

This route can be accessed in the following three ways:

⓵ Direct access from the browser address bar, the login page HTML gets served as is. This is a common use case. The request content type is blank.

⓶ Redirected to by the login process handler as already discussed. It should be apparent that when this handler is called, the server-side per-request cookie redirect-message has already been set. The presence of this cookie signifies that this handler is called after a failed login attempt. The value of the redirect-message cookie is included in the final response, and the HTTP response code is set to 401 for UNAUTHORIZED.

In this scenario, the request content type is available throughout the call stack.

⓷ Redirected to by src/auth_middleware.rs. This middleware is discussed in its own section titled How the Request Authentication Process Works.

At this point, we need to understand that, within the middleware, the closure redirect_to_route = |req: ServiceRequest, route: &str| -> Self::Future:

  • Always creates the server-side per-request original-content-type cookie, with its value being the original request content type.
  • If it redirects to /ui/login, then creates the server-side per-request redirect-message cookie with a value set to the constant UNAUTHORISED_ACCESS_MSG.

⓸ Back to the login page handler pub async fn login_page(request: HttpRequest) -> Either<impl Responder, HttpResponse>:

...
    let mut content_type: String = String::from(request.content_type());
    let mut status_code = StatusCode::OK;
    let mut message = String::from("");

    // Always checks for cookie REDIRECT_MESSAGE.
    if let Some(cookie) = request.cookie(REDIRECT_MESSAGE) {
        message = String::from(cookie.value());
        status_code = StatusCode::UNAUTHORIZED;

        if let Some(cookie) = request.cookie(ORIGINAL_CONTENT_TYPE) {
            if content_type.len() == 0 {
                content_type = String::from(cookie.value());
            }
        }
    }
...

From section ⓶ and section ⓷, it should be clear that the presence of the server-side per-request redirect-message cookie indicates a redirect access. If the request content type is not available, we attempt to retrieve it from the server-side per-request original-content-type cookie.
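The fallback can be sketched as a small std-only function (the signature is illustrative; the real handler works with HttpRequest and its cookies):

```rust
// If the request carries a content type, use it; otherwise fall back to
// the value saved in the server-side per-request original-content-type
// cookie, if any. An empty string means no content type could be found.
fn effective_content_type(
    request_content_type: &str,
    original_content_type_cookie: Option<&str>,
) -> String {
    if !request_content_type.is_empty() {
        return String::from(request_content_type);
    }
    match original_content_type_cookie {
        Some(ct) => String::from(ct),
        None => String::new(),
    }
}
```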

Finally, it delivers the response based on the content type and removes both the redirect-message and original-content-type cookies. Note on the following code:

...
    else {
        Either::Left( 
            ApiStatus::new(http_status_code(status_code)).set_message(&message) 
        )
    }
...	

We implement the Responder trait for ApiStatus as described in the Response with custom type section of the official documentation.

How the Request Authentication Process Works

Now, let’s delve into the discussion of the request authentication middleware. Recall the definition of request authentication:

The essential logic of this new middleware is to determine if a request is from an authenticated session, and then either pass the request through or redirect to an appropriate route.

This logic can be described by the following pseudocode:

Requests to “/favicon.ico” should proceed.

When Logged In
--------------

1. Requests to the routes “/ui/login” and “/api/login”
are redirected to the route “/ui/home”.

2. Requests to any other routes should proceed.

When Not Logged In
------------------

1. Requests to the routes “/ui/login” and “/api/login”
should proceed.

2. Requests to any other route are redirected to
the route “/ui/login”.

This logic should cover all future routes. Since this middleware is registered last, it means that all existing routes and potential future routes are protected by this middleware.
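The pseudocode above can be sketched as a plain decision function (std-only; Destination and next_destination are illustrative names, not the project's actual middleware code):

```rust
#[derive(Debug, PartialEq)]
enum Destination {
    Proceed,
    RedirectToHome,  // "/ui/home"
    RedirectToLogin, // "/ui/login"
}

fn next_destination(logged_in: bool, path: &str) -> Destination {
    // Requests to "/favicon.ico" always proceed.
    if path == "/favicon.ico" {
        return Destination::Proceed;
    }
    let is_login_route = path == "/ui/login" || path == "/api/login";
    match (logged_in, is_login_route) {
        // Logged in, requesting a login route: go to the home page.
        (true, true) => Destination::RedirectToHome,
        // Logged in, any other route: proceed.
        (true, false) => Destination::Proceed,
        // Not logged in, requesting a login route: proceed.
        (false, true) => Destination::Proceed,
        // Not logged in, any other route: go to the login page.
        (false, false) => Destination::RedirectToLogin,
    }
}
```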

A pair of helper functions described below is responsible for managing the request authentication process.

The helper function fn extract_access_token(request: &ServiceRequest) -> Option<String> looks for the access token in:

  • The authorization header, which is set explicitly by clients when making requests.
  • If it isn’t in the header, we look for it in the identity managed by the actix-identity crate as described previously.
  • Note: we could also look in the authorization cookie, but this code has been commented out to focus on testing the identity functionality.

Function fn verify_valid_access_token(request: &ServiceRequest) -> bool is a work in progress. It calls the extract_access_token(...) function to extract the access token. If none is found, the request is not authenticated. If something is found, and it has a non-zero length, the request is considered authenticated. For the time being, this suffices to demonstrate the login and the request authentication processes. As mentioned previously, this will be refactored later on.
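The combined behaviour of these two helpers can be sketched std-only, with the header and identity lookups replaced by plain Option parameters (the real functions take a &ServiceRequest):

```rust
// The authorization header takes precedence; failing that, fall back to
// the identity managed by actix-identity.
fn extract_access_token(
    authorization_header: Option<&str>,
    identity: Option<&str>,
) -> Option<String> {
    if let Some(token) = authorization_header {
        return Some(String::from(token));
    }
    identity.map(String::from)
}

fn verify_valid_access_token(
    authorization_header: Option<&str>,
    identity: Option<&str>,
) -> bool {
    // Work in progress, as in the post: any non-empty token is "valid".
    match extract_access_token(authorization_header, identity) {
        Some(token) => !token.is_empty(),
        None => false,
    }
}
```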

The next essential piece of functionality is the closure redirect_to_route = |req: ServiceRequest, route: &str| -> Self::Future, which was described in an earlier section.

As discussed earlier, this closure also creates the server-side per-request original-content-type cookie. This cookie is rather obscure; to address that, the helper method that creates it comes with extensive documentation explaining all the scenarios where this cookie is required.

The Home Page and the Logout Routes

❶ The home page handler pub async fn home_page(request: HttpRequest) -> impl Responder is simple; it just delivers the HTML home page as is.

The home page HTML itself is also simple, without any CSS. It features a Logout button and other buttons whose event handler methods simply call the existing routes using AJAX, displaying responses in plain JavaScript dialogs.

The AJAX function, runAjaxEx(...), used by the home page, is also available on GitHub. It makes references to some Bootstrap CSS classes, but that should not be a problem for this example.

❷ There is also not much in the logout process handler, async fn logout(request: HttpRequest, user: Identity) -> impl Responder.

The code, especially user.logout(), is taken directly from the actix-identity crate.

The handler removes the application wide authorization cookie and redirects to the login page, delivering the HTML login page as is.

Integration Tests

Test and tests in this section mean integration test and integration tests, respectively.

Code has changed. Existing tests and some common test code must be updated. New tests are added to test new functionalities.

The application now uses cookies, so all tests must enable cookie usage. We’ll also cover this in a later section.

❶ Common test code.

Now that an access_token is required to access protected routes, logging in at the start of every test is not always appropriate. We want to ensure that the code can extract the access_token from the authorization header.

I did look into setup and teardown for tests in Rust. The intention was: in setup, log in and remember the access_token for use in the tests proper; in teardown, log out. But this seems troublesome in Rust, so I gave up on the idea.

Recall from this discussion that currently, anything that is non-blank is considered a valid access_token!

💥 We’ve settled on a compromise for this code revision: we will implement a method that returns a hard-coded access_token. As we proceed with the authentication refactoring, we’ll also update this method accordingly.

In the third post, we’ve incorporated tests following the approach outlined by Luca Palmieri in the 59-page sample extract of his book ZERO TO PRODUCTION IN RUST. Continuing with this approach, we’ll define a simple TestApp in tests/common.rs:

pub struct TestApp {
    pub app_url: String,
}

impl TestApp {
    pub fn mock_access_token(&self) -> String {
        String::from("chirstian.koblick.10004@gmail.com")
    }    
}

And spawn_app() now returns an instance of TestApp. We can then call the method mock_access_token() on this instance to use the hard-coded access_token.

❷ Enable cookies in tests.

We use the reqwest crate to send requests to the application. To enable cookies, we create a client using the builder method and chain to cookie_store(true):

    let client = reqwest::Client::builder()
        .cookie_store(true)
        .build()
        .unwrap();

❸ Existing tests.

All existing tests in tests/test_handlers.rs must be updated as outlined above, for example:

async fn get_helloemployee_has_data() {
    let test_app = &spawn_app().await;

    let client = reqwest::Client::builder()
        .cookie_store(true)
        .build()
        .unwrap();

    let response = client
        .get(make_full_url(&test_app.app_url, "/helloemployee/%chi/%ak"))
        .header(header::AUTHORIZATION, &test_app.mock_access_token())
        .send()
        .await
        .expect("Failed to execute request.");    

    assert_eq!(response.status(), StatusCode::OK);

    let res = response.text().await;
    assert!(res.is_ok(), "Should have a HTML response.");

    // This should now always succeed.
    if let Ok(html) = res {
        assert!(html.contains("Hi first employee found"), "HTML response error.");
    }
}

❹ New tests.

⓵ We have a new test module, tests/test_auth_handlers.rs, exclusively for testing the newly added authentication routes. There are a total of eleven tests, with eight dedicated to login and six focused on accessing existing protected routes without the authorization header set.

⓶ In the existing test module, tests/test_handlers.rs, we’ve added six more tests. These tests focus on accessing existing protected routes without the authorization header set. Their function names end with _no_access_token.

These new tests should be self-explanatory. We will not go into detail.

Some Manual Tests

❶ Home page: demonstrating the project as an application server.

The gallery below shows the home page, and responses from some of the routes:

❷ While logged in, enter http://192.168.0.16:5000/data/employees/%chi/%ak in the browser address bar, we get the JSON response as expected:

Next, enter http://192.168.0.16:5000/ui/login directly in the browser address bar. This should bring us back to the home page.

❸ While not logged in, enter http://192.168.0.16:5000/data/employees/%chi/%ak directly in the browser address bar. This redirects us to the login page with an appropriate message:

❹ Attempt to log in with an incorrect email and/or password:

❺ Access the JSON response route http://192.168.0.16:5000/data/employees with the authorization header. This usage demonstrates the application as an API-like server or a service:

❻ Access http://192.168.0.16:5000/data/employees/%chi/%ak without the authorization header. While the successful response is in JSON, the request lacks a content type. Request authentication fails, and the response is the HTML login page:

❼ Access the same http://192.168.0.16:5000/data/employees/%chi/%ak with the authorization header. This should result in a successful JSON response as expected:

Rust Users Forum Helps

I received a lot of help from the Rust Users Forum while learning actix-web and Rust:

Some Current Issues

❶ The println! calls should be replaced with proper logging. I plan to implement logging to files later on.

❷ The fn first_stage_login_error_response(request: &HttpRequest, message: &str) -> HttpResponse helper function, discussed in this section, redirects requests to the route /ui/login, whose handler is capable of handling both application/x-www-form-urlencoded and application/json. For that reason, this helper function could be refactored to:

fn first_stage_login_error_response(
    request: &HttpRequest,
    message: &str
) -> HttpResponse {
    HttpResponse::Ok()
        .status(StatusCode::SEE_OTHER)
        .append_header((header::LOCATION, "/ui/login"))
        // Note this per-request server-side only cookie.
        .cookie(build_login_redirect_cookie(&request, message))
        .finish()
}

It seems logical, but it does not work when we log in using JSON with either an invalid email or password. The client tools simply report that the request could not be completed. I haven’t been able to work out why yet.

Concluding Remarks

I do apologise that this post is a bit long. I can’t help it; I include all the details which I think are relevant. It has taken nearly two months for me to arrive at this point in the code, and it represents significant learning progress for me.

We haven’t completed this project yet. I have several other objectives in mind. While I’m unsure about the content of the next post for this project, there will be one.

Thank you for reading. I hope you find this post useful. Stay safe, as always.

✿✿✿


🦀 Index of the Complete Series.

Rust: actix-web endpoints which accept both application/x-www-form-urlencoded and application/json content types.

We’re implementing a login process for our actix-web learning application. We undertake some general updates to get ready to support login. We then implement a new /api/login route, which supports both application/x-www-form-urlencoded and application/json content types. In this post, we only implement deserialising the submitted request data, then echo some response. We also add a login page via route /ui/login.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.5.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the following four (4) previous posts:

  1. Rust web application: MySQL server, sqlx, actix-web and tera.
  2. Rust: learning actix-web middleware 01.
  3. Rust: retrofit integration tests to an existing actix-web application.
  4. Rust: adding actix-session and actix-identity to an existing actix-web application.

The code we’re developing in this post is a continuation of the code from the fourth post above. 🚀 To get the code of this fourth post, please use the following command:

git clone -b v0.4.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.4.0.

As already mentioned in the introduction above, in this post the main focus of the login process is deserialising both application/x-www-form-urlencoded and application/json into a struct, ready to support login. I struggled with this issue a little, so I document it as part of my Rust learning journey.

This post introduces a few new modules, some MySQL migration scripts, and a new login HTML page. The updated directory layout for the project is in the screenshot below:

Table of contents

❶ Update Rust to the latest version. At the time of this post, the latest version is 1.75.0. The command to update:

▶️Windows 10: rustup update
▶️Ubuntu 22.10: $ rustup update

We’ve taken CORS into account when we started out this project in this first post.

I’m not quite certain what’d happened, but all of a sudden, it just rejects requests with message Origin is not allowed to make this request.

— Browsers have been updated, perhaps?

Failing to troubleshoot the problem, and seeing that actix-cors was at version 0.7.0, I updated it.

— It does not work with Rust version 1.74.0, hence the Rust update above. This new version of actix-cors seems to fix the above request rejection issue.

❷ Update the employees table, adding new fields email and password.

Using the migration tool SQLx CLI, which we’ve covered in Rust SQLx CLI: database migration with MySQL and PostgreSQL, to update the employees table.

While inside the new directory migrations/mysql/, see project directory layout above, create empty migration files 99999999999999_emp_email_pwd.up.sql and 99999999999999_emp_email_pwd.down.sql using the command:

▶️Windows 10: sqlx migrate add -r emp_email_pwd
▶️Ubuntu 22.10: $ sqlx migrate add -r emp_email_pwd

Populate the two script files with what we would like to do. Please see their contents on GitHub. To apply, run the below command; it’ll take a little while to complete:

▶️Windows 10: sqlx migrate run
▶️Ubuntu 22.10: $ sqlx migrate run

❸ Update src/models.rs to manage new fields employees.email and employees.password.

If we run cargo test now, all integration tests should fail. All integration tests eventually call get_employees(...), which does a select * from employees.... Since the two new fields have been added at specific positions, the field indexes used in get_employees(...) are out of order.
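The index-shift problem can be illustrated with a tiny, std-only sketch; the column lists here are illustrative, not the actual employees schema:

```rust
// Locate a column's ordinal position by name. With `select *`, code that
// reads columns by hard-coded index breaks as soon as new columns are
// inserted before the ones it reads.
fn column_index(columns: &[&str], name: &str) -> Option<usize> {
    columns.iter().position(|c| *c == name)
}
```

For example, if email and password are inserted before first_name, the ordinal of first_name shifts, and any access by the old hard-coded index now reads the wrong column.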

Module src/models.rs gets the following updates:

  1. pub email: String field added to struct Employee.
  2. pub async fn get_employees(...) updated to read Employee.email field. Other fields’ indexes also get updated.
  3. New pub struct EmployeeLogin.
  4. New pub async fn select_employee(...), which optionally selects an employee based on an exact email match.
  5. New pub struct LoginSuccess.
  6. Add "email": "siamak.bernardeschi.67115@gmail.com" to existing tests.

Please see the updated src/models.rs on GitHub. The documentation should be sufficient to help reading the code.

❹ New module src/auth_handlers.rs, where new login routes /ui/login and /api/login are implemented.

http://0.0.0.0:5000/ui/login is a GET route, which just returns the login.html page as HTML.

http://0.0.0.0:5000/api/login is a POST route. This is effectively the application login handler.

💥 This http://0.0.0.0:5000/api/login route is the main focus of this post:

— Its handler method accepts both application/x-www-form-urlencoded and application/json content types, and deserialises the byte stream to struct EmployeeLogin mentioned above.

💥 Please also note that, as already mentioned, in this post the login process does not actually log in: if the submitted data is successfully deserialised, it just echoes a confirmation response in the format of the request content type. If deserialisation fails, it sends back a JSON response containing an error code and a text message.

Examples of valid submitted data for each content type:

✔️ Content type: application/x-www-form-urlencoded; data: email=chirstian.koblick.10004@gmail.com&password=password.

✔️ Content type: application/json; data: {"email": "chirstian.koblick.10004@gmail.com", "password": "password"}.
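To see what deserialisation has to cope with in the form case, here is a deliberately tiny, std-only sketch of splitting a urlencoded body into key/value pairs (the real code uses serde_html_form and serde_json instead, and this sketch skips percent-decoding entirely):

```rust
// Find the value for `key` in a body like "email=...&password=...".
// Purely illustrative: no percent-decoding, no '+' handling.
fn parse_form_pair(body: &str, key: &str) -> Option<String> {
    body.split('&').find_map(|pair| {
        let mut it = pair.splitn(2, '=');
        match (it.next(), it.next()) {
            (Some(k), Some(v)) if k == key => Some(String::from(v)),
            _ => None,
        }
    })
}
```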

#[post("/login")]
pub async fn login(
    request: HttpRequest,
    body: Bytes
) -> HttpResponse {
...
    // Attempts to extract -- deserialising -- request body into EmployeeLogin.
    let api_status = extract_employee_login(&body, request.content_type());
    // Failed to deserialise request body. Returns the error as is.
    if api_status.is_err() {
        return HttpResponse::Ok()
            .content_type(ContentType::json())
            .body(serde_json::to_string(&api_status.err().unwrap()).unwrap());
    }

    // Succeeded to deserialise request body.
    let emp_login: EmployeeLogin = api_status.unwrap();
...	

Note the second parameter body, which is actix_web::web::Bytes; this is the byte-stream representation of the request body.

As an extractor, actix_web::web::Bytes has been mentioned in section Type-safe information extraction | Other. We’re providing our own implementation to do the deserialisation, method extract_employee_login(...) in new module src/helper/endpoint.rs.

pub fn extract_employee_login(
    body: &Bytes, 
    content_type: &str
) -> Result<EmployeeLogin, ApiStatus> {
...
    extractors.push(Extractor { 
        content_type: mime::APPLICATION_WWW_FORM_URLENCODED.to_string(), 
        handler: |body: &Bytes| -> Result<EmployeeLogin, ApiStatus> {
            match from_bytes::<EmployeeLogin>(&body.to_owned().to_vec()) {
                Ok(e) => Ok(e),
                Err(e) => Err(ApiStatus::new(err_code_500()).set_text(&e.to_string()))
            }
        }
    });
...
    extractors.push(Extractor {
        content_type: mime::APPLICATION_JSON.to_string(),
        handler: |body: &Bytes| -> Result<EmployeeLogin, ApiStatus> {
            // From https://stackoverflow.com/a/67340858
            match serde_json::from_slice(&body.to_owned()) {
                Ok(e) => Ok(e),
                Err(e) => Err(ApiStatus::new(err_code_500()).set_text(&e.to_string()))
            }
        }
    });

For application/x-www-form-urlencoded content type, we call method serde_html_form::from_bytes(…) from (new) crate serde_html_form to deserialise the byte stream to EmployeeLogin.

Cargo.toml has been updated to include crate serde_html_form.

And for the application/json content type, we call serde_json::from_slice(…) from the already included serde_json crate to do the work.

These are the essential details of the code. The rest is fairly straightforward, and there’s also sufficient documentation to aid the reading of the code.

💥 Please also note that there are some more new modules, such as src/bh_libs/api_status.rs and src/helper/messages.rs; they’re very small, self-explanatory, and have sufficient documentation where appropriate.

❺ Register new login routes /ui/login and /api/login.

Updated src/lib.rs:
pub async fn run(listener: TcpListener) -> Result<Server, std::io::Error> {
...
            .service(
                web::scope("/ui")
                    .service(handlers::employees_html1)
                    .service(handlers::employees_html2)
                    .service(auth_handlers::login_page)
                    // .service(auth_handlers::home_page),
            )
            .service(
                web::scope("/api")
                    .service(auth_handlers::login)
            )
            .service(
                web::resource("/helloemployee/{last_name}/{first_name}")
                    .wrap(middleware::SayHi)
                    .route(web::get().to(handlers::hi_first_employee_found))
            )
...			

❻ The last addition, the new templates/auth/login.html.

Please note, this login page has only HTML. There is no CSS at all. It looks like a dog’s breakfast, but it does work. There is no client-side validations either.

The Login button POSTs login requests to http://0.0.0.0:5000/api/login, the content type then is application/x-www-form-urlencoded.

For application/json content type, we can use Testfully. (We could also write our own AJAX requests to test.)

❼ As this is not yet the final version of the login process, we’re not writing any integration tests for it yet. We’ll do so in due course…

⓵ For the time being, we’ve written some new code and their associated unit tests. We have also written some documentation examples. The full test with the command cargo test should have all tests pass.

⓶ Manual tests of the new routes.

In the following two successful tests, I run the application server on an Ubuntu 22.10 machine, and run both the login page and Testfully on Windows 10.

Test application/x-www-form-urlencoded submission via login page:

Test application/json submission using Testfully:

In this failure test, I run the application server and Testfully on Windows 10. The submitted application/json data does not have an email field:

It’s been an interesting exercise for me. My understanding of Rust’s improved a little. I hope you find the information in this post useful. Thank you for reading and stay safe as always.

✿✿✿


🦀 Index of the Complete Series.

Rust: adding actix-session and actix-identity to an existing actix-web application.

I’ve been studying user authentication with the actix-web framework. It seems that a popular choice is to use crate actix-identity, which requires crate actix-session. To add these two (2) crates, the code of the existing application must be refactored a little. We first look at code refactoring and integration. Then we briefly discuss the official examples given in the documentation of the 2 (two) mentioned crates.

🦀 Index of the Complete Series.

🚀 Please note, complete code for this post can be downloaded from GitHub with:

git clone -b v0.4.0 https://github.com/behai-nguyen/rust_web_01.git

The actix-web learning application mentioned above has been discussed in the following three (3) previous posts:

  1. Rust web application: MySQL server, sqlx, actix-web and tera.
  2. Rust: learning actix-web middleware 01.
  3. Rust: retrofit integration tests to an existing actix-web application.

The code we’re developing in this post is a continuation of the code from the third post above. 🚀 To get the code of this third post, please use the following command:

git clone -b v0.3.0 https://github.com/behai-nguyen/rust_web_01.git

— Note the tag v0.3.0.

The session storage backend we use with actix-session is RedisSessionStore, which requires a Redis server. We use the Redis Official Docker Image as discussed in the following post:

Code Refactoring and Integration

❶ Adding the two (2) crates to Cargo.toml:

[dependencies]
...
actix-session = {version = "0.8.0", features = ["redis-rs-session"]}
actix-identity = "0.6.0"

For crate actix-session, we need to enable the redis-rs-session feature as per the official document instructions.

❷ Update function run(...) to async.

The current code as in the last (third) post has run(...) in src/lib.rs as a synchronous function:

pub fn run(listener: TcpListener) -> Result<Server, std::io::Error> {
...

This makes instantiating a required instance of RedisSessionStore impossible for me! “Impossible for me” because I tried and could not get it to work. I won’t list out what I’ve tried, it’d be a waste of time.

The next best option is to refactor it to async, and follow the official documentation to register IdentityMiddleware and SessionMiddleware.

Updated src/lib.rs:
pub async fn run(listener: TcpListener) -> Result<Server, std::io::Error> {
    ...

    let pool = database::get_mysql_pool(config.max_connections, &config.database_url).await;

    let secret_key = Key::generate();
    let redis_store = RedisSessionStore::new("redis://127.0.0.1:6379")
        .await
        .unwrap();

    let server = HttpServer::new(move || {
        ...

        App::new()
            .app_data(web::Data::new(AppState {
                db: pool.clone()
            }))
            .wrap(IdentityMiddleware::default())
            .wrap(SessionMiddleware::new(
                    redis_store.clone(),
                    secret_key.clone()
            ))
            .wrap(cors)
			
            ...
    })
    .listen(listener)?
    .run();

    Ok(server)
}

💥 Please note the following:

  1. The two (2) new middleware get registered before the existing Cors middleware (i.e., .wrap(cors)). Recall from the actix-web middleware documentation that middleware get called in the reverse order of registration; we want the Cors middleware to run first so that invalid requests are rejected at an early stage.
  2. Now that run(...) is an async function, we can call .await on database::get_mysql_pool(...) instead of wrapping it in the async_std::task::block_on function.
  3. Apart from the above refactorings, nothing else has been changed.
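The reverse-order rule can be sketched without actix-web at all, by modelling each .wrap(...) layer as a closure around the next one. This is purely illustrative; the names below are hypothetical stand-ins for the real middleware:

```rust
// A pure-Rust sketch (no actix-web needed) of the reverse call order.
// Each `.wrap(...)` layers a new middleware around the app, so the
// last-registered layer sits outermost and sees the request first.
fn call_order() -> Vec<&'static str> {
    // The innermost "app": the endpoint handler.
    let handler = |log: &mut Vec<&'static str>| log.push("handler");

    // Simulate .wrap(IdentityMiddleware), then .wrap(SessionMiddleware),
    // then .wrap(cors): each closure wraps the previously built stack.
    let identity = move |log: &mut Vec<&'static str>| {
        log.push("identity");
        handler(log);
    };
    let session = move |log: &mut Vec<&'static str>| {
        log.push("session");
        identity(log);
    };
    let cors = move |log: &mut Vec<&'static str>| {
        log.push("cors");
        session(log);
    };

    let mut log = Vec::new();
    cors(&mut log); // an incoming request enters at the outermost layer
    log
}

fn main() {
    // cors was registered last, so it runs first.
    assert_eq!(call_order(), ["cors", "session", "identity", "handler"]);
    println!("{:?}", call_order());
}
```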

All functions that call run(...) must also be refactored now that run(...) is an async function: main(), spawn_app(), and all integration test methods that call spawn_app().

❸ Update function main().

Updated src/main.rs:
...
#[actix_web::main]
async fn main() -> Result<(), std::io::Error> {
    ...

    let server = run(listener).await.unwrap();
    server.await
}

Note, the code in the previous version:

    ...
    run(listener)?.await

❹ Update function spawn_app().

Updated tests/common.rs:
pub async fn spawn_app() -> String {
    ...
    let server = run(listener).await.unwrap();
    let _ = tokio::spawn(server);
    ...
}

Note, the code in the previous version:

    ...
    let server = run(listener).expect("Failed to create server");
    let _ = tokio::spawn(server);
    ...

❺ Accordingly, in tests/test_handlers.rs, all calls to spawn_app() have been updated to let root_url = &spawn_app().await;.

tests/test_handlers.rs with get_helloemployee_has_data() updated:
#[actix_web::test]
async fn get_helloemployee_has_data() {
    let root_url = &spawn_app().await;

    ...
}

❻ Other, unrelated and general refactorings.

⓵ This could be regarded as a bug fix.

In src/handlers.rs, endpoint handler methods are async, so where get_employees(...) gets called, we now chain .await to it instead of wrapping it in the async_std::task::block_on function, which made no sense inside an async function.

⓶ In the src/database.rs and src/models.rs modules, the documentation now has both synchronous and asynchronous examples where appropriate.

Some Notes on Crates actix-session and actix-identity Examples

For each crate, I tried out two (2) examples as listed in the documentation: one using cookies and one using Redis. I started off using Testfully as the client, and none of them worked, even though they are short, easy-to-understand examples!

I then tried browsers, which also involved writing a simple HTML page. All of the examples work in browsers.

❶ The actix-session example using cookie.

Content of Cargo.toml:
[dependencies]
# actix-web added here for completeness; version matches the other examples below.
actix-web = "4.4.0"
actix-session = {version = "0.8.0", features = ["cookie-session"]}
log = "0.4.20"
env_logger = "0.10.1"

The complete src/main.rs can be found on GitHub. Visit http://localhost:8080/ in a browser to see how it works.

❷ The actix-session example using Redis.

Content of Cargo.toml:
[dependencies]
actix-web = "4.4.0"
actix-session = {version = "0.8.0", features = ["redis-actor-session"]}

The actual example code, which I put together from the examples listed in the documentation page:

// Imports added for completeness; the documentation examples elide them.
use actix_session::{storage::RedisActorSessionStore, Session, SessionMiddleware};
use actix_web::{cookie::Key, web, App, Error, HttpResponse, HttpServer};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    // The secret key would usually be read from a configuration file/environment variables.
    let secret_key = Key::generate();
    let redis_connection_string = "127.0.0.1:6379";
    HttpServer::new(move ||
            App::new()
            // Add session management to your application using Redis for session state storage
            .wrap(
                SessionMiddleware::new(
                    RedisActorSessionStore::new(redis_connection_string),
                    secret_key.clone()
                )
            )
            .route("/index", web::get().to(index))
            .default_service(web::to(|| HttpResponse::Ok())))            
        .bind(("0.0.0.0", 8080))?
        .run()
        .await
}

async fn index(session: Session) -> Result<&'static str, Error> {    
    // access the session state
    if let Some(count) = session.get::<i32>("counter")? {
        println!("SESSION value: {}", count);
        // modify the session state
        session.insert("counter", count + 1)?;
    } else {
        session.insert("counter", 1)?;
    }

    Ok("Welcome!")
}

In a browser, repeatedly visit http://localhost:8080/index and watch both the browser output and the console output.
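This also hints at why a plain API client that does not persist cookies appears not to work: the counter only advances when the session cookie is sent back. A quick sketch with curl, assuming the server above is running locally, using a shared cookie jar:

```shell
# First request: no cookie yet, so the server starts a session with counter = 1
# and returns the session cookie, which curl saves to cookies.txt.
curl -c cookies.txt http://localhost:8080/index

# Subsequent requests: send the saved session cookie back (-b) and keep the
# jar updated (-c), so the server can read the counter and increment it.
curl -b cookies.txt -c cookies.txt http://localhost:8080/index
```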

❸ The actix-identity example using cookie.

Content of Cargo.toml:
[dependencies]
actix-web = "4.4.0"
actix-identity = "0.6.0"
actix-session = {version = "0.8.0", features = ["cookie-session"]}
env_logger = "0.10.1"

The complete src/main.rs can be found on GitHub.

We describe how to run this example after the listing of the next and last example, as both can be run using the same HTML page.

❹ The actix-identity example using Redis.

Content of Cargo.toml:
[dependencies]
actix-web = "4.4.0"
actix-session = {version = "0.8.0", features = ["redis-rs-session"]}
actix-identity = "0.6.0"

The actual example code, which I put together from the examples listed in the documentation page:

// Imports added for completeness; the documentation examples elide them.
use actix_identity::{Identity, IdentityMiddleware};
use actix_session::{storage::RedisSessionStore, SessionMiddleware};
use actix_web::{
    cookie::Key, get, post, App, HttpMessage, HttpRequest, HttpResponse, HttpServer, Responder,
};

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    let secret_key = Key::generate();
    let redis_store = RedisSessionStore::new("redis://127.0.0.1:6379")
        .await
        .unwrap();

    HttpServer::new(move || {
        App::new()
            // Install the identity framework first.
            .wrap(IdentityMiddleware::default())
            // The identity system is built on top of sessions. You must install the session
            // middleware to leverage `actix-identity`. The session middleware must be mounted
            // AFTER the identity middleware: `actix-web` invokes middleware in the OPPOSITE
            // order of registration when it receives an incoming request.
            .wrap(SessionMiddleware::new(
                 redis_store.clone(),
                 secret_key.clone()
            ))
            .service(index)
            .service(login)
            .service(logout)
    })
    .bind(("0.0.0.0", 8080))?
    .run()
    .await
}

#[get("/")]
async fn index(user: Option<Identity>) -> impl Responder {
    if let Some(user) = user {
        format!("Welcome! {}", user.id().unwrap())
    } else {
        "Welcome Anonymous!".to_owned()
    }
}

#[post("/login")]
async fn login(request: HttpRequest) -> impl Responder {
    // Some kind of authentication should happen here
    // e.g. password-based, biometric, etc.
    // [...]

    let token = String::from("test");

    // attach a verified user identity to the active session
    Identity::login(&request.extensions(), token.into()).unwrap();

    HttpResponse::Ok()
}

#[post("/logout")]
async fn logout(user: Identity) -> impl Responder {
    user.logout();
    HttpResponse::Ok()
}

Both of the above examples can be tested using the following HTML page:

<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8" />
	<meta name="author" content="behai_nguyen@hotmail.com">
    <title>Test</title>
</head>

<body>
	<form method="post" action="http://localhost:8080/login" id="loginForm">
		<button type="submit">Login</button>
	</form>
	
	<form method="post" action="http://localhost:8080/logout" id="logoutForm">
		<button type="submit">Logout</button>
	</form>
</body>
</html>

Run the above HTML page, then:

  1. Click the Login button.
  2. Visit http://localhost:8080/.
  3. Click the Logout button.
  4. Visit http://localhost:8080/ again.
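The same flow can be sketched with curl, again sharing a cookie jar between requests so the session cookie persists (this assumes the server is running locally; the expected responses follow from the index handler shown above):

```shell
# Log in: the server attaches the identity "test" to the session and
# returns a session cookie, saved to cookies.txt.
curl -c cookies.txt -X POST http://localhost:8080/login

# With the cookie, the index route recognises the user ("Welcome! test").
curl -b cookies.txt http://localhost:8080/

# Log out (the session is purged), then hit the index route again,
# which should now respond with "Welcome Anonymous!".
curl -b cookies.txt -c cookies.txt -X POST http://localhost:8080/logout
curl -b cookies.txt http://localhost:8080/
```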

Being able to integrate these two (2) crates is a small step toward the authentication functionality that I’d like to build as part of this learning application.

I write this post primarily as a record of what I’ve learned. I hope you find the information helpful. Thank you for reading, and stay safe as always.

✿✿✿

Feature image source:

🦀 Index of the Complete Series.
