I wrote a blog post on getting started with the Rust AWS SDK a few months ago. I also started my YouTube channel with a video on the same topic. If you haven’t seen it yet, my channel is sliced array (the name will probably change, but it’s good enough for now).

One of the comments asked how to use different AWS CLI profiles with the Rust SDK, and I did a follow-up video on the topic. This post is a supplement to that video.

The Rust AWS SDK is pretty straightforward for the majority of use cases. However, the SDK documentation can be a bit lacking when it comes to customising for specific scenarios. For anything outside the provided examples, you need to start digging into the source code.

Detailed Breakdown of the Code

For specifying CLI profiles, all I did was look at the ConfigLoader source code in the aws-config crate.

The load_from_env() method is now deprecated, and the advice is to use load_defaults() instead. A few methods used in my earlier video on the Rust AWS SDK have since been deprecated, but the new ones are pretty easy to adapt to.
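As a minimal sketch, the change looks like this (load_defaults() takes a BehaviorVersion argument):

// before (now deprecated)
// let shared_config = aws_config::load_from_env().await;

// after
let shared_config = aws_config::load_defaults(aws_config::BehaviorVersion::latest()).await;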

Looking at the source code, we can find a field on the ConfigLoader struct called profile_name_override. That leads us to the profile_name() method implemented for the ConfigLoader struct, and that is what should be used.
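In other words, the builder chain ends up looking roughly like this (just a sketch; my-profile is a placeholder profile name):

let shared_config = aws_config::from_env()
    .profile_name("my-profile") // placeholder; use a profile from your CLI config
    .load()
    .await;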

So let’s think of an example scenario. Let’s list the buckets for multiple profiles as specified in the CLI config files.

For this I’m going to use the same credentials for the same account, but this could just as easily be different accounts or roles.

So the first thing I’m going to do is set up the profile config in the credentials and config files. These are the two AWS CLI configuration files that the SDKs and the CLI work with. On Linux, these are usually found at $HOME/.aws/credentials and $HOME/.aws/config. I used temporary credentials for my account under two different profiles in the credentials file. The two profiles I’m going to use are test1 and test2.

[test1]
aws_access_key_id=ASIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
aws_session_token = IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZVERYLONGSTRINGEXAMPLE

[test2]
aws_access_key_id=ASIAI44QH8DHBEXAMPLE
aws_secret_access_key=je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
aws_session_token = fcZib3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZ2luX2IQoJb3JpZVERYLONGSTRINGEXAMPLE

I also specify the default region to use for these two profiles, in the config file.

[profile test1]
region=ap-southeast-2

[profile test2]
region=ap-southeast-2

Now these profile names can be used in the Rust code I’m going to write to list the S3 buckets in the account using those credentials.

So let’s start with a new Cargo project.

cargo new aws-profiles

And let’s add the crates we need: aws-config, the S3 client crate, and tokio.

cargo add aws-config aws-sdk-s3 tokio --features tokio/full
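If you prefer editing Cargo.toml directly, the dependencies section ends up looking roughly like this (the version numbers are placeholders; use whatever cargo add resolves for you):

[dependencies]
aws-config = "1"
aws-sdk-s3 = "1"
tokio = { version = "1", features = ["full"] }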

After that, I’m going to quickly test that my credentials work by writing the basic code to list buckets using the default credentials. For this, I’m going to use the load_defaults() method.

use aws_config::BehaviorVersion;
use aws_sdk_s3::{Client, Error};

#[tokio::main]
async fn main() -> Result<(), Error> {
    let shared_config = aws_config::load_defaults(BehaviorVersion::latest()).await;
    let client = Client::new(&shared_config);
    let req = client.list_buckets();
    let resp = req.send().await.unwrap();

    let buckets = resp.buckets();
    if !buckets.is_empty() {
        for bucket in buckets {
            println!("{}", bucket.name().unwrap());
        }
    } else {
        println!("no buckets found");
    }

    println!("================================================================================");

    Ok(())
}

When running this with cargo run, we get the list of buckets, so that’s a good sign.

Now let’s modify this code so that it reads the list of profiles to use as a comma-separated value from an environment variable.

One of my pet peeves when reviewing code is seeing someone not use a prefix for the environment variable names specific to their application. Using a prefix instantly makes your code predictable, as there is no way it will interfere with any other setting, even from libraries your own code might be loading. So instead of just PROFILES, or even AWS_PROFILES, I’m going to use MY_PROFILE_NAMES.

...
    let profile_names = match env::var("MY_PROFILE_NAMES") {
        Ok(v) => v,
        Err(_) => {
            eprintln!("env var MY_PROFILE_NAMES is not set");
            std::process::exit(1);
        },
    };
...

After reading the value from the env var, I’m going to split it on commas and iterate over the resulting list.

...
    let prof_names: Vec<&str> = profile_names.split(',').collect();
    for prof_name in prof_names {
        ...
    }
...

Inside the loop, I can start using the profile name to build a config struct with the SDK. The key difference from using the default credentials and config is the switch to the from_env() method. Instead of getting the SdkConfig struct directly, we get a ConfigLoader struct that we can use to override values before kicking off the loading chain. With that, I’m going to override the profile name with the loop value.

...
        let shared_config = aws_config::from_env().profile_name(prof_name).load().await;
...

The complete modified code looks like the following.

use aws_sdk_s3::{Client, Error};
use std::env;

#[tokio::main]
async fn main() -> Result<(), Error> {
    let profile_names = match env::var("MY_PROFILE_NAMES") {
        Ok(v) => v,
        Err(_) => {
            eprintln!("env var MY_PROFILE_NAMES is not set");
            std::process::exit(1);
        },
    };

    let prof_names: Vec<&str> = profile_names.split(',').collect();
    for prof_name in prof_names {
        let shared_config = aws_config::from_env().profile_name(prof_name).load().await;
        let client = Client::new(&shared_config);
        let req = client.list_buckets();
        let resp = req.send().await.unwrap();

        let buckets = resp.buckets();
        if !buckets.is_empty() {
            for bucket in buckets {
                println!("profile {prof_name}: {}", bucket.name().unwrap());
            }
        } else {
            println!("no buckets found for profile {prof_name}");
        }

        println!("================================================================================");
    }

    Ok(())
}

When testing this, I’m setting the MY_PROFILE_NAMES environment variable to test1,test2.

export MY_PROFILE_NAMES=test1,test2
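With the variable exported, another cargo run goes through both profiles:

cargo run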

Because I used credentials from the same account, the lists of buckets are identical. But you get the idea.

tl;dr

So in brief,

  1. Use from_env() from the aws_config crate
  2. Use the profile_name() method on the ConfigLoader struct returned by from_env() (see the one-liner below)
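Put together, it’s just this (my-profile being a placeholder for your own profile name):

let shared_config = aws_config::from_env().profile_name("my-profile").load().await;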