# Use S3 with pixi
If you want to use S3 object storage to fetch your packages, you can use the `s3://` protocol as a channel.
In the bucket, your objects need to adhere to the standard conda repository structure:
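For example, a bucket hosting a channel might be laid out like this (bucket, channel, and package names are hypothetical):

```
my-s3-bucket/
└── my-channel
    ├── noarch
    │   └── repodata.json
    └── linux-64
        ├── repodata.json
        └── my-private-package-0.1.0-h60d57d3_0.conda
```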
Pixi supports two ways to configure access to your S3 bucket:
- Using AWS credentials from environment variables or AWS configuration files, like in any other AWS tool
- Using pixi's configuration in combination with storing the credentials in pixi's authentication storage
These two options are mutually exclusive! Specifying `s3-options` (see below) deactivates the fetching of AWS credentials.
You can either use the AWS credentials from the conventional locations (by not specifying `s3-options`) or the credentials from pixi's authentication storage (by specifying `s3-options`).
## Using AWS configuration
You can set `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY` as environment variables for Pixi to use.
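For example, using AWS's documented placeholder credentials:

```shell
export AWS_ACCESS_KEY_ID="AKIAIOSFODNN7EXAMPLE"                          # placeholder value
export AWS_SECRET_ACCESS_KEY="wJalrXUtnFEMI/K7MDENG/bPxRiCYEXAMPLEKEY"   # placeholder value
```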
You can also set `AWS_CONFIG_FILE` and `AWS_PROFILE` to use a custom AWS configuration file and profile. For example, a config file using AWS SSO might look like this:
```ini
[profile conda]
sso_account_id = 123456789012
sso_role_name = PowerUserAccess
sso_start_url = https://my-company.awsapps.com/start
sso_region = eu-central-1
region = eu-central-1
output = json
```
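You can then log in via AWS SSO and use the bucket as a channel: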
```shell
$ export AWS_CONFIG_FILE=/path/to/aws.config
$ export AWS_PROFILE=conda
$ aws sso login
Attempting to automatically open the SSO authorization page in your default browser.
If the browser does not open or you wish to use a different device to authorize this request, open the following URL:

https://my-company.awsapps.com/start/#/device

Then enter the code:

DTBC-WFXC
Successfully logged into Start URL: https://my-company.awsapps.com/start
$ pixi search -c s3://my-s3-bucket/channel my-private-package
# ...
```
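In CI, for example on GitHub Actions, you can obtain temporary AWS credentials via OIDC and let Pixi pick them up from the environment: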
```yaml
jobs:
  ci:
    runs-on: ubuntu-latest
    # required for requesting temporary credentials via OIDC
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
      # temporary credentials via OIDC
      - name: Log in to AWS
        uses: aws-actions/configure-aws-credentials@v4
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-poweruser
          aws-region: eu-central-1
      - name: Set up pixi
        # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are set by aws-actions/configure-aws-credentials
        uses: prefix-dev/setup-pixi@v0.8.3
```
## Using pixi's configuration
You can specify `workspace.s3-options` in your `pixi.toml` file.
This might be useful when you want to use a custom S3-compatible host rather than AWS's configuration:
```toml
[workspace.s3-options.my-bucket]
endpoint-url = "https://my-s3-host"
region = "us-east-1"
force-path-style = false
```
You need to configure this for each bucket you use, i.e. use `[workspace.s3-options.<bucket-name>]`.
You can then store the credentials in pixi's authentication storage with `pixi auth login`:

```shell
$ pixi auth login --s3-access-key-id=... --s3-secret-access-key=... s3://my-s3-bucket
Authenticating with s3://my-s3-bucket
$ pixi search my-private-package
# ...
```
You can also specify the `s3-options` in your Pixi configuration file:
```toml
[s3-options.my-bucket]
endpoint-url = "https://my-s3-host"
region = "us-east-1"
force-path-style = false
```
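On GitHub Actions, you can pass the temporary credentials from the OIDC login step directly to `setup-pixi`, which stores them in pixi's authentication storage: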
```yaml
jobs:
  ci:
    runs-on: ubuntu-latest
    # required for requesting temporary credentials via OIDC
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Check out repository
        uses: actions/checkout@v4
      # temporary credentials via OIDC
      - name: Log in to AWS
        uses: aws-actions/configure-aws-credentials@v4
        id: aws
        with:
          role-to-assume: arn:aws:iam::123456789012:role/github-poweruser
          aws-region: eu-central-1
          # expose the temporary credentials as step outputs
          output-credentials: true
      - name: Set up pixi
        uses: prefix-dev/setup-pixi@v0.8.3
        with:
          auth-s3-access-key-id: ${{ steps.aws.outputs.aws-access-key-id }}
          auth-s3-secret-access-key: ${{ steps.aws.outputs.aws-secret-access-key }}
          auth-s3-session-token: ${{ steps.aws.outputs.aws-session-token }}
          auth-host: s3://my-s3-bucket
```
## Public S3 buckets
Public buckets that don't need authentication can be used by specifying the endpoint as a regular `https` URL.
For example, on AWS, you might have a bucket that is publicly accessible via `https://my-public-bucket.s3.eu-central-1.amazonaws.com`.
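Such a bucket can then be used like any other `https`-based channel, for example (bucket, channel, and package names are illustrative):

```shell
pixi search -c https://my-public-bucket.s3.eu-central-1.amazonaws.com/channel my-package
```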
Note that for this, you need to configure your S3 bucket in such a way that it allows public access.
On AWS, you need the `GetObject` and `ListBucket` permissions for this.
Here is an example policy for AWS S3:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-public-bucket/*"
    },
    {
      "Sid": "PublicReadListBucket",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-public-bucket"
    }
  ]
}
```
Cloudflare R2 also supports public buckets, either through a Cloudflare-managed `r2.dev` subdomain or a custom domain under your control; see Cloudflare's documentation on public buckets for details.
## S3-compatible storage
Many other cloud providers offer S3-compatible storage APIs.
You can use them with Pixi by specifying the `s3-options` in your manifest file.
### MinIO
```toml
[workspace.s3-options.my-bucket]
endpoint-url = "https://minio.example.com"
region = "us-east-1"
force-path-style = true
```
### Cloudflare R2
endpoint-url = "https://<account-id>.eu.r2.cloudflarestorage.com"
region = "WEUR"
force-path-style = false
### Wasabi
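Wasabi follows the same pattern; a minimal sketch, assuming Wasabi's `eu-central-1` service URL (endpoint and region are illustrative, check Wasabi's service URLs for your region):

```toml
[workspace.s3-options.my-bucket]
# illustrative endpoint and region; adjust to your Wasabi region
endpoint-url = "https://s3.eu-central-1.wasabisys.com"
region = "eu-central-1"
force-path-style = true
```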
### Backblaze B2
endpoint-url = "https://s3.us-west-004.backblazeb2.com"
region = "us-west-004"
force-path-style = true
### Google Cloud Storage
Note: Pixi also supports `gcs://` URLs.
endpoint-url = "https://storage.googleapis.com"
region = "us-east-1"
force-path-style = false
### Hetzner Object Storage
endpoint-url = "https://fsn1.your-objectstorage.com"
region = "US"
force-path-style = false
## Uploading to S3
You can upload packages to S3 using `rattler-build upload s3`.
For more information, see rattler-build's documentation.
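For example (bucket, channel, and package names are illustrative; check `rattler-build upload s3 --help` for the exact flags of your version):

```shell
rattler-build upload s3 --channel s3://my-s3-bucket/my-channel ./my-private-package-0.1.0-h60d57d3_0.conda
```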
## Re-indexing S3 buckets after uploading new packages
Every time you upload new packages to your package repository, the `repodata.json` file needs to be updated.
This is done automatically by conda package servers like anaconda.org or prefix.dev.
For S3 buckets, on the other hand, you need to do this manually, since an S3 bucket is only a storage system and not a package server.
To re-index an S3 bucket, you can use the `rattler-index` package, which is available on conda-forge.
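For example, run via `pixi exec` in an ephemeral environment; this sketch assumes `rattler-index`'s `s3` subcommand takes flags mirroring the `s3-options` above, so check `rattler-index --help` for the exact interface:

```shell
pixi exec rattler-index s3 s3://my-s3-bucket/my-channel \
  --endpoint-url https://my-s3-host \
  --region us-east-1 \
  --force-path-style
```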