S3
AWS S3 and compatible services (including MinIO, DigitalOcean Spaces, Tencent Cloud Object Storage (COS), and so on) are supported.
For more information about s3-compatible services, refer to Compatible Services.
Capabilities
This service can be used to:
- stat
- read
- write
- create_dir
- delete
- copy
- rename
- list
- presign
- blocking
Configuration
root
: Set the work dir for backend.
bucket
: Set the container name for backend.
endpoint
: Set the endpoint for backend.
region
: Set the region for backend.
access_key_id
: Set the access_key_id for backend.
secret_access_key
: Set the secret_access_key for backend.
security_token
: Set the security_token for backend.
default_storage_class
: Set the default storage_class for backend.
server_side_encryption
: Set the server_side_encryption for backend.
server_side_encryption_aws_kms_key_id
: Set the server_side_encryption_aws_kms_key_id for backend.
server_side_encryption_customer_algorithm
: Set the server_side_encryption_customer_algorithm for backend.
server_side_encryption_customer_key
: Set the server_side_encryption_customer_key for backend.
server_side_encryption_customer_key_md5
: Set the server_side_encryption_customer_key_md5 for backend.
disable_config_load
: Disable AWS config load from env.
enable_virtual_host_style
: Enable virtual host style.
Refer to S3Builder's public API docs for more information.
Temporary security credentials
OpenDAL provides support for S3 temporary security credentials issued by IAM.
To take advantage of this feature, build your S3 backend with Builder::security_token.
Note that OpenDAL will not refresh the temporary security credentials for you; please remember to refresh them in time.
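As a minimal sketch of wiring temporary credentials into the builder (all credential values below are placeholders; the access key, secret key, and security token issued by STS/IAM must be used together):

```rust
use opendal::services::S3;
use opendal::Operator;

async fn build_with_sts() -> opendal::Result<Operator> {
    let mut builder = S3::default();
    builder.bucket("test");
    builder.region("us-east-1");
    // Temporary security credentials: all three values come from the same
    // STS/IAM grant and expire together.
    builder.access_key_id("temporary_access_key_id");
    builder.secret_access_key("temporary_secret_access_key");
    builder.security_token("temporary_security_token");
    Ok(Operator::new(builder)?.finish())
}
```

When the grant expires, rebuild the operator with fresh credentials; OpenDAL does not refresh them in the background.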
Server Side Encryption
OpenDAL provides full support for S3 Server Side Encryption (SSE) features.
The easiest way to configure them is to use helper functions like:
- SSE-KMS with AWS managed KMS key:
server_side_encryption_with_aws_managed_kms_key
- SSE-KMS with customer managed KMS key:
server_side_encryption_with_customer_managed_kms_key
- SSE-S3:
server_side_encryption_with_s3_key
- SSE-C:
server_side_encryption_with_customer_key
If those functions don't fulfill your needs, low-level options are also provided:
- Use service managed kms key
server_side_encryption="aws:kms"
- Use customer provided kms key
server_side_encryption="aws:kms"
server_side_encryption_aws_kms_key_id="your-kms-key"
- Use S3 managed key
server_side_encryption="AES256"
- Use customer key
server_side_encryption_customer_algorithm="AES256"
server_side_encryption_customer_key="base64-of-your-aes256-key"
server_side_encryption_customer_key_md5="base64-of-your-aes256-key-md5"
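As a sketch, the low-level options can be passed as plain config keys through a config map, matching the Via Config examples later in this document (here the SSE-KMS with customer-provided KMS key combination; the bucket, region, and key id are placeholders):

```rust
use std::collections::HashMap;

use opendal::Operator;
use opendal::Scheme;

fn build_with_sse_kms() -> opendal::Result<Operator> {
    let mut map = HashMap::new();
    map.insert("bucket".to_string(), "test".to_string());
    map.insert("region".to_string(), "us-east-1".to_string());
    // Low-level SSE options: use a customer provided kms key.
    map.insert("server_side_encryption".to_string(), "aws:kms".to_string());
    map.insert(
        "server_side_encryption_aws_kms_key_id".to_string(),
        "your-kms-key".to_string(),
    );
    Operator::via_map(Scheme::S3, map)
}
```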
After SSE has been configured, all requests sent by this backend will attach those headers.
Reference: Protecting data using server-side encryption
Example
Via Builder
Basic Setup
use std::sync::Arc;
use anyhow::Result;
use opendal::services::S3;
use opendal::Operator;
#[tokio::main]
async fn main() -> Result<()> {
// Create s3 backend builder.
let mut builder = S3::default();
// Set the root for s3, all operations will happen under this root.
//
// NOTE: the root must be absolute path.
builder.root("/path/to/dir");
// Set the bucket name. This is required.
builder.bucket("test");
// Set the region. This is required for some services; if you don't care
// about it (for example, when using a MinIO service), just set it to "auto"
// and it will be ignored.
builder.region("us-east-1");
// Set the endpoint.
//
// For examples:
// - "https://s3.amazonaws.com"
// - "http://127.0.0.1:9000"
// - "https://oss-ap-northeast-1.aliyuncs.com"
// - "https://cos.ap-seoul.myqcloud.com"
//
// Defaults to "https://s3.amazonaws.com"
builder.endpoint("https://s3.amazonaws.com");
// Set the access_key_id and secret_access_key.
//
// OpenDAL will try to load credentials from the env.
// If credentials are not set and no valid credentials exist in the env,
// OpenDAL will send requests without signing, like an anonymous user.
builder.access_key_id("access_key_id");
builder.secret_access_key("secret_access_key");
let op: Operator = Operator::new(builder)?.finish();
Ok(())
}
S3 with SSE-C
use anyhow::Result;
use log::info;
use opendal::services::S3;
use opendal::Operator;
#[tokio::main]
async fn main() -> Result<()> {
let mut builder = S3::default();
// Setup builders
builder.root("/path/to/dir");
builder.bucket("test");
builder.region("us-east-1");
builder.endpoint("https://s3.amazonaws.com");
builder.access_key_id("access_key_id");
builder.secret_access_key("secret_access_key");
// Enable SSE-C
builder.server_side_encryption_with_customer_key("AES256", "customer_key".as_bytes());
let op = Operator::new(builder)?.finish();
info!("operator: {:?}", op);
// Writing your testing code here.
Ok(())
}
S3 with SSE-KMS and aws managed kms key
use anyhow::Result;
use log::info;
use opendal::services::S3;
use opendal::Operator;
#[tokio::main]
async fn main() -> Result<()> {
let mut builder = S3::default();
// Setup builders
builder.root("/path/to/dir");
builder.bucket("test");
builder.region("us-east-1");
builder.endpoint("https://s3.amazonaws.com");
builder.access_key_id("access_key_id");
builder.secret_access_key("secret_access_key");
// Enable SSE-KMS with aws managed kms key
builder.server_side_encryption_with_aws_managed_kms_key();
let op = Operator::new(builder)?.finish();
info!("operator: {:?}", op);
// Writing your testing code here.
Ok(())
}
S3 with SSE-KMS and customer managed kms key
use anyhow::Result;
use log::info;
use opendal::services::S3;
use opendal::Operator;
#[tokio::main]
async fn main() -> Result<()> {
let mut builder = S3::default();
// Setup builders
builder.root("/path/to/dir");
builder.bucket("test");
builder.region("us-east-1");
builder.endpoint("https://s3.amazonaws.com");
builder.access_key_id("access_key_id");
builder.secret_access_key("secret_access_key");
// Enable SSE-KMS with customer managed kms key
builder.server_side_encryption_with_customer_managed_kms_key("aws_kms_key_id");
let op = Operator::new(builder)?.finish();
info!("operator: {:?}", op);
// Writing your testing code here.
Ok(())
}
S3 with SSE-S3
use anyhow::Result;
use log::info;
use opendal::services::S3;
use opendal::Operator;
#[tokio::main]
async fn main() -> Result<()> {
let mut builder = S3::default();
// Setup builders
builder.root("/path/to/dir");
builder.bucket("test");
builder.region("us-east-1");
builder.endpoint("https://s3.amazonaws.com");
builder.access_key_id("access_key_id");
builder.secret_access_key("secret_access_key");
// Enable SSE-S3
builder.server_side_encryption_with_s3_key();
let op = Operator::new(builder)?.finish();
info!("operator: {:?}", op);
// Writing your testing code here.
Ok(())
}
Via Config
- Rust
- Node.js
- Python
use anyhow::Result;
use opendal::Operator;
use opendal::Scheme;
use std::collections::HashMap;
#[tokio::main]
async fn main() -> Result<()> {
let mut map = HashMap::new();
map.insert("root".to_string(), "/path/to/dir".to_string());
map.insert("bucket".to_string(), "test".to_string());
map.insert("region".to_string(), "us-east-1".to_string());
map.insert("endpoint".to_string(), "https://s3.amazonaws.com".to_string());
map.insert("access_key_id".to_string(), "access_key_id".to_string());
map.insert("secret_access_key".to_string(), "secret_access_key".to_string());
let op: Operator = Operator::via_map(Scheme::S3, map)?;
Ok(())
}
import { Operator } from "opendal";
async function main() {
const op = new Operator("s3", {
root: "/path/to/dir",
bucket: "test",
region: "us-east-1",
endpoint: "https://s3.amazonaws.com",
access_key_id: "access_key_id",
secret_access_key: "secret_access_key",
});
}
import opendal
op = opendal.Operator("s3",
root="/path/to/dir",
bucket="test",
region="us-east-1",
endpoint="https://s3.amazonaws.com",
access_key_id="access_key_id",
secret_access_key="secret_access_key",
)
Compatible Services
AWS S3
AWS S3 is the default implementation of the s3 service. Only bucket
is required.
builder.bucket("<bucket_name>");
Alibaba Object Storage Service (OSS)
OSS is an S3-compatible service provided by Alibaba Cloud.
To connect to OSS, we need to set:
endpoint
: The endpoint of OSS, for example: https://oss-cn-hangzhou.aliyuncs.com
bucket
: The bucket name of OSS.
OSS provides internal endpoints for use inside Alibaba Cloud; please visit OSS Regions and endpoints for more details.
OSS only supports the virtual host style; users may encounter errors like:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
<Code>SecondLevelDomainForbidden</Code>
<Message>The bucket you are attempting to access must be addressed using OSS third level domain.</Message>
<RequestId>62A1C265292C0632377F021F</RequestId>
<HostId>oss-cn-hangzhou.aliyuncs.com</HostId>
</Error>
In that case, please enable virtual host style for requests.
builder.endpoint("https://oss-cn-hangzhou.aliyuncs.com");
builder.region("<region>");
builder.bucket("<bucket_name>");
builder.enable_virtual_host_style();
Minio
MinIO is an open-source S3-compatible service.
To connect to MinIO, we need to set:
endpoint
: The endpoint of MinIO, for example: http://127.0.0.1:9000
region
: The region of MinIO. If you don't care about it, just set it to "auto" and it will be ignored.
bucket
: The bucket name of MinIO.
builder.endpoint("http://127.0.0.1:9000");
builder.region("<region>");
builder.bucket("<bucket_name>");
QingStor Object Storage
QingStor Object Storage is an S3-compatible service provided by QingCloud.
To connect to QingStor Object Storage, we need to set:
endpoint
: The QingStor S3-compatible endpoint, for example: https://s3.pek3b.qingstor.com
bucket
: The bucket name.
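Following the pattern of the snippets above, a minimal builder sketch (the endpoint is an example value and the bucket name is a placeholder):

```rust
builder.endpoint("https://s3.pek3b.qingstor.com");
builder.bucket("<bucket_name>");
```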
Scaleway Object Storage
Scaleway Object Storage is an S3-compatible and multi-AZ redundant object storage service.
To connect to Scaleway Object Storage, we need to set:
endpoint
: The endpoint of Scaleway, for example: https://s3.nl-ams.scw.cloud
region
: The region of Scaleway.
bucket
: The bucket name of Scaleway.
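Following the pattern of the snippets above, a minimal builder sketch (the region here is the example value matching the endpoint; both names are placeholders to adjust):

```rust
builder.endpoint("https://s3.nl-ams.scw.cloud");
builder.region("nl-ams");
builder.bucket("<bucket_name>");
```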
Tencent Cloud Object Storage (COS)
COS is an S3-compatible service provided by Tencent Cloud.
To connect to COS, we need to set:
endpoint
: The endpoint of COS, for example: https://cos.ap-beijing.myqcloud.com
bucket
: The bucket name of COS.
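Following the pattern of the snippets above, a minimal builder sketch (the endpoint is an example value and the bucket name is a placeholder):

```rust
builder.endpoint("https://cos.ap-beijing.myqcloud.com");
builder.bucket("<bucket_name>");
```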
Wasabi Object Storage
Wasabi is an S3-compatible service that advertises cloud storage pricing 80% less than Amazon S3.
To connect to Wasabi, we need to set:
endpoint
: The endpoint of Wasabi, for example: https://s3.us-east-2.wasabisys.com
bucket
: The bucket name of Wasabi.
Refer to What are the service URLs for Wasabi's different storage regions? for more details.
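Following the pattern of the snippets above, a minimal builder sketch (the endpoint is an example region URL and the bucket name is a placeholder):

```rust
builder.endpoint("https://s3.us-east-2.wasabisys.com");
builder.bucket("<bucket_name>");
```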
Cloudflare R2
Cloudflare R2 provides an S3-compatible API.
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
To connect to R2, we need to set:
endpoint
: The endpoint of R2, for example: https://<account_id>.r2.cloudflarestorage.com
bucket
: The bucket name of R2.
region
: When you create a new bucket, the data location is set to Automatic by default, so please use auto for the region.
batch_max_operations
: R2's delete objects will return Internal Error if the batch is larger than 700. Please set this value <= 700 to make sure batch delete works as expected.
enable_exact_buf_write
: R2 requires non-tailing parts to be exactly the same size. Please enable this option to avoid the error All non-trailing parts must have the same length.
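Following the pattern of the snippets above, a minimal builder sketch (the account id and bucket name are placeholders; batch_max_operations and enable_exact_buf_write can additionally be supplied as config keys via the Via Config approach shown earlier):

```rust
builder.endpoint("https://<account_id>.r2.cloudflarestorage.com");
builder.bucket("<bucket_name>");
// R2 buckets use the Automatic data location, so "auto" is the region to pass.
builder.region("auto");
```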
Google Cloud Storage XML API
Google Cloud Storage XML API provides an S3-compatible API.
endpoint
: The endpoint of the Google Cloud Storage XML API, for example: https://storage.googleapis.com
bucket
: The bucket name.
- To access GCS via the S3 API, please enable features = ["native-tls"] in your Cargo.toml to avoid the connection being reset when using rustls. Tracked in https://github.com/seanmonstar/reqwest/issues/1809
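Following the pattern of the snippets above, a minimal builder sketch (the bucket name is a placeholder):

```rust
builder.endpoint("https://storage.googleapis.com");
builder.bucket("<bucket_name>");
```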
Ceph Rados Gateway
Ceph supports a RESTful API that is compatible with the basic data access model of the Amazon S3 API.
For more information, refer to https://docs.ceph.com/en/latest/radosgw/s3/