Mirror of https://github.com/rustfs/rustfs.git (synced 2026-01-17 01:30:33 +00:00)

Compare commits: 1.0.0-alph ... feature/me (48 commits)
| SHA1 |
|---|
| 705cc0c9f6 |
| 3e2252e4bb |
| f3a1431fa5 |
| 3bd96bcf10 |
| 20ea591049 |
| cc31e88c91 |
| b5535083de |
| 1e35edf079 |
| 8dd3e8b534 |
| 8e0aeb4fdc |
| abe8a50b5a |
| 61f4d307b5 |
| 3eafeb0ff0 |
| 4abfc9f554 |
| 1057953052 |
| 889c67f359 |
| 1d111464f9 |
| a0b2f5a232 |
| 46557cddd1 |
| 443947e1ac |
| 8821fcc1e7 |
| 17828ec2a8 |
| 94d5b1c1e4 |
| 0bca1fbd56 |
| 52c2d15a4b |
| 352035a06f |
| fe4fabb195 |
| 07c5e7997a |
| 0007b541cd |
| 0f2e4d124c |
| 2e4ce6921b |
| 7178a94792 |
| e8fe9731fd |
| 3ba415740e |
| aeccd14d99 |
| 89a155a35d |
| 67095c05f9 |
| 1229fddb5d |
| 08be8f5472 |
| 0bf25fdefa |
| 9e2fa148ee |
| cb3e496b17 |
| 997f54e700 |
| 1a4e95e940 |
| a3006ab407 |
| e197486c8c |
| 0da943a6a4 |
| 6273b138f6 |
103 .github/s3tests/README.md (vendored, new file)
@@ -0,0 +1,103 @@
# S3 Compatibility Tests Configuration

This directory contains the configuration for running [Ceph S3 compatibility tests](https://github.com/ceph/s3-tests) against RustFS.

## Configuration File

The `s3tests.conf` file is based on the official `s3tests.conf.SAMPLE` from the ceph/s3-tests repository. It uses environment variable substitution via `envsubst` to configure the endpoint and credentials.

### Key Configuration Points

- **Host**: Set via the `${S3_HOST}` environment variable (e.g., `rustfs-single` for single-node, `lb` for multi-node)
- **Port**: 9000 (the standard RustFS port)
- **Credentials**: Uses `${S3_ACCESS_KEY}` and `${S3_SECRET_KEY}` from the workflow environment
- **TLS**: Disabled (`is_secure = False`)
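Generating a usable config is therefore a one-line substitution (the same command the local walkthrough below uses):

```bash
# Assumes S3_ACCESS_KEY / S3_SECRET_KEY are already exported in the shell
export S3_HOST=rustfs-single
envsubst < .github/s3tests/s3tests.conf > /tmp/s3tests.conf
```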
## Test Execution Strategy

### Network Connectivity Fix

Tests run inside a Docker container on the `rustfs-net` network, which allows them to resolve and connect to the RustFS container hostnames. This fixes the "Temporary failure in name resolution" error that occurred when tests ran directly on the GitHub runner host.
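The same behavior can be checked by hand: a throwaway container on `rustfs-net` should resolve the server by hostname (a sketch reusing the `curlimages/curl` image and `/health` endpoint the Mint workflow uses for its readiness probe):

```bash
# Resolves rustfs-single by container hostname on the shared network;
# fails fast if DNS resolution or the health endpoint is unavailable
docker run --rm --network rustfs-net curlimages/curl \
  -sf http://rustfs-single:9000/health
```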
### Performance Optimizations

1. **Parallel execution**: Uses `pytest-xdist` with `-n 4` to run tests in parallel across 4 workers
2. **Load distribution**: Uses `--dist=loadgroup` to distribute test groups across workers
3. **Fail-fast**: Uses `--maxfail=50` to stop after 50 failures, saving time on catastrophic runs (all three options are combined in the sketch below)
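Put together, the three options yield an invocation along these lines (a minimal sketch; it assumes s3-tests is checked out and the config has been generated as shown later in this README):

```bash
# Parallel (4 workers), load-grouped, fail-fast run of the functional suite
S3TEST_CONF=/tmp/s3tests.conf pytest \
  -n 4 --dist=loadgroup \
  --maxfail=50 \
  s3tests/functional/test_s3.py
```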
### Feature Filtering

Tests are filtered using pytest markers (`-m`) to skip features not yet supported by RustFS:

- `lifecycle` - Bucket lifecycle policies
- `versioning` - Object versioning
- `s3website` - Static website hosting
- `bucket_logging` - Bucket logging
- `encryption` / `sse_s3` - Server-side encryption
- `cloud_transition` / `cloud_restore` - Cloud storage transitions
- `lifecycle_expiration` / `lifecycle_transition` - Lifecycle operations

This filtering:

1. Reduces test execution time significantly (from over an hour to roughly 10-15 minutes)
2. Focuses on features RustFS currently supports
3. Avoids hundreds of expected failures
## Running Tests Locally

### Single-Node Test

```bash
# Set credentials
export S3_ACCESS_KEY=rustfsadmin
export S3_SECRET_KEY=rustfsadmin

# Create the shared network (if it does not already exist)
docker network inspect rustfs-net >/dev/null 2>&1 || docker network create rustfs-net

# Start RustFS container
docker run -d --name rustfs-single \
  --network rustfs-net \
  -e RUSTFS_ADDRESS=0.0.0.0:9000 \
  -e RUSTFS_ACCESS_KEY=$S3_ACCESS_KEY \
  -e RUSTFS_SECRET_KEY=$S3_SECRET_KEY \
  -e RUSTFS_VOLUMES="/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3" \
  rustfs-ci

# Generate config
export S3_HOST=rustfs-single
envsubst < .github/s3tests/s3tests.conf > /tmp/s3tests.conf

# Run tests
docker run --rm \
  --network rustfs-net \
  -v /tmp/s3tests.conf:/etc/s3tests.conf:ro \
  python:3.12-slim \
  bash -c '
    apt-get update -qq && apt-get install -y -qq git
    git clone --depth 1 https://github.com/ceph/s3-tests.git /s3-tests
    cd /s3-tests
    pip install -q -r requirements.txt pytest-xdist
    S3TEST_CONF=/etc/s3tests.conf pytest -v -n 4 \
      s3tests/functional/test_s3.py \
      -m "not lifecycle and not versioning and not s3website and not bucket_logging and not encryption and not sse_s3"
  '
```
## Test Results Interpretation

- **PASSED**: The test succeeded; the feature works correctly
- **FAILED**: The test failed, indicating a potential bug or incompatibility
- **ERROR**: Test setup failed (e.g., network issues, missing dependencies)
- **SKIPPED**: The test was skipped due to marker filtering
## Adding New Feature Support

When adding support for a new S3 feature to RustFS:

1. Remove the corresponding marker from the filter in `.github/workflows/e2e-s3tests.yml` (see the sketch below)
2. Run the tests to verify compatibility
3. Fix any failing tests
4. Update this README to reflect the newly supported feature
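For example, once versioning support lands, its clause is simply dropped from the marker expression; everything else about the invocation stays the same (a sketch against the workflow's default filter):

```bash
# Default filter minus "not versioning" - the versioning tests now run
S3TEST_CONF=/tmp/s3tests.conf pytest -v -n 4 \
  s3tests/functional/test_s3.py \
  -m "not lifecycle and not s3website and not bucket_logging and not encryption"
```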
## References

- [Ceph S3 Tests Repository](https://github.com/ceph/s3-tests)
- [S3 API Compatibility](https://docs.aws.amazon.com/AmazonS3/latest/API/)
- [pytest-xdist Documentation](https://pytest-xdist.readthedocs.io/)
185 .github/s3tests/s3tests.conf (vendored, new file)
@@ -0,0 +1,185 @@
# RustFS s3-tests configuration
# Based on: https://github.com/ceph/s3-tests/blob/master/s3tests.conf.SAMPLE
#
# Usage:
#   Single-node: S3_HOST=rustfs-single envsubst < s3tests.conf > /tmp/s3tests.conf
#   Multi-node:  S3_HOST=lb envsubst < s3tests.conf > /tmp/s3tests.conf

[DEFAULT]
## this section is just used for host, port and bucket_prefix

# host set for RustFS - will be substituted via envsubst
host = ${S3_HOST}

# port for RustFS
port = 9000

## say "False" to disable TLS
is_secure = False

## say "False" to disable SSL verification
ssl_verify = False

[fixtures]
## all the buckets created will start with this prefix;
## {random} will be filled with random characters to pad
## the prefix to 30 characters long, and avoid collisions
bucket prefix = rustfs-{random}-

# all the iam account resources (users, roles, etc.) created
# will start with this name prefix
iam name prefix = s3-tests-

# all the iam account resources (users, roles, etc.) created
# will start with this path prefix
iam path prefix = /s3-tests/

[s3 main]
# main display_name
display_name = RustFS Tester

# main user_id
user_id = rustfsadmin

# main email
email = tester@rustfs.local

# zonegroup api_name for bucket location
api_name = default

## main AWS access key
access_key = ${S3_ACCESS_KEY}

## main AWS secret key
secret_key = ${S3_SECRET_KEY}

## replace with the key id obtained when the secret is created, or delete if KMS is not tested
#kms_keyid = 01234567-89ab-cdef-0123-456789abcdef

## Storage classes
#storage_classes = "LUKEWARM, FROZEN"

## Lifecycle debug interval (default: 10)
#lc_debug_interval = 20

## Restore debug interval (default: 100)
#rgw_restore_debug_interval = 60
#rgw_restore_processor_period = 60

[s3 alt]
# alt display_name
display_name = RustFS Alt Tester

## alt email
email = alt@rustfs.local

# alt user_id
user_id = rustfsalt

# alt AWS access key (must be different from s3 main for many tests)
access_key = ${S3_ALT_ACCESS_KEY}

# alt AWS secret key
secret_key = ${S3_ALT_SECRET_KEY}

#[s3 cloud]
## to run the test cases marked "cloud_transition" for transition
## and "cloud_restore" for the restore attribute.
## Note: the waiting time may have to be tweaked depending on
## the I/O latency to the cloud endpoint.

## host set for cloud endpoint
# host = localhost

## port set for cloud endpoint
# port = 8001

## say "False" to disable TLS
# is_secure = False

## cloud endpoint credentials
# access_key = 0555b35654ad1656d804
# secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==

## storage class configured as cloud tier on local rgw server
# cloud_storage_class = CLOUDTIER

## Below are optional -

## Above configured cloud storage class config options
# retain_head_object = false
# allow_read_through = false # change it to enable read_through
# read_through_restore_days = 2
# target_storage_class = Target_SC
# target_path = cloud-bucket

## another regular storage class to test multiple transition rules,
# storage_class = S1

[s3 tenant]
# tenant display_name
display_name = RustFS Tenant Tester

# tenant user_id
user_id = rustfstenant

# tenant AWS access key
access_key = ${S3_ACCESS_KEY}

# tenant AWS secret key
secret_key = ${S3_SECRET_KEY}

# tenant email
email = tenant@rustfs.local

# tenant name
tenant = testx

# the following section needs to be added for all sts-tests
[iam]
# used for iam operations in sts-tests
# email
email = s3@rustfs.local

# user_id
user_id = rustfsiam

# access_key
access_key = ${S3_ACCESS_KEY}

# secret_key
secret_key = ${S3_SECRET_KEY}

# display_name
display_name = RustFS IAM User

# iam account root user for iam_account tests
[iam root]
access_key = ${S3_ACCESS_KEY}
secret_key = ${S3_SECRET_KEY}
user_id = RGW11111111111111111
email = account1@rustfs.local

# iam account root user in a different account than [iam root]
[iam alt root]
access_key = ${S3_ACCESS_KEY}
secret_key = ${S3_SECRET_KEY}
user_id = RGW22222222222222222
email = account2@rustfs.local

# the following section needs to be added when you want to run the Assume Role With Web Identity test
[webidentity]
# used for the assume role with web identity test in sts-tests
# all parameters will be obtained from ceph/qa/tasks/keycloak.py
#token=<access_token>
#aud=<obtained after introspecting token>
#sub=<obtained after introspecting token>
#azp=<obtained after introspecting token>
#user_token=<access token for a user, with attribute Department=[Engineering, Marketing]>
#thumbprint=<obtained from x509 certificate>
#KC_REALM=<name of the realm>
8 .github/workflows/audit.yml (vendored)
@@ -40,11 +40,11 @@ env:
 jobs:
   security-audit:
     name: Security Audit
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     timeout-minutes: 15
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       - name: Install cargo-audit
         uses: taiki-e/install-action@v2
@@ -65,14 +65,14 @@ jobs:

   dependency-review:
     name: Dependency Review
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     if: github.event_name == 'pull_request'
     permissions:
       contents: read
       pull-requests: write
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       - name: Dependency Review
         uses: actions/dependency-review-action@v4
32 .github/workflows/build.yml (vendored)
@@ -83,7 +83,7 @@ jobs:
   # Build strategy check - determine build type based on trigger
   build-check:
     name: Build Strategy Check
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     outputs:
       should_build: ${{ steps.check.outputs.should_build }}
       build_type: ${{ steps.check.outputs.build_type }}
@@ -92,7 +92,7 @@ jobs:
       is_prerelease: ${{ steps.check.outputs.is_prerelease }}
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 0

@@ -167,19 +167,19 @@ jobs:
       matrix:
         include:
           # Linux builds
-          - os: ubuntu-latest
+          - os: ubicloud-standard-4
             target: x86_64-unknown-linux-musl
             cross: false
             platform: linux
-          - os: ubuntu-latest
+          - os: ubicloud-standard-4
             target: aarch64-unknown-linux-musl
             cross: true
             platform: linux
-          - os: ubuntu-latest
+          - os: ubicloud-standard-4
             target: x86_64-unknown-linux-gnu
             cross: false
             platform: linux
-          - os: ubuntu-latest
+          - os: ubicloud-standard-4
             target: aarch64-unknown-linux-gnu
             cross: true
             platform: linux
@@ -203,7 +203,7 @@ jobs:
           #   platform: windows
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 0

@@ -454,7 +454,7 @@ jobs:
           OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
           OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
           OSS_REGION: cn-beijing
-          OSS_ENDPOINT: https://oss-cn-beijing.aliyuncs.com
+          OSS_ENDPOINT: https://oss-accelerate.aliyuncs.com
         shell: bash
         run: |
           BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
@@ -532,7 +532,7 @@ jobs:
     name: Build Summary
     needs: [ build-check, build-rustfs ]
     if: always() && needs.build-check.outputs.should_build == 'true'
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     steps:
       - name: Build completion summary
         shell: bash
@@ -584,7 +584,7 @@ jobs:
     name: Create GitHub Release
     needs: [ build-check, build-rustfs ]
     if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     permissions:
       contents: write
     outputs:
@@ -592,7 +592,7 @@ jobs:
       release_url: ${{ steps.create.outputs.release_url }}
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 0

@@ -670,13 +670,13 @@ jobs:
     name: Upload Release Assets
     needs: [ build-check, build-rustfs, create-release ]
     if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     permissions:
       contents: write
       actions: read
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       - name: Download all build artifacts
         uses: actions/download-artifact@v5
@@ -751,7 +751,7 @@ jobs:
     name: Update Latest Version
     needs: [ build-check, upload-release-assets ]
     if: startsWith(github.ref, 'refs/tags/')
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     steps:
       - name: Update latest.json
         env:
@@ -801,12 +801,12 @@ jobs:
     name: Publish Release
     needs: [ build-check, create-release, upload-release-assets ]
     if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     permissions:
       contents: write
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       - name: Update release notes and publish
         env:
37 .github/workflows/ci.yml (vendored)
@@ -4,7 +4,7 @@
 # you may not use this file except in compliance with the License.
 # You may obtain a copy of the License at
 #
-# http://www.apache.org/licenses/LICENSE-2.0
+# http://www.apache.org/licenses/LICENSE-2.0
 #
 # Unless required by applicable law or agreed to in writing, software
 # distributed under the License is distributed on an "AS IS" BASIS,
@@ -62,17 +62,23 @@ on:
 permissions:
   contents: read

+concurrency:
+  group: ${{ github.workflow }}-${{ github.ref }}
+  cancel-in-progress: true
+
 env:
   CARGO_TERM_COLOR: always
   RUST_BACKTRACE: 1
+  CARGO_BUILD_JOBS: 8

 jobs:

   skip-check:
     name: Skip Duplicate Actions
     permissions:
       actions: write
       contents: read
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     outputs:
       should_skip: ${{ steps.skip_check.outputs.should_skip }}
     steps:
@@ -83,15 +89,13 @@ jobs:
           concurrent_skipping: "same_content_newer"
           cancel_others: true
           paths_ignore: '["*.md", "docs/**", "deploy/**"]'
           # Never skip release events and tag pushes
           do_not_skip: '["workflow_dispatch", "schedule", "merge_group", "release", "push"]'

   typos:
     name: Typos
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     steps:
-      - uses: actions/checkout@v5
+      - uses: actions/checkout@v6
       - uses: dtolnay/rust-toolchain@stable
       - name: Typos check with custom config file
         uses: crate-ci/typos@master
@@ -100,13 +104,11 @@ jobs:
     name: Test and Lint
     needs: skip-check
     if: needs.skip-check.outputs.should_skip != 'true'
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     timeout-minutes: 60
     steps:
-      - name: Delete huge unnecessary tools folder
-        run: rm -rf /opt/hostedtoolcache
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       - name: Setup Rust environment
         uses: ./.github/actions/setup
@@ -116,6 +118,9 @@ jobs:
           github-token: ${{ secrets.GITHUB_TOKEN }}
           cache-save-if: ${{ github.ref == 'refs/heads/main' }}

+      - name: Install cargo-nextest
+        uses: taiki-e/install-action@nextest
+
       - name: Run tests
         run: |
           cargo nextest run --all --exclude e2e_test
@@ -131,11 +136,16 @@ jobs:
     name: End-to-End Tests
     needs: skip-check
     if: needs.skip-check.outputs.should_skip != 'true'
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     timeout-minutes: 30
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

+      - name: Clean up previous test run
+        run: |
+          rm -rf /tmp/rustfs
+          rm -f /tmp/rustfs.log
+
       - name: Setup Rust environment
         uses: ./.github/actions/setup
@@ -155,7 +165,8 @@ jobs:
       - name: Build debug binary
         run: |
           touch rustfs/build.rs
-          cargo build -p rustfs --bins
+          # Limit concurrency to prevent OOM
+          cargo build -p rustfs --bins --jobs 4

       - name: Run end-to-end tests
         run: |
34 .github/workflows/docker.yml (vendored)
@@ -72,7 +72,7 @@ jobs:
   # Check if we should build Docker images
   build-check:
     name: Docker Build Check
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     outputs:
       should_build: ${{ steps.check.outputs.should_build }}
       should_push: ${{ steps.check.outputs.should_push }}
@@ -83,7 +83,7 @@ jobs:
       create_latest: ${{ steps.check.outputs.create_latest }}
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6
         with:
           fetch-depth: 0
           # For workflow_run events, checkout the specific commit that triggered the workflow
@@ -162,11 +162,11 @@ jobs:
           if [[ "$version" == *"alpha"* ]] || [[ "$version" == *"beta"* ]] || [[ "$version" == *"rc"* ]]; then
             build_type="prerelease"
             is_prerelease=true
-            # TODO: 临时修改 - 当前允许 alpha 版本也创建 latest 标签
-            # 等版本稳定后,需要移除下面这行,恢复原有逻辑(只有稳定版本才创建 latest)
+            # TODO: Temporary change - currently allows alpha versions to also create latest tags
+            # After the version is stable, you need to remove the following line and restore the original logic (latest is created only for stable versions)
             if [[ "$version" == *"alpha"* ]]; then
               create_latest=true
-              echo "🧪 Building Docker image for prerelease: $version (临时允许创建 latest 标签)"
+              echo "🧪 Building Docker image for prerelease: $version (temporarily allowing creation of latest tag)"
             else
               echo "🧪 Building Docker image for prerelease: $version"
             fi
@@ -215,11 +215,11 @@ jobs:
         v*alpha*|v*beta*|v*rc*|*alpha*|*beta*|*rc*)
           build_type="prerelease"
           is_prerelease=true
-          # TODO: 临时修改 - 当前允许 alpha 版本也创建 latest 标签
-          # 等版本稳定后,需要移除下面的 if 块,恢复原有逻辑
+          # TODO: Temporary change - currently allows alpha versions to also create latest tags
+          # After the version is stable, you need to remove the if block below and restore the original logic.
           if [[ "$input_version" == *"alpha"* ]]; then
             create_latest=true
-            echo "🧪 Building with prerelease version: $input_version (临时允许创建 latest 标签)"
+            echo "🧪 Building with prerelease version: $input_version (temporarily allowing creation of latest tag)"
           else
             echo "🧪 Building with prerelease version: $input_version"
           fi
@@ -264,11 +264,11 @@ jobs:
     name: Build Docker Images
     needs: build-check
     if: needs.build-check.outputs.should_build == 'true'
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     timeout-minutes: 60
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       - name: Login to Docker Hub
         uses: docker/login-action@v3
@@ -330,9 +330,9 @@ jobs:

           # Add channel tags for prereleases and latest for stable
           if [[ "$CREATE_LATEST" == "true" ]]; then
-            # TODO: 临时修改 - 当前 alpha 版本也会创建 latest 标签
-            # 等版本稳定后,这里的逻辑保持不变,但上游的 CREATE_LATEST 设置需要恢复
-            # Stable release (以及临时的 alpha 版本)
+            # TODO: Temporary change - the current alpha version will also create the latest tag
+            # After the version is stabilized, the logic here remains unchanged, but the upstream CREATE_LATEST setting needs to be restored.
+            # Stable release (and temporary alpha versions)
             TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:latest"
           elif [[ "$BUILD_TYPE" == "prerelease" ]]; then
             # Prerelease channel tags (alpha, beta, rc)
@@ -404,7 +404,7 @@ jobs:
     name: Docker Build Summary
     needs: [ build-check, build-docker ]
     if: always() && needs.build-check.outputs.should_build == 'true'
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     steps:
       - name: Docker build completion summary
         run: |
@@ -429,10 +429,10 @@ jobs:
             "prerelease")
               echo "🧪 Prerelease Docker image has been built with ${VERSION} tags"
               echo "⚠️ This is a prerelease image - use with caution"
-              # TODO: 临时修改 - alpha 版本当前会创建 latest 标签
-              # 等版本稳定后,需要恢复下面的提示信息
+              # TODO: Temporary change - alpha versions currently create the latest tag
+              # After the version is stable, you need to restore the following prompt information
               if [[ "$VERSION" == *"alpha"* ]] && [[ "$CREATE_LATEST" == "true" ]]; then
-                echo "🏷️ Latest tag has been created for alpha version (临时措施)"
+                echo "🏷️ Latest tag has been created for alpha version (temporary measures)"
               else
                 echo "🚫 Latest tag NOT created for prerelease"
               fi
260 .github/workflows/e2e-mint.yml (vendored, new file)
@@ -0,0 +1,260 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name: e2e-mint

on:
  push:
    branches: [ main ]
    paths:
      - ".github/workflows/e2e-mint.yml"
      - "Dockerfile.source"
      - "rustfs/**"
      - "crates/**"
  workflow_dispatch:
    inputs:
      run-multi:
        description: "Run multi-node Mint as well"
        required: false
        default: "false"

env:
  ACCESS_KEY: rustfsadmin
  SECRET_KEY: rustfsadmin
  RUST_LOG: info
  PLATFORM: linux/amd64

jobs:
  mint-single:
    runs-on: ubicloud-standard-4
    timeout-minutes: 40
    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Enable buildx
        uses: docker/setup-buildx-action@v3

      - name: Build RustFS image (source)
        run: |
          DOCKER_BUILDKIT=1 docker buildx build --load \
            --platform ${PLATFORM} \
            -t rustfs-ci \
            -f Dockerfile.source .

      - name: Create network
        run: |
          docker network inspect rustfs-net >/dev/null 2>&1 || docker network create rustfs-net

      - name: Remove existing rustfs-single (if any)
        run: docker rm -f rustfs-single >/dev/null 2>&1 || true

      - name: Start single RustFS
        run: |
          docker run -d --name rustfs-single \
            --network rustfs-net \
            -e RUSTFS_ADDRESS=0.0.0.0:9000 \
            -e RUSTFS_ACCESS_KEY=$ACCESS_KEY \
            -e RUSTFS_SECRET_KEY=$SECRET_KEY \
            -e RUSTFS_VOLUMES="/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3" \
            -v /tmp/rustfs-single:/data \
            rustfs-ci

      - name: Wait for RustFS ready
        run: |
          for i in {1..30}; do
            if docker exec rustfs-single curl -sf http://localhost:9000/health >/dev/null; then
              exit 0
            fi
            sleep 2
          done
          echo "RustFS did not become ready" >&2
          docker logs rustfs-single || true
          exit 1

      - name: Run Mint (single, S3-only)
        run: |
          mkdir -p artifacts/mint-single
          docker run --rm --network rustfs-net \
            --platform ${PLATFORM} \
            -e SERVER_ENDPOINT=rustfs-single:9000 \
            -e ACCESS_KEY=$ACCESS_KEY \
            -e SECRET_KEY=$SECRET_KEY \
            -e ENABLE_HTTPS=0 \
            -e SERVER_REGION=us-east-1 \
            -e RUN_ON_FAIL=1 \
            -e MINT_MODE=core \
            -v ${GITHUB_WORKSPACE}/artifacts/mint-single:/mint/log \
            --entrypoint /mint/mint.sh \
            minio/mint:edge \
            awscli aws-sdk-go aws-sdk-java-v2 aws-sdk-php aws-sdk-ruby s3cmd s3select

      - name: Collect RustFS logs
        run: |
          mkdir -p artifacts/rustfs-single
          docker logs rustfs-single > artifacts/rustfs-single/rustfs.log || true

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: mint-single
          path: artifacts/**

  mint-multi:
    if: github.event_name == 'workflow_dispatch' && github.event.inputs.run-multi == 'true'
    needs: mint-single
    runs-on: ubicloud-standard-4
    timeout-minutes: 60
    steps:
      - name: Checkout
        uses: actions/checkout@v6

      - name: Enable buildx
        uses: docker/setup-buildx-action@v3

      - name: Build RustFS image (source)
        run: |
          DOCKER_BUILDKIT=1 docker buildx build --load \
            --platform ${PLATFORM} \
            -t rustfs-ci \
            -f Dockerfile.source .

      - name: Prepare cluster compose
        run: |
          cat > compose.yml <<'EOF'
          version: '3.8'
          services:
            rustfs1:
              image: rustfs-ci
              hostname: rustfs1
              networks: [rustfs-net]
              environment:
                - RUSTFS_ADDRESS=0.0.0.0:9000
                - RUSTFS_ACCESS_KEY=${ACCESS_KEY}
                - RUSTFS_SECRET_KEY=${SECRET_KEY}
                - RUSTFS_VOLUMES=/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3
              volumes:
                - rustfs1-data:/data
            rustfs2:
              image: rustfs-ci
              hostname: rustfs2
              networks: [rustfs-net]
              environment:
                - RUSTFS_ADDRESS=0.0.0.0:9000
                - RUSTFS_ACCESS_KEY=${ACCESS_KEY}
                - RUSTFS_SECRET_KEY=${SECRET_KEY}
                - RUSTFS_VOLUMES=/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3
              volumes:
                - rustfs2-data:/data
            rustfs3:
              image: rustfs-ci
              hostname: rustfs3
              networks: [rustfs-net]
              environment:
                - RUSTFS_ADDRESS=0.0.0.0:9000
                - RUSTFS_ACCESS_KEY=${ACCESS_KEY}
                - RUSTFS_SECRET_KEY=${SECRET_KEY}
                - RUSTFS_VOLUMES=/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3
              volumes:
                - rustfs3-data:/data
            rustfs4:
              image: rustfs-ci
              hostname: rustfs4
              networks: [rustfs-net]
              environment:
                - RUSTFS_ADDRESS=0.0.0.0:9000
                - RUSTFS_ACCESS_KEY=${ACCESS_KEY}
                - RUSTFS_SECRET_KEY=${SECRET_KEY}
                - RUSTFS_VOLUMES=/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3
              volumes:
                - rustfs4-data:/data
            lb:
              image: haproxy:2.9
              hostname: lb
              networks: [rustfs-net]
              ports:
                - "9000:9000"
              volumes:
                - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
          networks:
            rustfs-net:
              name: rustfs-net
          volumes:
            rustfs1-data:
            rustfs2-data:
            rustfs3-data:
            rustfs4-data:
          EOF

          cat > haproxy.cfg <<'EOF'
          defaults
            mode http
            timeout connect 5s
            timeout client 30s
            timeout server 30s

          frontend fe_s3
            bind *:9000
            default_backend be_s3

          backend be_s3
            balance roundrobin
            server s1 rustfs1:9000 check
            server s2 rustfs2:9000 check
            server s3 rustfs3:9000 check
            server s4 rustfs4:9000 check
          EOF

      - name: Launch cluster
        run: docker compose -f compose.yml up -d

      - name: Wait for LB ready
        run: |
          for i in {1..60}; do
            if docker run --rm --network rustfs-net curlimages/curl -sf http://lb:9000/health >/dev/null; then
              exit 0
            fi
            sleep 2
          done
          echo "LB or backend not ready" >&2
          docker compose -f compose.yml logs --tail=200 || true
          exit 1

      - name: Run Mint (multi, S3-only)
        run: |
          mkdir -p artifacts/mint-multi
          docker run --rm --network rustfs-net \
            --platform ${PLATFORM} \
            -e SERVER_ENDPOINT=lb:9000 \
            -e ACCESS_KEY=$ACCESS_KEY \
            -e SECRET_KEY=$SECRET_KEY \
            -e ENABLE_HTTPS=0 \
            -e SERVER_REGION=us-east-1 \
            -e RUN_ON_FAIL=1 \
            -e MINT_MODE=core \
            -v ${GITHUB_WORKSPACE}/artifacts/mint-multi:/mint/log \
            --entrypoint /mint/mint.sh \
            minio/mint:edge \
            awscli aws-sdk-go aws-sdk-java-v2 aws-sdk-php aws-sdk-ruby s3cmd s3select

      - name: Collect logs
        run: |
          mkdir -p artifacts/cluster
          docker compose -f compose.yml logs --no-color > artifacts/cluster/cluster.log || true

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: mint-multi
          path: artifacts/**
422 .github/workflows/e2e-s3tests.yml (vendored, new file)
@@ -0,0 +1,422 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name: e2e-s3tests

on:
  workflow_dispatch:
    inputs:
      test-mode:
        description: "Test mode to run"
        required: true
        type: choice
        default: "single"
        options:
          - single
          - multi
      xdist:
        description: "Enable pytest-xdist (parallel). '0' to disable."
        required: false
        default: "0"
      maxfail:
        description: "Stop after N failures (debug friendly)"
        required: false
        default: "1"
      markexpr:
        description: "pytest -m expression (feature filters)"
        required: false
        default: "not lifecycle and not versioning and not s3website and not bucket_logging and not encryption"

env:
  # main user
  S3_ACCESS_KEY: rustfsadmin
  S3_SECRET_KEY: rustfsadmin
  # alt user (must be different from main for many s3-tests)
  S3_ALT_ACCESS_KEY: rustfsalt
  S3_ALT_SECRET_KEY: rustfsalt

  S3_REGION: us-east-1

  RUST_LOG: info
  PLATFORM: linux/amd64

defaults:
  run:
    shell: bash

jobs:
  s3tests-single:
    if: github.event.inputs.test-mode == 'single'
    runs-on: ubicloud-standard-4
    timeout-minutes: 120
    steps:
      - uses: actions/checkout@v6

      - name: Enable buildx
        uses: docker/setup-buildx-action@v3

      - name: Build RustFS image (source, cached)
        run: |
          DOCKER_BUILDKIT=1 docker buildx build --load \
            --platform ${PLATFORM} \
            --cache-from type=gha \
            --cache-to type=gha,mode=max \
            -t rustfs-ci \
            -f Dockerfile.source .

      - name: Create network
        run: docker network inspect rustfs-net >/dev/null 2>&1 || docker network create rustfs-net

      - name: Remove existing rustfs-single (if any)
        run: docker rm -f rustfs-single >/dev/null 2>&1 || true

      - name: Start single RustFS
        run: |
          docker run -d --name rustfs-single \
            --network rustfs-net \
            -p 9000:9000 \
            -e RUSTFS_ADDRESS=0.0.0.0:9000 \
            -e RUSTFS_ACCESS_KEY=$S3_ACCESS_KEY \
            -e RUSTFS_SECRET_KEY=$S3_SECRET_KEY \
            -e RUSTFS_VOLUMES="/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3" \
            -v /tmp/rustfs-single:/data \
            rustfs-ci

      - name: Wait for RustFS ready
        run: |
          for i in {1..60}; do
            if curl -sf http://127.0.0.1:9000/health >/dev/null 2>&1; then
              echo "RustFS is ready"
              exit 0
            fi

            if [ "$(docker inspect -f '{{.State.Running}}' rustfs-single 2>/dev/null)" != "true" ]; then
              echo "RustFS container not running" >&2
              docker logs rustfs-single || true
              exit 1
            fi

            sleep 2
          done

          echo "Health check timed out" >&2
          docker logs rustfs-single || true
          exit 1

      - name: Generate s3tests config
        run: |
          export S3_HOST=127.0.0.1
          envsubst < .github/s3tests/s3tests.conf > s3tests.conf

      - name: Provision s3-tests alt user (required by suite)
        run: |
          python3 -m pip install --user --upgrade pip awscurl
          export PATH="$HOME/.local/bin:$PATH"

          # Admin API requires AWS SigV4 signing. awscurl is used by RustFS codebase as well.
          awscurl \
            --service s3 \
            --region "${S3_REGION}" \
            --access_key "${S3_ACCESS_KEY}" \
            --secret_key "${S3_SECRET_KEY}" \
            -X PUT \
            -H 'Content-Type: application/json' \
            -d '{"secretKey":"'"${S3_ALT_SECRET_KEY}"'","status":"enabled","policy":"readwrite"}' \
            "http://127.0.0.1:9000/rustfs/admin/v3/add-user?accessKey=${S3_ALT_ACCESS_KEY}"

          # Explicitly attach built-in policy via policy mapping.
          # s3-tests relies on alt client being able to ListBuckets during setup cleanup.
          awscurl \
            --service s3 \
            --region "${S3_REGION}" \
            --access_key "${S3_ACCESS_KEY}" \
            --secret_key "${S3_SECRET_KEY}" \
            -X PUT \
            "http://127.0.0.1:9000/rustfs/admin/v3/set-user-or-group-policy?policyName=readwrite&userOrGroup=${S3_ALT_ACCESS_KEY}&isGroup=false"

          # Sanity check: alt user can list buckets (should not be AccessDenied).
          awscurl \
            --service s3 \
            --region "${S3_REGION}" \
            --access_key "${S3_ALT_ACCESS_KEY}" \
            --secret_key "${S3_ALT_SECRET_KEY}" \
            -X GET \
            "http://127.0.0.1:9000/" >/dev/null

      - name: Prepare s3-tests
        run: |
          python3 -m pip install --user --upgrade pip tox
          export PATH="$HOME/.local/bin:$PATH"
          git clone --depth 1 https://github.com/ceph/s3-tests.git s3-tests

      - name: Run ceph s3-tests (debug friendly)
        run: |
          export PATH="$HOME/.local/bin:$PATH"
          mkdir -p artifacts/s3tests-single

          cd s3-tests

          set -o pipefail

          MAXFAIL="${{ github.event.inputs.maxfail }}"
          if [ -z "$MAXFAIL" ]; then MAXFAIL="1"; fi

          MARKEXPR="${{ github.event.inputs.markexpr }}"
          if [ -z "$MARKEXPR" ]; then MARKEXPR="not lifecycle and not versioning and not s3website and not bucket_logging and not encryption"; fi

          XDIST="${{ github.event.inputs.xdist }}"
          if [ -z "$XDIST" ]; then XDIST="0"; fi
          XDIST_ARGS=""
          if [ "$XDIST" != "0" ]; then
            # Add pytest-xdist to requirements.txt so tox installs it inside
            # its virtualenv. Installing outside tox does NOT work.
            echo "pytest-xdist" >> requirements.txt
            XDIST_ARGS="-n $XDIST --dist=loadgroup"
          fi

          # Run tests from s3tests/functional (boto2+boto3 combined directory).
          S3TEST_CONF=${GITHUB_WORKSPACE}/s3tests.conf \
          tox -- \
            -vv -ra --showlocals --tb=long \
            --maxfail="$MAXFAIL" \
            --junitxml=${GITHUB_WORKSPACE}/artifacts/s3tests-single/junit.xml \
            $XDIST_ARGS \
            s3tests/functional/test_s3.py \
            -m "$MARKEXPR" \
            2>&1 | tee ${GITHUB_WORKSPACE}/artifacts/s3tests-single/pytest.log

      - name: Collect RustFS logs
        if: always()
        run: |
          mkdir -p artifacts/rustfs-single
          docker logs rustfs-single > artifacts/rustfs-single/rustfs.log 2>&1 || true
          docker inspect rustfs-single > artifacts/rustfs-single/inspect.json || true

      - name: Upload artifacts
        if: always() && env.ACT != 'true'
        uses: actions/upload-artifact@v4
        with:
          name: s3tests-single
          path: artifacts/**

  s3tests-multi:
    if: github.event_name == 'workflow_dispatch' && github.event.inputs.test-mode == 'multi'
    runs-on: ubicloud-standard-4
    timeout-minutes: 150
    steps:
      - uses: actions/checkout@v6

      - name: Enable buildx
        uses: docker/setup-buildx-action@v3

      - name: Build RustFS image (source, cached)
        run: |
          DOCKER_BUILDKIT=1 docker buildx build --load \
            --platform ${PLATFORM} \
            --cache-from type=gha \
            --cache-to type=gha,mode=max \
            -t rustfs-ci \
            -f Dockerfile.source .

      - name: Prepare cluster compose
        run: |
          cat > compose.yml <<'EOF'
          services:
            rustfs1:
              image: rustfs-ci
              hostname: rustfs1
              networks: [rustfs-net]
              environment:
                RUSTFS_ADDRESS: "0.0.0.0:9000"
                RUSTFS_ACCESS_KEY: ${S3_ACCESS_KEY}
                RUSTFS_SECRET_KEY: ${S3_SECRET_KEY}
                RUSTFS_VOLUMES: "/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3"
              volumes:
                - rustfs1-data:/data
            rustfs2:
              image: rustfs-ci
              hostname: rustfs2
              networks: [rustfs-net]
              environment:
                RUSTFS_ADDRESS: "0.0.0.0:9000"
                RUSTFS_ACCESS_KEY: ${S3_ACCESS_KEY}
                RUSTFS_SECRET_KEY: ${S3_SECRET_KEY}
                RUSTFS_VOLUMES: "/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3"
              volumes:
                - rustfs2-data:/data
            rustfs3:
              image: rustfs-ci
              hostname: rustfs3
              networks: [rustfs-net]
              environment:
                RUSTFS_ADDRESS: "0.0.0.0:9000"
                RUSTFS_ACCESS_KEY: ${S3_ACCESS_KEY}
                RUSTFS_SECRET_KEY: ${S3_SECRET_KEY}
                RUSTFS_VOLUMES: "/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3"
              volumes:
                - rustfs3-data:/data
            rustfs4:
              image: rustfs-ci
              hostname: rustfs4
              networks: [rustfs-net]
              environment:
                RUSTFS_ADDRESS: "0.0.0.0:9000"
                RUSTFS_ACCESS_KEY: ${S3_ACCESS_KEY}
                RUSTFS_SECRET_KEY: ${S3_SECRET_KEY}
                RUSTFS_VOLUMES: "/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3"
              volumes:
                - rustfs4-data:/data
            lb:
              image: haproxy:2.9
              hostname: lb
              networks: [rustfs-net]
              ports:
                - "9000:9000"
              volumes:
                - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
          networks:
            rustfs-net:
              name: rustfs-net
          volumes:
            rustfs1-data:
            rustfs2-data:
            rustfs3-data:
            rustfs4-data:
          EOF

          cat > haproxy.cfg <<'EOF'
          defaults
            mode http
            timeout connect 5s
            timeout client 30s
            timeout server 30s

          frontend fe_s3
            bind *:9000
            default_backend be_s3

          backend be_s3
            balance roundrobin
            server s1 rustfs1:9000 check
            server s2 rustfs2:9000 check
            server s3 rustfs3:9000 check
            server s4 rustfs4:9000 check
          EOF

      - name: Launch cluster
        run: docker compose -f compose.yml up -d

      - name: Wait for LB ready
        run: |
          for i in {1..90}; do
            if curl -sf http://127.0.0.1:9000/health >/dev/null 2>&1; then
              echo "Load balancer is ready"
              exit 0
            fi
            sleep 2
          done
          echo "LB or backend not ready" >&2
          docker compose -f compose.yml logs --tail=200 || true
          exit 1

      - name: Generate s3tests config
        run: |
          export S3_HOST=127.0.0.1
          envsubst < .github/s3tests/s3tests.conf > s3tests.conf

      - name: Provision s3-tests alt user (required by suite)
        run: |
          python3 -m pip install --user --upgrade pip awscurl
          export PATH="$HOME/.local/bin:$PATH"

          awscurl \
            --service s3 \
            --region "${S3_REGION}" \
            --access_key "${S3_ACCESS_KEY}" \
            --secret_key "${S3_SECRET_KEY}" \
            -X PUT \
            -H 'Content-Type: application/json' \
            -d '{"secretKey":"'"${S3_ALT_SECRET_KEY}"'","status":"enabled","policy":"readwrite"}' \
            "http://127.0.0.1:9000/rustfs/admin/v3/add-user?accessKey=${S3_ALT_ACCESS_KEY}"

          awscurl \
            --service s3 \
            --region "${S3_REGION}" \
            --access_key "${S3_ACCESS_KEY}" \
            --secret_key "${S3_SECRET_KEY}" \
            -X PUT \
            "http://127.0.0.1:9000/rustfs/admin/v3/set-user-or-group-policy?policyName=readwrite&userOrGroup=${S3_ALT_ACCESS_KEY}&isGroup=false"

          awscurl \
            --service s3 \
            --region "${S3_REGION}" \
            --access_key "${S3_ALT_ACCESS_KEY}" \
            --secret_key "${S3_ALT_SECRET_KEY}" \
            -X GET \
            "http://127.0.0.1:9000/" >/dev/null

      - name: Prepare s3-tests
        run: |
          python3 -m pip install --user --upgrade pip tox
          export PATH="$HOME/.local/bin:$PATH"
          git clone --depth 1 https://github.com/ceph/s3-tests.git s3-tests

      - name: Run ceph s3-tests (multi, debug friendly)
        run: |
          export PATH="$HOME/.local/bin:$PATH"
          mkdir -p artifacts/s3tests-multi

          cd s3-tests

          set -o pipefail

          MAXFAIL="${{ github.event.inputs.maxfail }}"
          if [ -z "$MAXFAIL" ]; then MAXFAIL="1"; fi

          MARKEXPR="${{ github.event.inputs.markexpr }}"
          if [ -z "$MARKEXPR" ]; then MARKEXPR="not lifecycle and not versioning and not s3website and not bucket_logging and not encryption"; fi

          XDIST="${{ github.event.inputs.xdist }}"
          if [ -z "$XDIST" ]; then XDIST="0"; fi
          XDIST_ARGS=""
          if [ "$XDIST" != "0" ]; then
            # Add pytest-xdist to requirements.txt so tox installs it inside
            # its virtualenv. Installing outside tox does NOT work.
            echo "pytest-xdist" >> requirements.txt
            XDIST_ARGS="-n $XDIST --dist=loadgroup"
          fi

          # Run tests from s3tests/functional (boto2+boto3 combined directory).
          S3TEST_CONF=${GITHUB_WORKSPACE}/s3tests.conf \
          tox -- \
            -vv -ra --showlocals --tb=long \
            --maxfail="$MAXFAIL" \
            --junitxml=${GITHUB_WORKSPACE}/artifacts/s3tests-multi/junit.xml \
            $XDIST_ARGS \
            s3tests/functional/test_s3.py \
            -m "$MARKEXPR" \
            2>&1 | tee ${GITHUB_WORKSPACE}/artifacts/s3tests-multi/pytest.log

      - name: Collect logs
        if: always()
        run: |
          mkdir -p artifacts/cluster
          docker compose -f compose.yml logs --no-color > artifacts/cluster/cluster.log 2>&1 || true

      - name: Upload artifacts
        if: always() && env.ACT != 'true'
        uses: actions/upload-artifact@v4
        with:
          name: s3tests-multi
          path: artifacts/**
40 .github/workflows/helm-package.yml (vendored)
@@ -1,9 +1,23 @@
+# Copyright 2024 RustFS Team
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
 name: Publish helm chart to artifacthub

 on:
   workflow_run:
-    workflows: ["Build and Release"]
-    types: [completed]
+    workflows: [ "Build and Release" ]
+    types: [ completed ]

 permissions:
   contents: read
@@ -13,7 +27,7 @@ env:

 jobs:
   build-helm-package:
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     # Only run on successful builds triggered by tag pushes (version format: x.y.z or x.y.z-suffix)
     if: |
       github.event.workflow_run.conclusion == 'success' &&
@@ -22,9 +36,9 @@ jobs:

     steps:
       - name: Checkout helm chart repo
-        uses: actions/checkout@v2
+        uses: actions/checkout@v6

-      - name: Replace chart appversion
+      - name: Replace chart app version
         run: |
           set -e
           set -x
@@ -40,7 +54,7 @@ jobs:
           cp helm/README.md helm/rustfs/
           package_version=$(echo $new_version | awk -F '-' '{print $2}' | awk -F '.' '{print $NF}')
           helm package ./helm/rustfs --destination helm/rustfs/ --version "0.0.$package_version"

       - name: Upload helm package as artifact
         uses: actions/upload-artifact@v4
         with:
@@ -49,25 +63,25 @@ jobs:
           retention-days: 1

   publish-helm-package:
-    runs-on: ubuntu-latest
-    needs: [build-helm-package]
+    runs-on: ubicloud-standard-4
+    needs: [ build-helm-package ]

     steps:
       - name: Checkout helm package repo
-        uses: actions/checkout@v2
+        uses: actions/checkout@v6
         with:
-          repository: rustfs/helm
+          repository: rustfs/helm
           token: ${{ secrets.RUSTFS_HELM_PACKAGE }}

       - name: Download helm package
         uses: actions/download-artifact@v4
         with:
           name: helm-package
           path: ./

       - name: Set up helm
         uses: azure/setup-helm@v4.3.0

       - name: Generate index
         run: helm repo index . --url https://charts.rustfs.com
2 .github/workflows/issue-translator.yml (vendored)
@@ -25,7 +25,7 @@ permissions:

 jobs:
   build:
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     steps:
       - uses: usthe/issues-translate-action@v2.7
         with:
8 .github/workflows/performance.yml (vendored)
@@ -40,11 +40,11 @@ env:
 jobs:
   performance-profile:
     name: Performance Profiling
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     timeout-minutes: 30
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       - name: Setup Rust environment
         uses: ./.github/actions/setup
@@ -115,11 +115,11 @@ jobs:

   benchmark:
     name: Benchmark Tests
-    runs-on: ubuntu-latest
+    runs-on: ubicloud-standard-4
     timeout-minutes: 45
     steps:
       - name: Checkout repository
-        uses: actions/checkout@v5
+        uses: actions/checkout@v6

       - name: Setup Rust environment
         uses: ./.github/actions/setup
12 .gitignore (vendored)
@@ -2,6 +2,7 @@
 .DS_Store
 .idea
 .vscode
+.direnv/
 /test
 /logs
 /data
@@ -23,4 +24,13 @@ profile.json
 *.go
 *.pb
 *.svg
 deploy/logs/*.log.*
+
+# s3-tests local artifacts (root directory only)
+/s3-tests/
+/s3-tests-local/
+/s3tests.conf
+/s3tests.conf.*
+*.events
+*.audit
+*.snappy
32 .pre-commit-config.yaml (new file)
@@ -0,0 +1,32 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
  - repo: local
    hooks:
      - id: cargo-fmt
        name: cargo fmt
        entry: cargo fmt --all --check
        language: system
        types: [rust]
        pass_filenames: false

      - id: cargo-clippy
        name: cargo clippy
        entry: cargo clippy --all-targets --all-features -- -D warnings
        language: system
        types: [rust]
        pass_filenames: false

      - id: cargo-check
        name: cargo check
        entry: cargo check --all-targets
        language: system
        types: [rust]
        pass_filenames: false

      - id: cargo-test
        name: cargo test
        entry: bash -c 'cargo test --workspace --exclude e2e_test && cargo test --all --doc'
        language: system
        types: [rust]
        pass_filenames: false
37 .vscode/launch.json (vendored)
@@ -1,9 +1,31 @@
 {
     // Use IntelliSense to learn about possible attributes.
     // Hover to view descriptions of existing attributes.
     // For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
     "version": "0.2.0",
     "configurations": [
+        {
+            "type": "lldb",
+            "request": "launch",
+            "name": "Debug(only) executable 'rustfs'",
+            "env": {
+                "RUST_LOG": "rustfs=info,ecstore=info,s3s=info,iam=info",
+                "RUSTFS_SKIP_BACKGROUND_TASK": "on"
+                //"RUSTFS_OBS_LOG_DIRECTORY": "./deploy/logs",
+                // "RUSTFS_POLICY_PLUGIN_URL":"http://localhost:8181/v1/data/rustfs/authz/allow",
+                // "RUSTFS_POLICY_PLUGIN_AUTH_TOKEN":"your-opa-token"
+            },
+            "program": "${workspaceFolder}/target/debug/rustfs",
+            "args": [
+                "--access-key",
+                "rustfsadmin",
+                "--secret-key",
+                "rustfsadmin",
+                "--address",
+                "0.0.0.0:9010",
+                "--server-domains",
+                "127.0.0.1:9010",
+                "./target/volume/test{1...4}"
+            ],
+            "cwd": "${workspaceFolder}"
+        },
         {
             "type": "lldb",
             "request": "launch",
@@ -67,12 +89,8 @@
                 "test",
                 "--no-run",
                 "--lib",
-                "--package=ecstore"
-            ],
-            "filter": {
-                "name": "ecstore",
-                "kind": "lib"
-            }
+                "--package=rustfs-ecstore"
+            ]
             },
             "args": [],
             "cwd": "${workspaceFolder}"
@@ -95,6 +113,7 @@
             // "RUSTFS_OBS_TRACE_ENDPOINT": "http://127.0.0.1:4318/v1/traces", // jeager otlp http endpoint
             // "RUSTFS_OBS_METRIC_ENDPOINT": "http://127.0.0.1:4318/v1/metrics", // default otlp http endpoint
             // "RUSTFS_OBS_LOG_ENDPOINT": "http://127.0.0.1:4318/v1/logs", // default otlp http endpoint
             // "RUSTFS_COMPRESS_ENABLE": "true",
+            "RUSTFS_CONSOLE_ADDRESS": "127.0.0.1:9001",
             "RUSTFS_OBS_LOG_DIRECTORY": "./target/logs",
         },
@@ -2,6 +2,7 @@

 ## Communication Rules
 - Respond to the user in Chinese; use English in all other contexts.
+- Code and documentation must be written in English only. Chinese text is allowed solely as test data/fixtures when a case explicitly requires Chinese-language content for validation.

 ## Project Structure & Module Organization
 The workspace root hosts shared dependencies in `Cargo.toml`. The service binary lives under `rustfs/src/main.rs`, while reusable crates sit in `crates/` (`crypto`, `iam`, `kms`, and `e2e_test`). Local fixtures for standalone flows reside in `test_standalone/`, deployment manifests are under `deploy/`, Docker assets sit at the root, and automation lives in `scripts/`. Skim each crate's README or module docs before contributing changes.
@@ -2,6 +2,8 @@

 ## 📋 Code Quality Requirements

+For instructions on setting up and running the local development environment, please see [Development Guide](docs/DEVELOPMENT.md).
+
 ### 🔧 Code Formatting Rules

 **MANDATORY**: All code must be properly formatted before committing. This project enforces strict formatting standards to maintain code consistency and readability.
569 Cargo.lock (generated)
File diff suppressed because it is too large
45
Cargo.toml
45
Cargo.toml
@@ -97,18 +97,19 @@ async-channel = "2.5.0"
async-compression = { version = "0.4.19" }
async-recursion = "1.1.1"
async-trait = "0.1.89"
axum = "0.8.7"
axum-extra = "0.12.2"
axum = "0.8.8"
axum-extra = "0.12.3"
axum-server = { version = "0.8.0", features = ["tls-rustls-no-provider"], default-features = false }
futures = "0.3.31"
futures-core = "0.3.31"
futures-util = "0.3.31"
pollster = "0.4.0"
hyper = { version = "1.8.1", features = ["http2", "http1", "server"] }
hyper-rustls = { version = "0.27.7", default-features = false, features = ["native-tokio", "http1", "tls12", "logging", "http2", "ring", "webpki-roots"] }
hyper-util = { version = "0.1.19", features = ["tokio", "server-auto", "server-graceful"] }
http = "1.4.0"
http-body = "1.0.1"
reqwest = { version = "0.12.25", default-features = false, features = ["rustls-tls-webpki-roots", "charset", "http2", "system-proxy", "stream", "json", "blocking"] }
reqwest = { version = "0.12.26", default-features = false, features = ["rustls-tls-webpki-roots", "charset", "http2", "system-proxy", "stream", "json", "blocking"] }
socket2 = "0.6.1"
tokio = { version = "1.48.0", features = ["fs", "rt-multi-thread"] }
tokio-rustls = { version = "0.26.4", default-features = false, features = ["logging", "tls12", "ring"] }
@@ -125,11 +126,11 @@ tower-http = { version = "0.6.8", features = ["cors"] }
bytes = { version = "1.11.0", features = ["serde"] }
bytesize = "2.3.1"
byteorder = "1.5.0"
flatbuffers = "25.9.23"
flatbuffers = "25.12.19"
form_urlencoded = "1.2.2"
prost = "0.14.1"
quick-xml = "0.38.4"
rmcp = { version = "0.10.0" }
rmcp = { version = "0.12.0" }
rmp = { version = "0.8.14" }
rmp-serde = { version = "1.3.0" }
serde = { version = "1.0.228", features = ["derive"] }
@@ -139,17 +140,17 @@ schemars = "1.1.0"

# Cryptography and Security
aes-gcm = { version = "0.11.0-rc.2", features = ["rand_core"] }
argon2 = { version = "0.6.0-rc.3", features = ["std"] }
argon2 = { version = "0.6.0-rc.5" }
blake3 = { version = "1.8.2", features = ["rayon", "mmap"] }
chacha20poly1305 = { version = "0.11.0-rc.2" }
crc-fast = "1.6.0"
hmac = { version = "0.13.0-rc.3" }
jsonwebtoken = { version = "10.2.0", features = ["rust_crypto"] }
pbkdf2 = "0.13.0-rc.3"
pbkdf2 = "0.13.0-rc.5"
rsa = { version = "0.10.0-rc.10" }
rustls = { version = "0.23.35", features = ["ring", "logging", "std", "tls12"], default-features = false }
rustls-pemfile = "2.2.0"
rustls-pki-types = "1.13.1"
rustls-pki-types = "1.13.2"
sha1 = "0.11.0-rc.3"
sha2 = "0.11.0-rc.3"
subtle = "2.6"
@@ -166,16 +167,16 @@ arc-swap = "1.7.1"
astral-tokio-tar = "0.5.6"
atoi = "2.0.0"
atomic_enum = "0.3.0"
aws-config = { version = "1.8.11" }
aws-credential-types = { version = "1.2.10" }
aws-sdk-s3 = { version = "1.116.0", default-features = false, features = ["sigv4a", "rustls", "rt-tokio"] }
aws-smithy-types = { version = "1.3.4" }
aws-config = { version = "1.8.12" }
aws-credential-types = { version = "1.2.11" }
aws-sdk-s3 = { version = "1.117.0", default-features = false, features = ["sigv4a", "rustls", "rt-tokio"] }
aws-smithy-types = { version = "1.3.5" }
base64 = "0.22.1"
base64-simd = "0.8.0"
brotli = "8.0.2"
cfg-if = "1.0.4"
clap = { version = "4.5.53", features = ["derive", "env"] }
const-str = { version = "0.7.0", features = ["std", "proc"] }
const-str = { version = "0.7.1", features = ["std", "proc"] }
convert_case = "0.10.0"
criterion = { version = "0.8", features = ["html_reports"] }
crossbeam-queue = "0.3.12"
@@ -186,8 +187,8 @@ faster-hex = "0.10.0"
flate2 = "1.1.5"
flexi_logger = { version = "0.31.7", features = ["trc", "dont_minimize_extra_stacks", "compress", "kv", "json"] }
glob = "0.3.3"
google-cloud-storage = "1.4.0"
google-cloud-auth = "1.2.0"
google-cloud-storage = "1.5.0"
google-cloud-auth = "1.3.0"
hashbrown = { version = "0.16.1", features = ["serde", "rayon"] }
heed = { version = "0.22.0" }
hex-simd = "0.8.0"
@@ -196,13 +197,13 @@ ipnetwork = { version = "0.21.1", features = ["serde"] }
lazy_static = "1.5.0"
libc = "0.2.178"
libsystemd = "0.7.2"
local-ip-address = "0.6.6"
local-ip-address = "0.6.8"
lz4 = "1.28.1"
matchit = "0.9.0"
md-5 = "0.11.0-rc.3"
md5 = "0.8.0"
mime_guess = "2.0.5"
moka = { version = "0.12.11", features = ["future"] }
moka = { version = "0.12.12", features = ["future"] }
netif = "0.1.6"
nix = { version = "0.30.1", features = ["fs"] }
nu-ansi-term = "0.50.3"
@@ -221,9 +222,9 @@ regex = { version = "1.12.2" }
rumqttc = { version = "0.25.1" }
rust-embed = { version = "8.9.0" }
rustc-hash = { version = "2.1.1" }
s3s = { version = "0.12.0-rc.4", features = ["minio"] }
s3s = { version = "0.12.0-rc.6", features = ["minio"], git = "https://github.com/s3s-project/s3s.git", branch = "main" }
serial_test = "3.2.0"
shadow-rs = { version = "1.4.0", default-features = false }
shadow-rs = { version = "1.5.0", default-features = false }
siphasher = "1.0.1"
smallvec = { version = "1.15.1", features = ["serde"] }
smartstring = "1.0.1"
@@ -237,7 +238,7 @@ temp-env = "0.3.6"
tempfile = "3.23.0"
test-case = "3.3.1"
thiserror = "2.0.17"
tracing = { version = "0.1.43" }
tracing = { version = "0.1.44" }
tracing-appender = "0.2.4"
tracing-error = "0.2.1"
tracing-opentelemetry = "0.32.0"
@@ -251,7 +252,7 @@ walkdir = "2.5.0"
wildmatch = { version = "2.6.1", features = ["serde"] }
winapi = { version = "0.3.9" }
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
zip = "6.0.0"
zip = "7.0.0"
zstd = "0.13.3"

# Observability and Metrics
@@ -277,7 +278,7 @@ pprof = { version = "0.15.0", features = ["flamegraph", "protobuf-codec"] }


[workspace.metadata.cargo-shear]
ignored = ["rustfs", "rustfs-mcp", "tokio-test"]
ignored = ["rustfs", "rustfs-mcp"]

[profile.release]
opt-level = 3

@@ -39,7 +39,9 @@ RUN set -eux; \
libssl-dev \
lld \
protobuf-compiler \
flatbuffers-compiler; \
flatbuffers-compiler \
gcc-aarch64-linux-gnu \
gcc-x86-64-linux-gnu; \
rm -rf /var/lib/apt/lists/*

# Optional: cross toolchain for aarch64 (only when targeting linux/arm64)
@@ -51,18 +53,18 @@ RUN set -eux; \
rm -rf /var/lib/apt/lists/*; \
fi

# Add Rust targets based on TARGETPLATFORM
# Add Rust targets for both arches (to support cross-builds on multi-arch runners)
RUN set -eux; \
case "${TARGETPLATFORM:-linux/amd64}" in \
linux/amd64) rustup target add x86_64-unknown-linux-gnu ;; \
linux/arm64) rustup target add aarch64-unknown-linux-gnu ;; \
*) echo "Unsupported TARGETPLATFORM=${TARGETPLATFORM}" >&2; exit 1 ;; \
esac
rustup target add x86_64-unknown-linux-gnu aarch64-unknown-linux-gnu; \
rustup component add rust-std-x86_64-unknown-linux-gnu rust-std-aarch64-unknown-linux-gnu

# Cross-compilation environment (used only when targeting aarch64)
ENV CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
ENV CC_aarch64_unknown_linux_gnu=aarch64-linux-gnu-gcc
ENV CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++
ENV CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER=x86_64-linux-gnu-gcc
ENV CC_x86_64_unknown_linux_gnu=x86_64-linux-gnu-gcc
ENV CXX_x86_64_unknown_linux_gnu=x86_64-linux-gnu-g++

WORKDIR /usr/src/rustfs

@@ -72,7 +74,6 @@ COPY Cargo.toml Cargo.lock ./
# 2) workspace member manifests (adjust if workspace layout changes)
COPY rustfs/Cargo.toml rustfs/Cargo.toml
COPY crates/*/Cargo.toml crates/
COPY cli/rustfs-gui/Cargo.toml cli/rustfs-gui/Cargo.toml

# Pre-fetch dependencies for better caching
RUN --mount=type=cache,target=/usr/local/cargo/registry \
@@ -117,6 +118,49 @@ RUN --mount=type=cache,target=/usr/local/cargo/registry \
;; \
esac

# -----------------------------
# Development stage (keeps toolchain)
# -----------------------------
FROM builder AS dev

ARG BUILD_DATE
ARG VCS_REF

LABEL name="RustFS (dev-source)" \
maintainer="RustFS Team" \
build-date="${BUILD_DATE}" \
vcs-ref="${VCS_REF}" \
description="RustFS - local development with Rust toolchain."

# Install runtime dependencies that might be missing in partial builder
# (builder already has build-essential, lld, etc.)
WORKDIR /app

ENV CARGO_INCREMENTAL=1

# Ensure we have the same default env vars available
ENV RUSTFS_ADDRESS=":9000" \
RUSTFS_ACCESS_KEY="rustfsadmin" \
RUSTFS_SECRET_KEY="rustfsadmin" \
RUSTFS_CONSOLE_ENABLE="true" \
RUSTFS_VOLUMES="/data" \
RUST_LOG="warn" \
RUSTFS_OBS_LOG_DIRECTORY="/logs" \
RUSTFS_USERNAME="rustfs" \
RUSTFS_GROUPNAME="rustfs" \
RUSTFS_UID="1000" \
RUSTFS_GID="1000"

# Note: We don't COPY source here because we expect it to be mounted at /app
# We rely on cargo run to build and run
EXPOSE 9000 9001

COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
CMD ["cargo", "run", "--bin", "rustfs", "--"]

# -----------------------------
# Runtime stage (Ubuntu minimal)
# -----------------------------

README.md (19 changed lines)
@@ -103,7 +103,7 @@ The RustFS container runs as a non-root user `rustfs` (UID `10001`). If you run
docker run -d -p 9000:9000 -p 9001:9001 -v $(pwd)/data:/data -v $(pwd)/logs:/logs rustfs/rustfs:latest

# Using specific version
docker run -d -p 9000:9000 -p 9001:9001 -v $(pwd)/data:/data -v $(pwd)/logs:/logs rustfs/rustfs:1.0.0.alpha.68
docker run -d -p 9000:9000 -p 9001:9001 -v $(pwd)/data:/data -v $(pwd)/logs:/logs rustfs/rustfs:1.0.0-alpha.76
```

You can also use Docker Compose. Using the `docker-compose.yml` file in the root directory:
@@ -153,6 +153,23 @@ make help-docker # Show all Docker-related commands

Follow the instructions in the [Helm Chart README](https://charts.rustfs.com/) to install RustFS on a Kubernetes cluster.

### 5\. Nix Flake (Option 5)

If you have [Nix with flakes enabled](https://nixos.wiki/wiki/Flakes#Enable_flakes):

```bash
# Run directly without installing
nix run github:rustfs/rustfs

# Build the binary
nix build github:rustfs/rustfs
./result/bin/rustfs --help

# Or from a local checkout
nix build
nix run
```

-----

### Accessing RustFS

SECURITY.md (13 changed lines)
@@ -2,8 +2,7 @@

## Supported Versions

Use this section to tell people about which versions of your project are
currently being supported with security updates.
Security updates are provided for the latest released version of this project.

| Version | Supported |
| ------- | ------------------ |
@@ -11,8 +10,10 @@ currently being supported with security updates.

## Reporting a Vulnerability

Use this section to tell people how to report a vulnerability.
Please report security vulnerabilities **privately** via GitHub Security Advisories:

Tell them where to go, how often they can expect to get an update on a
reported vulnerability, what to expect if the vulnerability is accepted or
declined, etc.
https://github.com/rustfs/rustfs/security/advisories/new

Do **not** open a public issue for security-sensitive bugs.

You can expect an initial response within a reasonable timeframe. Further updates will be provided as the report is triaged.

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash

# RustFS Binary Build Script
# This script compiles RustFS binaries for different platforms and architectures

@@ -29,6 +29,7 @@ categories = ["web-programming", "development-tools", "asynchronous", "api-bindi
rustfs-targets = { workspace = true }
rustfs-config = { workspace = true, features = ["audit", "constants"] }
rustfs-ecstore = { workspace = true }
async-trait = { workspace = true }
chrono = { workspace = true }
const-str = { workspace = true }
futures = { workspace = true }

crates/audit/src/factory.rs (new file, 223 lines)
@@ -0,0 +1,223 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use crate::AuditEntry;
use async_trait::async_trait;
use hashbrown::HashSet;
use rumqttc::QoS;
use rustfs_config::audit::{AUDIT_MQTT_KEYS, AUDIT_WEBHOOK_KEYS, ENV_AUDIT_MQTT_KEYS, ENV_AUDIT_WEBHOOK_KEYS};
use rustfs_config::{
    AUDIT_DEFAULT_DIR, DEFAULT_LIMIT, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR,
    MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT,
    WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
};
use rustfs_ecstore::config::KVS;
use rustfs_targets::{
    Target,
    error::TargetError,
    target::{mqtt::MQTTArgs, webhook::WebhookArgs},
};
use std::time::Duration;
use tracing::{debug, warn};
use url::Url;

/// Trait for creating targets from configuration
#[async_trait]
pub trait TargetFactory: Send + Sync {
    /// Creates a target from configuration
    async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError>;

    /// Validates target configuration
    fn validate_config(&self, id: &str, config: &KVS) -> Result<(), TargetError>;

    /// Returns a set of valid configuration field names for this target type.
    /// This is used to filter environment variables.
    fn get_valid_fields(&self) -> HashSet<String>;

    /// Returns a set of valid configuration env field names for this target type.
    /// This is used to filter environment variables.
    fn get_valid_env_fields(&self) -> HashSet<String>;
}

/// Factory for creating Webhook targets
pub struct WebhookTargetFactory;

#[async_trait]
impl TargetFactory for WebhookTargetFactory {
    async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError> {
        // All config values are now read directly from the merged `config` KVS.
        let endpoint = config
            .lookup(WEBHOOK_ENDPOINT)
            .ok_or_else(|| TargetError::Configuration("Missing webhook endpoint".to_string()))?;
        let endpoint_url = Url::parse(&endpoint)
            .map_err(|e| TargetError::Configuration(format!("Invalid endpoint URL: {e} (value: '{endpoint}')")))?;

        let args = WebhookArgs {
            enable: true, // If we are here, it's already enabled.
            endpoint: endpoint_url,
            auth_token: config.lookup(WEBHOOK_AUTH_TOKEN).unwrap_or_default(),
            queue_dir: config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(AUDIT_DEFAULT_DIR.to_string()),
            queue_limit: config
                .lookup(WEBHOOK_QUEUE_LIMIT)
                .and_then(|v| v.parse::<u64>().ok())
                .unwrap_or(DEFAULT_LIMIT),
            client_cert: config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default(),
            client_key: config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default(),
            target_type: rustfs_targets::target::TargetType::AuditLog,
        };

        let target = rustfs_targets::target::webhook::WebhookTarget::new(id, args)?;
        Ok(Box::new(target))
    }

    fn validate_config(&self, _id: &str, config: &KVS) -> Result<(), TargetError> {
        // Validation also uses the merged `config` KVS directly.
        let endpoint = config
            .lookup(WEBHOOK_ENDPOINT)
            .ok_or_else(|| TargetError::Configuration("Missing webhook endpoint".to_string()))?;
        debug!("endpoint: {}", endpoint);
        let parsed_endpoint = endpoint.trim();
        Url::parse(parsed_endpoint)
            .map_err(|e| TargetError::Configuration(format!("Invalid endpoint URL: {e} (value: '{parsed_endpoint}')")))?;

        let client_cert = config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default();
        let client_key = config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default();

        if client_cert.is_empty() != client_key.is_empty() {
            return Err(TargetError::Configuration(
                "Both client_cert and client_key must be specified together".to_string(),
            ));
        }

        let queue_dir = config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(AUDIT_DEFAULT_DIR.to_string());
        if !queue_dir.is_empty() && !std::path::Path::new(&queue_dir).is_absolute() {
            return Err(TargetError::Configuration("Webhook queue directory must be an absolute path".to_string()));
        }

        Ok(())
    }

    fn get_valid_fields(&self) -> HashSet<String> {
        AUDIT_WEBHOOK_KEYS.iter().map(|s| s.to_string()).collect()
    }

    fn get_valid_env_fields(&self) -> HashSet<String> {
        ENV_AUDIT_WEBHOOK_KEYS.iter().map(|s| s.to_string()).collect()
    }
}

/// Factory for creating MQTT targets
pub struct MQTTTargetFactory;

#[async_trait]
impl TargetFactory for MQTTTargetFactory {
    async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError> {
        let broker = config
            .lookup(MQTT_BROKER)
            .ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?;
        let broker_url = Url::parse(&broker)
            .map_err(|e| TargetError::Configuration(format!("Invalid broker URL: {e} (value: '{broker}')")))?;

        let topic = config
            .lookup(MQTT_TOPIC)
            .ok_or_else(|| TargetError::Configuration("Missing MQTT topic".to_string()))?;

        let args = MQTTArgs {
            enable: true, // Assumed enabled.
            broker: broker_url,
            topic,
            qos: config
                .lookup(MQTT_QOS)
                .and_then(|v| v.parse::<u8>().ok())
                .map(|q| match q {
                    0 => QoS::AtMostOnce,
                    1 => QoS::AtLeastOnce,
                    2 => QoS::ExactlyOnce,
                    _ => QoS::AtLeastOnce,
                })
                .unwrap_or(QoS::AtLeastOnce),
            username: config.lookup(MQTT_USERNAME).unwrap_or_default(),
            password: config.lookup(MQTT_PASSWORD).unwrap_or_default(),
            max_reconnect_interval: config
                .lookup(MQTT_RECONNECT_INTERVAL)
                .and_then(|v| v.parse::<u64>().ok())
                .map(Duration::from_secs)
                .unwrap_or_else(|| Duration::from_secs(5)),
            keep_alive: config
                .lookup(MQTT_KEEP_ALIVE_INTERVAL)
                .and_then(|v| v.parse::<u64>().ok())
                .map(Duration::from_secs)
                .unwrap_or_else(|| Duration::from_secs(30)),
            queue_dir: config.lookup(MQTT_QUEUE_DIR).unwrap_or(AUDIT_DEFAULT_DIR.to_string()),
            queue_limit: config
                .lookup(MQTT_QUEUE_LIMIT)
                .and_then(|v| v.parse::<u64>().ok())
                .unwrap_or(DEFAULT_LIMIT),
            target_type: rustfs_targets::target::TargetType::AuditLog,
        };

        let target = rustfs_targets::target::mqtt::MQTTTarget::new(id, args)?;
        Ok(Box::new(target))
    }

    fn validate_config(&self, _id: &str, config: &KVS) -> Result<(), TargetError> {
        let broker = config
            .lookup(MQTT_BROKER)
            .ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?;
        let url = Url::parse(&broker)
            .map_err(|e| TargetError::Configuration(format!("Invalid broker URL: {e} (value: '{broker}')")))?;

        match url.scheme() {
            "tcp" | "ssl" | "ws" | "wss" | "mqtt" | "mqtts" => {}
            _ => {
                return Err(TargetError::Configuration("Unsupported broker URL scheme".to_string()));
            }
        }

        if config.lookup(MQTT_TOPIC).is_none() {
            return Err(TargetError::Configuration("Missing MQTT topic".to_string()));
        }

        if let Some(qos_str) = config.lookup(MQTT_QOS) {
            let qos = qos_str
                .parse::<u8>()
                .map_err(|_| TargetError::Configuration("Invalid QoS value".to_string()))?;
            if qos > 2 {
                return Err(TargetError::Configuration("QoS must be 0, 1, or 2".to_string()));
            }
        }

        let queue_dir = config.lookup(MQTT_QUEUE_DIR).unwrap_or_default();
        if !queue_dir.is_empty() {
            if !std::path::Path::new(&queue_dir).is_absolute() {
                return Err(TargetError::Configuration("MQTT queue directory must be an absolute path".to_string()));
            }
            if let Some(qos_str) = config.lookup(MQTT_QOS) {
                if qos_str == "0" {
                    warn!("Using queue_dir with QoS 0 may result in event loss");
                }
            }
        }

        Ok(())
    }

    fn get_valid_fields(&self) -> HashSet<String> {
        AUDIT_MQTT_KEYS.iter().map(|s| s.to_string()).collect()
    }

    fn get_valid_env_fields(&self) -> HashSet<String> {
        ENV_AUDIT_MQTT_KEYS.iter().map(|s| s.to_string()).collect()
    }
}
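
`TargetFactory` is the extension point this commit introduces for new audit sinks. As a minimal sketch of how an additional sink could plug into the same scheme: `KafkaTargetFactory`, its `"brokers"`/`"topic"` keys, and the `"kafka"` registration name below are hypothetical illustrations, not existing RustFS APIs, and the imports from `factory.rs` above are assumed.

```rust
/// A hypothetical factory for a third-party sink (sketch only).
struct KafkaTargetFactory;

#[async_trait]
impl TargetFactory for KafkaTargetFactory {
    async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError> {
        // Validate first, then build and box the concrete target (elided in this sketch).
        self.validate_config(&id, config)?;
        todo!("construct the concrete target from the merged config")
    }

    fn validate_config(&self, _id: &str, config: &KVS) -> Result<(), TargetError> {
        // Reject the instance early if the mandatory key is absent.
        config
            .lookup("brokers")
            .map(|_| ())
            .ok_or_else(|| TargetError::Configuration("Missing Kafka brokers".to_string()))
    }

    fn get_valid_fields(&self) -> HashSet<String> {
        ["enable", "brokers", "topic"].iter().map(|s| s.to_string()).collect()
    }

    fn get_valid_env_fields(&self) -> HashSet<String> {
        self.get_valid_fields()
    }
}

// Registration mirrors what `AuditRegistry::new()` does for the built-in
// webhook and MQTT factories (see the registry diff below):
//
//     let mut registry = AuditRegistry::new();
//     registry.register("kafka", Box::new(KafkaTargetFactory));
```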
@@ -20,6 +20,7 @@

pub mod entity;
pub mod error;
pub mod factory;
pub mod global;
pub mod observability;
pub mod registry;

@@ -12,29 +12,26 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use crate::{AuditEntry, AuditError, AuditResult};
use futures::{StreamExt, stream::FuturesUnordered};
use crate::{
AuditEntry, AuditError, AuditResult,
factory::{MQTTTargetFactory, TargetFactory, WebhookTargetFactory},
};
use futures::StreamExt;
use futures::stream::FuturesUnordered;
use hashbrown::{HashMap, HashSet};
use rustfs_config::{
DEFAULT_DELIMITER, ENABLE_KEY, ENV_PREFIX, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR,
MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, WEBHOOK_BATCH_SIZE,
WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_HTTP_TIMEOUT, WEBHOOK_MAX_RETRY, WEBHOOK_QUEUE_DIR,
WEBHOOK_QUEUE_LIMIT, WEBHOOK_RETRY_INTERVAL, audit::AUDIT_ROUTE_PREFIX,
};
use rustfs_config::{DEFAULT_DELIMITER, ENABLE_KEY, ENV_PREFIX, EnableState, audit::AUDIT_ROUTE_PREFIX};
use rustfs_ecstore::config::{Config, KVS};
use rustfs_targets::{
Target, TargetError,
target::{ChannelTargetType, TargetType, mqtt::MQTTArgs, webhook::WebhookArgs},
};
use rustfs_targets::{Target, TargetError, target::ChannelTargetType};
use std::str::FromStr;
use std::sync::Arc;
use std::time::Duration;
use tracing::{debug, error, info, warn};
use url::Url;

/// Registry for managing audit targets
pub struct AuditRegistry {
/// Storage for created targets
targets: HashMap<String, Box<dyn Target<AuditEntry> + Send + Sync>>,
/// Factories for creating targets
factories: HashMap<String, Box<dyn TargetFactory>>,
}

impl Default for AuditRegistry {
@@ -46,162 +43,207 @@ impl Default for AuditRegistry {
impl AuditRegistry {
/// Creates a new AuditRegistry
pub fn new() -> Self {
Self { targets: HashMap::new() }
let mut registry = AuditRegistry {
factories: HashMap::new(),
targets: HashMap::new(),
};

// Register built-in factories
registry.register(ChannelTargetType::Webhook.as_str(), Box::new(WebhookTargetFactory));
registry.register(ChannelTargetType::Mqtt.as_str(), Box::new(MQTTTargetFactory));

registry
}

/// Creates all audit targets from system configuration and environment variables.
/// Registers a new factory for a target type
///
/// # Arguments
/// * `target_type` - The type of the target (e.g., "webhook", "mqtt").
/// * `factory` - The factory instance to create targets of this type.
pub fn register(&mut self, target_type: &str, factory: Box<dyn TargetFactory>) {
self.factories.insert(target_type.to_string(), factory);
}

/// Creates a target of the specified type with the given ID and configuration
///
/// # Arguments
/// * `target_type` - The type of the target (e.g., "webhook", "mqtt").
/// * `id` - The identifier for the target instance.
/// * `config` - The configuration key-value store for the target.
///
/// # Returns
/// * `Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError>` - The created target or an error.
pub async fn create_target(
&self,
target_type: &str,
id: String,
config: &KVS,
) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError> {
let factory = self
.factories
.get(target_type)
.ok_or_else(|| TargetError::Configuration(format!("Unknown target type: {target_type}")))?;

// Validate configuration before creating target
factory.validate_config(&id, config)?;

// Create target
factory.create_target(id, config).await
}

/// Creates all targets from a configuration
/// Create all notification targets from system configuration and environment variables.
/// This method processes the creation of each target concurrently as follows:
/// 1. Iterate through supported target types (webhook, mqtt).
/// 2. For each type, resolve its configuration from file and environment variables.
/// 1. Iterate through all registered target types (e.g. webhook, mqtt).
/// 2. For each type, resolve its configuration from the configuration file and environment variables.
/// 3. Identify all target instance IDs that need to be created.
/// 4. Merge configurations with precedence: ENV > file instance > file default.
/// 5. Create async tasks for enabled instances.
/// 6. Execute tasks concurrently and collect successful targets.
/// 7. Persist successful configurations back to system storage.
pub async fn create_targets_from_config(
&mut self,
/// 4. Combine the default configuration, file configuration, and environment variable configuration for each instance.
/// 5. If the instance is enabled, create an asynchronous task to instantiate it.
/// 6. Execute all creation tasks concurrently and collect the results.
pub async fn create_audit_targets_from_config(
&self,
config: &Config,
) -> AuditResult<Vec<Box<dyn Target<AuditEntry> + Send + Sync>>> {
// Collect only environment variables with the relevant prefix to reduce memory usage
let all_env: Vec<(String, String)> = std::env::vars().filter(|(key, _)| key.starts_with(ENV_PREFIX)).collect();

// A collection of asynchronous tasks for concurrently executing target creation
let mut tasks = FuturesUnordered::new();
// let final_config = config.clone();

// let final_config = config.clone(); // Clone a configuration for aggregating the final result
// Record the defaults for each section so that the section can eventually be rebuilt
let mut section_defaults: HashMap<String, KVS> = HashMap::new();

// Supported target types for audit
let target_types = vec![ChannelTargetType::Webhook.as_str(), ChannelTargetType::Mqtt.as_str()];

// 1. Traverse all target types and process them
for target_type in target_types {
let span = tracing::Span::current();
span.record("target_type", target_type);
info!(target_type = %target_type, "Starting audit target type processing");
// 1. Traverse all registered factories and process them by target type
for (target_type, factory) in &self.factories {
tracing::Span::current().record("target_type", target_type.as_str());
info!("Processing target type...");

// 2. Prepare the configuration source
// 2.1. Get the configuration section in the file, e.g. 'audit_webhook'
let section_name = format!("{AUDIT_ROUTE_PREFIX}{target_type}").to_lowercase();
let file_configs = config.0.get(&section_name).cloned().unwrap_or_default();
// 2.2. Get the default configuration for that type
let default_cfg = file_configs.get(DEFAULT_DELIMITER).cloned().unwrap_or_default();
debug!(?default_cfg, "Retrieved default configuration");
debug!(?default_cfg, "Got the default configuration");

// Save defaults for eventual write back
section_defaults.insert(section_name.clone(), default_cfg.clone());

// Get valid fields for the target type
let valid_fields = match target_type {
"webhook" => get_webhook_valid_fields(),
"mqtt" => get_mqtt_valid_fields(),
_ => {
warn!(target_type = %target_type, "Unknown target type, skipping");
continue;
}
};
debug!(?valid_fields, "Retrieved valid configuration fields");
// *** Optimization point 1: Get all valid fields of the current target type ***
let valid_fields = factory.get_valid_fields();
debug!(?valid_fields, "Got the valid configuration fields");

// 3. Resolve instance IDs and configuration overrides from environment variables
let mut instance_ids_from_env = HashSet::new();
let mut env_overrides: HashMap<String, HashMap<String, String>> = HashMap::new();

for (env_key, env_value) in &all_env {
let audit_prefix = format!("{ENV_PREFIX}{AUDIT_ROUTE_PREFIX}{target_type}").to_uppercase();
if !env_key.starts_with(&audit_prefix) {
continue;
}

let suffix = &env_key[audit_prefix.len()..];
if suffix.is_empty() {
continue;
}

// Parse field and instance from suffix (FIELD_INSTANCE or FIELD)
let (field_name, instance_id) = if let Some(last_underscore) = suffix.rfind('_') {
let potential_field = &suffix[1..last_underscore]; // Skip leading _
let potential_instance = &suffix[last_underscore + 1..];

// Check if the part before the last underscore is a valid field
if valid_fields.contains(&potential_field.to_lowercase()) {
(potential_field.to_lowercase(), potential_instance.to_lowercase())
} else {
// Treat the entire suffix as field name with default instance
(suffix[1..].to_lowercase(), DEFAULT_DELIMITER.to_string())
// 3.1. Instance discovery: based on the '..._ENABLE_INSTANCEID' format
let enable_prefix =
format!("{ENV_PREFIX}{AUDIT_ROUTE_PREFIX}{target_type}{DEFAULT_DELIMITER}{ENABLE_KEY}{DEFAULT_DELIMITER}")
.to_uppercase();
for (key, value) in &all_env {
if EnableState::from_str(value).ok().map(|s| s.is_enabled()).unwrap_or(false) {
if let Some(id) = key.strip_prefix(&enable_prefix) {
if !id.is_empty() {
instance_ids_from_env.insert(id.to_lowercase());
}
}
} else {
// No underscore, treat as field with default instance
(suffix[1..].to_lowercase(), DEFAULT_DELIMITER.to_string())
};

if valid_fields.contains(&field_name) {
if instance_id != DEFAULT_DELIMITER {
instance_ids_from_env.insert(instance_id.clone());
}
env_overrides
.entry(instance_id)
.or_default()
.insert(field_name, env_value.clone());
} else {
debug!(
env_key = %env_key,
field_name = %field_name,
"Ignoring environment variable field not found in valid fields for target type {}",
target_type
);
}
}
debug!(?env_overrides, "Completed environment variable analysis");

// 3.2. Parse all relevant environment variable configurations
// 3.2.1. Build environment variable prefixes such as 'RUSTFS_AUDIT_WEBHOOK_'
let env_prefix = format!("{ENV_PREFIX}{AUDIT_ROUTE_PREFIX}{target_type}{DEFAULT_DELIMITER}").to_uppercase();
// 3.2.2. 'env_overrides' is used to store configurations parsed from environment variables in the format: {instance id -> {field -> value}}
let mut env_overrides: HashMap<String, HashMap<String, String>> = HashMap::new();
for (key, value) in &all_env {
if let Some(rest) = key.strip_prefix(&env_prefix) {
// Use rsplitn to split from the right side to properly extract the INSTANCE_ID at the end
// Format: <FIELD_NAME>_<INSTANCE_ID> or <FIELD_NAME>
let mut parts = rest.rsplitn(2, DEFAULT_DELIMITER);

// The first part from the right is INSTANCE_ID
let instance_id_part = parts.next().unwrap_or(DEFAULT_DELIMITER);
// The remaining part is FIELD_NAME
let field_name_part = parts.next();

let (field_name, instance_id) = match field_name_part {
// Case 1: The format is <FIELD_NAME>_<INSTANCE_ID>
// e.g., rest = "ENDPOINT_PRIMARY" -> field_name="endpoint", instance_id="primary"
Some(field) => (field.to_lowercase(), instance_id_part.to_lowercase()),
// Case 2: The format is <FIELD_NAME> (without INSTANCE_ID)
// e.g., rest = "ENABLE" -> field_name="enable", instance_id=DEFAULT_DELIMITER (the universal configuration key "_")
None => (instance_id_part.to_lowercase(), DEFAULT_DELIMITER.to_string()),
};

// *** Optimization point 2: Verify whether the parsed field_name is valid ***
if !field_name.is_empty() && valid_fields.contains(&field_name) {
debug!(
instance_id = %if instance_id.is_empty() { DEFAULT_DELIMITER } else { &instance_id },
%field_name,
%value,
"Parsed environment variable"
);
env_overrides
.entry(instance_id)
.or_default()
.insert(field_name, value.clone());
} else {
// Ignore invalid field names
warn!(
field_name = %field_name,
"Ignoring environment variable field; not found in the list of valid fields for target type {}",
target_type
);
}
}
}
debug!(?env_overrides, "Environment variable analysis completed");

// 4. Determine all instance IDs that need to be processed
let mut all_instance_ids: HashSet<String> =
file_configs.keys().filter(|k| *k != DEFAULT_DELIMITER).cloned().collect();
all_instance_ids.extend(instance_ids_from_env);
debug!(?all_instance_ids, "Determined all instance IDs");
debug!(?all_instance_ids, "All instance IDs determined");

// 5. Merge configurations and create tasks for each instance
for id in all_instance_ids {
// 5.1. Merge configuration, priority: Environment variables > File instance > File default
// 5.1. Merge configuration, priority: Environment variables > File instance configuration > File default configuration
let mut merged_config = default_cfg.clone();

// Apply file instance configuration if available
// Apply instance-specific configuration from the file
if let Some(file_instance_cfg) = file_configs.get(&id) {
merged_config.extend(file_instance_cfg.clone());
}

// Apply environment variable overrides
// Apply instance-specific environment variable overrides
if let Some(env_instance_cfg) = env_overrides.get(&id) {
// Convert HashMap<String, String> to KVS
let mut kvs_from_env = KVS::new();
for (k, v) in env_instance_cfg {
kvs_from_env.insert(k.clone(), v.clone());
}
merged_config.extend(kvs_from_env);
}
debug!(instance_id = %id, ?merged_config, "Completed configuration merge");
debug!(instance_id = %id, ?merged_config, "Configuration merge completed");

// 5.2. Check if the instance is enabled
let enabled = merged_config
.lookup(ENABLE_KEY)
.map(|v| parse_enable_value(&v))
.map(|v| {
EnableState::from_str(v.as_str())
.ok()
.map(|s| s.is_enabled())
.unwrap_or(false)
})
.unwrap_or(false);

if enabled {
info!(instance_id = %id, "Creating audit target");

// Create task for concurrent execution
let target_type_clone = target_type.to_string();
let id_clone = id.clone();
let merged_config_arc = Arc::new(merged_config.clone());
let task = tokio::spawn(async move {
let result = create_audit_target(&target_type_clone, &id_clone, &merged_config_arc).await;
(target_type_clone, id_clone, result, merged_config_arc)
info!(instance_id = %id, "Target is enabled, ready to create a task");
// 5.3. Create asynchronous tasks for enabled instances
let target_type_clone = target_type.clone();
let tid = id.clone();
let merged_config_arc = Arc::new(merged_config);
tasks.push(async move {
let result = factory.create_target(tid.clone(), &merged_config_arc).await;
(target_type_clone, tid, result, Arc::clone(&merged_config_arc))
});

tasks.push(task);

// Update final config with successful instance
// final_config.0.entry(section_name.clone()).or_default().insert(id, merged_config);
} else {
info!(instance_id = %id, "Skipping disabled audit target, will be removed from final configuration");
info!(instance_id = %id, "Skipping disabled target; it will be removed from the final configuration");
// Remove disabled target from final configuration
// final_config.0.entry(section_name.clone()).or_default().remove(&id);
}
@@ -211,30 +253,28 @@ impl AuditRegistry {
// 6. Concurrently execute all creation tasks and collect results
let mut successful_targets = Vec::new();
let mut successful_configs = Vec::new();
while let Some(task_result) = tasks.next().await {
match task_result {
Ok((target_type, id, result, kvs_arc)) => match result {
Ok(target) => {
info!(target_type = %target_type, instance_id = %id, "Created audit target successfully");
successful_targets.push(target);
successful_configs.push((target_type, id, kvs_arc));
}
Err(e) => {
error!(target_type = %target_type, instance_id = %id, error = %e, "Failed to create audit target");
}
},
while let Some((target_type, id, result, final_config)) = tasks.next().await {
match result {
Ok(target) => {
info!(target_type = %target_type, instance_id = %id, "Created target successfully");
successful_targets.push(target);
successful_configs.push((target_type, id, final_config));
}
Err(e) => {
error!(error = %e, "Task execution failed");
error!(target_type = %target_type, instance_id = %id, error = %e, "Failed to create a target");
}
}
}

// Rebuild each section from "default entry + successful instances" and overwrite it on write-back, so that deleted/disabled instances are not "resurrected"
// 7. Aggregate new configuration and write back to system configuration
if !successful_configs.is_empty() || !section_defaults.is_empty() {
info!("Prepare to rebuild and save target configurations to the system configuration...");
info!(
"Preparing to update {} successfully created target configurations in the system configuration...",
successful_configs.len()
);

// Aggregate successful instances into sections
let mut successes_by_section: HashMap<String, HashMap<String, KVS>> = HashMap::new();

for (target_type, id, kvs) in successful_configs {
let section_name = format!("{AUDIT_ROUTE_PREFIX}{target_type}").to_lowercase();
successes_by_section
@@ -244,76 +284,99 @@ impl AuditRegistry {
}

let mut new_config = config.clone();

// Sections to process: collect every section that has a default entry or at least one successful instance
let mut sections: HashSet<String> = HashSet::new();
sections.extend(section_defaults.keys().cloned());
sections.extend(successes_by_section.keys().cloned());

for section_name in sections {
for section in sections {
let mut section_map: std::collections::HashMap<String, KVS> = std::collections::HashMap::new();

// The default entry (if present) is written back to `_`
if let Some(default_cfg) = section_defaults.get(&section_name) {
if !default_cfg.is_empty() {
section_map.insert(DEFAULT_DELIMITER.to_string(), default_cfg.clone());
// Add default item
if let Some(default_kvs) = section_defaults.get(&section) {
if !default_kvs.is_empty() {
section_map.insert(DEFAULT_DELIMITER.to_string(), default_kvs.clone());
}
}

// Successful instance write back
if let Some(instances) = successes_by_section.get(&section_name) {
// Add successful instance item
if let Some(instances) = successes_by_section.get(&section) {
for (id, kvs) in instances {
section_map.insert(id.clone(), kvs.clone());
}
}

// Empty sections are removed and non-empty sections are replaced as a whole.
// Empty sections are removed and non-empty sections are replaced entirely.
if section_map.is_empty() {
new_config.0.remove(&section_name);
new_config.0.remove(&section);
} else {
new_config.0.insert(section_name, section_map);
new_config.0.insert(section, section_map);
}
}

// 7. Save the new configuration to the system
let Some(store) = rustfs_ecstore::new_object_layer_fn() else {
let Some(store) = rustfs_ecstore::global::new_object_layer_fn() else {
return Err(AuditError::StorageNotAvailable(
"Failed to save target configuration: server storage not initialized".to_string(),
));
};

match rustfs_ecstore::config::com::save_server_config(store, &new_config).await {
Ok(_) => info!("New audit configuration saved to system successfully"),
Ok(_) => {
info!("The new configuration was saved to the system successfully.")
}
Err(e) => {
error!(error = %e, "Failed to save new audit configuration");
error!("Failed to save the new configuration: {}", e);
return Err(AuditError::SaveConfig(Box::new(e)));
}
}
}

info!(count = successful_targets.len(), "All target processing completed");
Ok(successful_targets)
}

/// Adds a target to the registry
///
/// # Arguments
/// * `id` - The identifier for the target.
/// * `target` - The target instance to be added.
pub fn add_target(&mut self, id: String, target: Box<dyn Target<AuditEntry> + Send + Sync>) {
self.targets.insert(id, target);
}

/// Removes a target from the registry
///
/// # Arguments
/// * `id` - The identifier for the target to be removed.
///
/// # Returns
/// * `Option<Box<dyn Target<AuditEntry> + Send + Sync>>` - The removed target if it existed.
pub fn remove_target(&mut self, id: &str) -> Option<Box<dyn Target<AuditEntry> + Send + Sync>> {
self.targets.remove(id)
}

/// Gets a target from the registry
///
/// # Arguments
/// * `id` - The identifier for the target to be retrieved.
///
/// # Returns
/// * `Option<&(dyn Target<AuditEntry> + Send + Sync)>` - The target if it exists.
pub fn get_target(&self, id: &str) -> Option<&(dyn Target<AuditEntry> + Send + Sync)> {
self.targets.get(id).map(|t| t.as_ref())
}

/// Lists all target IDs
///
/// # Returns
/// * `Vec<String>` - A vector of all target IDs in the registry.
pub fn list_targets(&self) -> Vec<String> {
self.targets.keys().cloned().collect()
}

/// Closes all targets and clears the registry
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure.
pub async fn close_all(&mut self) -> AuditResult<()> {
let mut errors = Vec::new();

@@ -331,152 +394,3 @@ impl AuditRegistry {
Ok(())
}
}

/// Creates an audit target based on type and configuration
async fn create_audit_target(
target_type: &str,
id: &str,
config: &KVS,
) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError> {
match target_type {
val if val == ChannelTargetType::Webhook.as_str() => {
let args = parse_webhook_args(id, config)?;
let target = rustfs_targets::target::webhook::WebhookTarget::new(id.to_string(), args)?;
Ok(Box::new(target))
}
val if val == ChannelTargetType::Mqtt.as_str() => {
let args = parse_mqtt_args(id, config)?;
let target = rustfs_targets::target::mqtt::MQTTTarget::new(id.to_string(), args)?;
Ok(Box::new(target))
}
_ => Err(TargetError::Configuration(format!("Unknown target type: {target_type}"))),
}
}

/// Gets valid field names for webhook configuration
fn get_webhook_valid_fields() -> HashSet<String> {
vec![
ENABLE_KEY.to_string(),
WEBHOOK_ENDPOINT.to_string(),
WEBHOOK_AUTH_TOKEN.to_string(),
WEBHOOK_CLIENT_CERT.to_string(),
WEBHOOK_CLIENT_KEY.to_string(),
WEBHOOK_BATCH_SIZE.to_string(),
WEBHOOK_QUEUE_LIMIT.to_string(),
WEBHOOK_QUEUE_DIR.to_string(),
WEBHOOK_MAX_RETRY.to_string(),
WEBHOOK_RETRY_INTERVAL.to_string(),
WEBHOOK_HTTP_TIMEOUT.to_string(),
]
.into_iter()
.collect()
}

/// Gets valid field names for MQTT configuration
fn get_mqtt_valid_fields() -> HashSet<String> {
vec![
ENABLE_KEY.to_string(),
MQTT_BROKER.to_string(),
MQTT_TOPIC.to_string(),
MQTT_USERNAME.to_string(),
MQTT_PASSWORD.to_string(),
MQTT_QOS.to_string(),
MQTT_KEEP_ALIVE_INTERVAL.to_string(),
MQTT_RECONNECT_INTERVAL.to_string(),
MQTT_QUEUE_DIR.to_string(),
MQTT_QUEUE_LIMIT.to_string(),
]
.into_iter()
.collect()
}

/// Parses webhook arguments from KVS configuration
fn parse_webhook_args(_id: &str, config: &KVS) -> Result<WebhookArgs, TargetError> {
let endpoint = config
.lookup(WEBHOOK_ENDPOINT)
.filter(|s| !s.is_empty())
.ok_or_else(|| TargetError::Configuration("webhook endpoint is required".to_string()))?;

let endpoint_url =
Url::parse(&endpoint).map_err(|e| TargetError::Configuration(format!("invalid webhook endpoint URL: {e}")))?;

let args = WebhookArgs {
enable: true, // Already validated as enabled
endpoint: endpoint_url,
auth_token: config.lookup(WEBHOOK_AUTH_TOKEN).unwrap_or_default(),
queue_dir: config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or_default(),
queue_limit: config
.lookup(WEBHOOK_QUEUE_LIMIT)
.and_then(|s| s.parse().ok())
.unwrap_or(100000),
client_cert: config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default(),
client_key: config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default(),
target_type: TargetType::AuditLog,
};

args.validate()?;
Ok(args)
}

/// Parses MQTT arguments from KVS configuration
fn parse_mqtt_args(_id: &str, config: &KVS) -> Result<MQTTArgs, TargetError> {
let broker = config
.lookup(MQTT_BROKER)
.filter(|s| !s.is_empty())
.ok_or_else(|| TargetError::Configuration("MQTT broker is required".to_string()))?;

let broker_url = Url::parse(&broker).map_err(|e| TargetError::Configuration(format!("invalid MQTT broker URL: {e}")))?;

let topic = config
.lookup(MQTT_TOPIC)
.filter(|s| !s.is_empty())
.ok_or_else(|| TargetError::Configuration("MQTT topic is required".to_string()))?;

let qos = config
.lookup(MQTT_QOS)
.and_then(|s| s.parse::<u8>().ok())
.and_then(|q| match q {
0 => Some(rumqttc::QoS::AtMostOnce),
1 => Some(rumqttc::QoS::AtLeastOnce),
2 => Some(rumqttc::QoS::ExactlyOnce),
_ => None,
})
.unwrap_or(rumqttc::QoS::AtLeastOnce);

let args = MQTTArgs {
enable: true, // Already validated as enabled
broker: broker_url,
topic,
qos,
username: config.lookup(MQTT_USERNAME).unwrap_or_default(),
password: config.lookup(MQTT_PASSWORD).unwrap_or_default(),
max_reconnect_interval: parse_duration(&config.lookup(MQTT_RECONNECT_INTERVAL).unwrap_or_else(|| "5s".to_string()))
.unwrap_or(Duration::from_secs(5)),
keep_alive: parse_duration(&config.lookup(MQTT_KEEP_ALIVE_INTERVAL).unwrap_or_else(|| "60s".to_string()))
.unwrap_or(Duration::from_secs(60)),
queue_dir: config.lookup(MQTT_QUEUE_DIR).unwrap_or_default(),
queue_limit: config.lookup(MQTT_QUEUE_LIMIT).and_then(|s| s.parse().ok()).unwrap_or(100000),
target_type: TargetType::AuditLog,
};

args.validate()?;
Ok(args)
}

/// Parses enable value from string
fn parse_enable_value(value: &str) -> bool {
matches!(value.to_lowercase().as_str(), "1" | "on" | "true" | "yes")
}

/// Parses duration from string (e.g., "3s", "5m")
fn parse_duration(s: &str) -> Option<Duration> {
if let Some(stripped) = s.strip_suffix('s') {
stripped.parse::<u64>().ok().map(Duration::from_secs)
} else if let Some(stripped) = s.strip_suffix('m') {
stripped.parse::<u64>().ok().map(|m| Duration::from_secs(m * 60))
} else if let Some(stripped) = s.strip_suffix("ms") {
stripped.parse::<u64>().ok().map(Duration::from_millis)
} else {
s.parse::<u64>().ok().map(Duration::from_secs)
}
}

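The environment-variable convention the registry parses above is `RUSTFS_AUDIT_<TYPE>_<FIELD>[_<INSTANCE>]`, split from the right with `rsplitn`. A standalone sketch of just that split follows; it assumes `'_'` is the value of `DEFAULT_DELIMITER`, as the `"_"` section key in the test at the end of this diff suggests.

```rust
// Standalone illustration of the rsplitn-based suffix parsing used above,
// without the validity filtering. Assumes '_' is DEFAULT_DELIMITER.
fn split_suffix(rest: &str) -> (String, String) {
    let mut parts = rest.rsplitn(2, '_');
    // The rightmost segment is the candidate instance ID.
    let instance_part = parts.next().unwrap_or("_");
    match parts.next() {
        // "ENDPOINT_PRIMARY" -> field "endpoint", instance "primary"
        Some(field) => (field.to_lowercase(), instance_part.to_lowercase()),
        // "ENABLE" -> field "enable", default instance key "_"
        None => (instance_part.to_lowercase(), "_".to_string()),
    }
}

fn main() {
    assert_eq!(split_suffix("ENDPOINT_PRIMARY"), ("endpoint".to_string(), "primary".to_string()));
    assert_eq!(split_suffix("ENABLE"), ("enable".to_string(), "_".to_string()));
    // A field that itself contains '_' is split as well ("auth" + "token"),
    // which is why the parsed field is checked against get_valid_fields()
    // before it is accepted.
    assert_eq!(split_suffix("AUTH_TOKEN"), ("auth".to_string(), "token".to_string()));
}
```

The parsed overrides are then merged with precedence environment variable > file instance entry > file default entry, as the merge loop above implements.
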
@@ -58,6 +58,12 @@ impl AuditSystem {
|
||||
}
|
||||
|
||||
/// Starts the audit system with the given configuration
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `config` - The configuration to use for starting the audit system
|
||||
///
|
||||
/// # Returns
|
||||
/// * `AuditResult<()>` - Result indicating success or failure
|
||||
pub async fn start(&self, config: Config) -> AuditResult<()> {
|
||||
let state = self.state.write().await;
|
||||
|
||||
@@ -87,7 +93,7 @@ impl AuditSystem {
|
||||
|
||||
// Create targets from configuration
|
||||
let mut registry = self.registry.lock().await;
|
||||
match registry.create_targets_from_config(&config).await {
|
||||
match registry.create_audit_targets_from_config(&config).await {
|
||||
Ok(targets) => {
|
||||
if targets.is_empty() {
|
||||
info!("No enabled audit targets found, keeping audit system stopped");
|
||||
@@ -143,6 +149,9 @@ impl AuditSystem {
|
||||
}
|
||||
|
||||
/// Pauses the audit system
|
||||
///
|
||||
/// # Returns
|
||||
/// * `AuditResult<()>` - Result indicating success or failure
|
||||
pub async fn pause(&self) -> AuditResult<()> {
|
||||
let mut state = self.state.write().await;
|
||||
|
||||
@@ -161,6 +170,9 @@ impl AuditSystem {
|
||||
}
|
||||
|
||||
/// Resumes the audit system
|
||||
///
|
||||
/// # Returns
|
||||
/// * `AuditResult<()>` - Result indicating success or failure
|
||||
pub async fn resume(&self) -> AuditResult<()> {
|
||||
let mut state = self.state.write().await;
|
||||
|
||||
@@ -179,6 +191,9 @@ impl AuditSystem {
|
||||
}
|
||||
|
||||
/// Stops the audit system and closes all targets
|
||||
///
|
||||
/// # Returns
|
||||
/// * `AuditResult<()>` - Result indicating success or failure
|
||||
pub async fn close(&self) -> AuditResult<()> {
|
||||
let mut state = self.state.write().await;
|
||||
|
||||
@@ -223,11 +238,20 @@ impl AuditSystem {
|
||||
}
|
||||
|
||||
/// Checks if the audit system is running
|
||||
///
|
||||
/// # Returns
|
||||
/// * `bool` - True if running, false otherwise
|
||||
pub async fn is_running(&self) -> bool {
|
||||
matches!(*self.state.read().await, AuditSystemState::Running)
|
||||
}
|
||||
|
||||
/// Dispatches an audit log entry to all active targets
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `entry` - The audit log entry to dispatch
|
||||
///
|
||||
/// # Returns
|
||||
/// * `AuditResult<()>` - Result indicating success or failure
|
||||
pub async fn dispatch(&self, entry: Arc<AuditEntry>) -> AuditResult<()> {
|
||||
let start_time = std::time::Instant::now();
|
||||
|
||||
@@ -319,6 +343,13 @@ impl AuditSystem {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Dispatches a batch of audit log entries to all active targets
|
||||
///
|
||||
/// # Arguments
|
||||
/// * `entries` - A vector of audit log entries to dispatch
|
||||
///
|
||||
/// # Returns
|
||||
/// * `AuditResult<()>` - Result indicating success or failure
|
||||
pub async fn dispatch_batch(&self, entries: Vec<Arc<AuditEntry>>) -> AuditResult<()> {
|
||||
let start_time = std::time::Instant::now();
|
||||
|
||||
@@ -386,7 +417,13 @@ impl AuditSystem {
|
||||
Ok(())
|
||||
}
|
||||
|
||||
// New: Audit flow background tasks, based on send_from_store, including retries and exponential backoffs
|
||||
/// Starts the audit stream processing for a target with batching and retry logic
|
||||
/// # Arguments
|
||||
/// * `store` - The store from which to read audit entries
|
||||
/// * `target` - The target to which audit entries will be sent
|
||||
///
|
||||
/// This function spawns a background task that continuously reads audit entries from the provided store
|
||||
/// and attempts to send them to the specified target. It implements retry logic with exponential backoff
|
||||
fn start_audit_stream_with_batching(
|
||||
&self,
|
||||
store: Box<dyn Store<EntityTarget<AuditEntry>, Error = StoreError, Key = Key> + Send>,
|
||||
@@ -462,6 +499,12 @@ impl AuditSystem {
    }

    /// Enables a specific target
    ///
    /// # Arguments
    /// * `target_id` - The ID of the target to enable
    ///
    /// # Returns
    /// * `AuditResult<()>` - Result indicating success or failure
    pub async fn enable_target(&self, target_id: &str) -> AuditResult<()> {
        // This would require storing enabled/disabled state per target
        // For now, just check if target exists

@@ -475,6 +518,12 @@ impl AuditSystem {
    }

    /// Disables a specific target
    ///
    /// # Arguments
    /// * `target_id` - The ID of the target to disable
    ///
    /// # Returns
    /// * `AuditResult<()>` - Result indicating success or failure
    pub async fn disable_target(&self, target_id: &str) -> AuditResult<()> {
        // This would require storing enabled/disabled state per target
        // For now, just check if target exists

@@ -488,6 +537,12 @@ impl AuditSystem {
    }

    /// Removes a target from the system
    ///
    /// # Arguments
    /// * `target_id` - The ID of the target to remove
    ///
    /// # Returns
    /// * `AuditResult<()>` - Result indicating success or failure
    pub async fn remove_target(&self, target_id: &str) -> AuditResult<()> {
        let mut registry = self.registry.lock().await;
        if let Some(target) = registry.remove_target(target_id) {

@@ -502,6 +557,13 @@ impl AuditSystem {
    }

    /// Updates or inserts a target
    ///
    /// # Arguments
    /// * `target_id` - The ID of the target to upsert
    /// * `target` - The target instance to insert or update
    ///
    /// # Returns
    /// * `AuditResult<()>` - Result indicating success or failure
    pub async fn upsert_target(&self, target_id: String, target: Box<dyn Target<AuditEntry> + Send + Sync>) -> AuditResult<()> {
        let mut registry = self.registry.lock().await;

@@ -523,18 +585,33 @@ impl AuditSystem {
    }

    /// Lists all targets
    ///
    /// # Returns
    /// * `Vec<String>` - List of target IDs
    pub async fn list_targets(&self) -> Vec<String> {
        let registry = self.registry.lock().await;
        registry.list_targets()
    }

    /// Gets information about a specific target
    ///
    /// # Arguments
    /// * `target_id` - The ID of the target to retrieve
    ///
    /// # Returns
    /// * `Option<String>` - Target ID if found
    pub async fn get_target(&self, target_id: &str) -> Option<String> {
        let registry = self.registry.lock().await;
        registry.get_target(target_id).map(|target| target.id().to_string())
    }

    /// Reloads configuration and updates targets
    ///
    /// # Arguments
    /// * `new_config` - The new configuration to load
    ///
    /// # Returns
    /// * `AuditResult<()>` - Result indicating success or failure
    pub async fn reload_config(&self, new_config: Config) -> AuditResult<()> {
        info!("Reloading audit system configuration");

@@ -554,7 +631,7 @@ impl AuditSystem {
        }

        // Create new targets from updated configuration
-       match registry.create_targets_from_config(&new_config).await {
+       match registry.create_audit_targets_from_config(&new_config).await {
            Ok(targets) => {
                info!(target_count = targets.len(), "Reloaded audit targets successfully");
@@ -594,16 +671,22 @@ impl AuditSystem {
    }

    /// Gets current audit system metrics
    ///
    /// # Returns
    /// * `AuditMetricsReport` - Current metrics report
    pub async fn get_metrics(&self) -> observability::AuditMetricsReport {
        observability::get_metrics_report().await
    }

    /// Validates system performance against requirements
    ///
    /// # Returns
    /// * `PerformanceValidation` - Performance validation results
    pub async fn validate_performance(&self) -> observability::PerformanceValidation {
        observability::validate_performance().await
    }

-   /// Resets all metrics
+   /// Resets all metrics to initial state
    pub async fn reset_metrics(&self) {
        observability::reset_metrics().await;
    }
@@ -43,11 +43,11 @@ async fn test_config_parsing_webhook() {
    audit_webhook_section.insert("_".to_string(), default_kvs);
    config.0.insert("audit_webhook".to_string(), audit_webhook_section);

-   let mut registry = AuditRegistry::new();
+   let registry = AuditRegistry::new();

    // This should not fail even if server storage is not initialized
    // as it's an integration test
-   let result = registry.create_targets_from_config(&config).await;
+   let result = registry.create_audit_targets_from_config(&config).await;

    // We expect this to fail due to server storage not being initialized
    // but the parsing should work correctly
@@ -44,7 +44,7 @@ async fn test_audit_system_startup_performance() {
#[tokio::test]
async fn test_concurrent_target_creation() {
    // Test that multiple targets can be created concurrently
-   let mut registry = AuditRegistry::new();
+   let registry = AuditRegistry::new();

    // Create config with multiple webhook instances
    let mut config = rustfs_ecstore::config::Config(std::collections::HashMap::new());

@@ -63,7 +63,7 @@ async fn test_concurrent_target_creation() {
    let start = Instant::now();

    // This will fail due to server storage not being initialized, but we can measure timing
-   let result = registry.create_targets_from_config(&config).await;
+   let result = registry.create_audit_targets_from_config(&config).await;
    let elapsed = start.elapsed();

    println!("Concurrent target creation took: {elapsed:?}");

@@ -135,7 +135,7 @@ async fn test_global_audit_functions() {

#[tokio::test]
async fn test_config_parsing_with_multiple_instances() {
-   let mut registry = AuditRegistry::new();
+   let registry = AuditRegistry::new();

    // Create config with multiple webhook instances
    let mut config = Config(HashMap::new());

@@ -164,7 +164,7 @@ async fn test_config_parsing_with_multiple_instances() {
    config.0.insert("audit_webhook".to_string(), webhook_section);

    // Try to create targets from config
-   let result = registry.create_targets_from_config(&config).await;
+   let result = registry.create_audit_targets_from_config(&config).await;

    // Should fail due to server storage not initialized, but parsing should work
    match result {
@@ -19,21 +19,26 @@ use std::sync::LazyLock;
use tokio::sync::RwLock;
use tonic::transport::Channel;

-pub static GLOBAL_Local_Node_Name: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
-pub static GLOBAL_Rustfs_Host: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
-pub static GLOBAL_Rustfs_Port: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("9000".to_string()));
-pub static GLOBAL_Rustfs_Addr: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
-pub static GLOBAL_Conn_Map: LazyLock<RwLock<HashMap<String, Channel>>> = LazyLock::new(|| RwLock::new(HashMap::new()));
+pub static GLOBAL_LOCAL_NODE_NAME: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
+pub static GLOBAL_RUSTFS_HOST: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
+pub static GLOBAL_RUSTFS_PORT: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("9000".to_string()));
+pub static GLOBAL_RUSTFS_ADDR: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
+pub static GLOBAL_CONN_MAP: LazyLock<RwLock<HashMap<String, Channel>>> = LazyLock::new(|| RwLock::new(HashMap::new()));
+pub static GLOBAL_ROOT_CERT: LazyLock<RwLock<Option<Vec<u8>>>> = LazyLock::new(|| RwLock::new(None));

pub async fn set_global_addr(addr: &str) {
-   *GLOBAL_Rustfs_Addr.write().await = addr.to_string();
+   *GLOBAL_RUSTFS_ADDR.write().await = addr.to_string();
}

+pub async fn set_global_root_cert(cert: Vec<u8>) {
+   *GLOBAL_ROOT_CERT.write().await = Some(cert);
+}

/// Evict a stale/dead connection from the global connection cache.
/// This is critical for cluster recovery when a node dies unexpectedly (e.g., power-off).
/// By removing the cached connection, subsequent requests will establish a fresh connection.
pub async fn evict_connection(addr: &str) {
-   let removed = GLOBAL_Conn_Map.write().await.remove(addr);
+   let removed = GLOBAL_CONN_MAP.write().await.remove(addr);
    if removed.is_some() {
        tracing::warn!("Evicted stale connection from cache: {}", addr);
    }

@@ -41,12 +46,12 @@ pub async fn evict_connection(addr: &str) {

/// Check if a connection exists in the cache for the given address.
pub async fn has_cached_connection(addr: &str) -> bool {
-   GLOBAL_Conn_Map.read().await.contains_key(addr)
+   GLOBAL_CONN_MAP.read().await.contains_key(addr)
}

/// Clear all cached connections. Useful for full cluster reset/recovery.
pub async fn clear_all_connections() {
-   let mut map = GLOBAL_Conn_Map.write().await;
+   let mut map = GLOBAL_CONN_MAP.write().await;
    let count = map.len();
    map.clear();
    if count > 0 {
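As a call-site sketch of the eviction pattern (the wrapper below is illustrative, not RustFS code; real call sites would inspect the tonic error to distinguish dead peers from application errors):

```rust
/// Hypothetical caller: evict the cached channel whenever an RPC against a
/// peer fails, so the next call redials it. A healthy peer is simply redialed.
async fn call_with_eviction<T, E>(
    addr: &str,
    rpc: impl std::future::Future<Output = Result<T, E>>,
) -> Result<T, E> {
    match rpc.await {
        Ok(v) => Ok(v),
        Err(e) => {
            // Pessimistically drop the cached connection from GLOBAL_CONN_MAP.
            evict_connection(addr).await;
            Err(e)
        }
    }
}
```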
@@ -29,7 +29,7 @@ pub const AUDIT_PREFIX: &str = "audit";
pub const AUDIT_ROUTE_PREFIX: &str = const_str::concat!(AUDIT_PREFIX, DEFAULT_DELIMITER);

pub const AUDIT_WEBHOOK_SUB_SYS: &str = "audit_webhook";
-pub const AUDIT_MQTT_SUB_SYS: &str = "mqtt_webhook";
+pub const AUDIT_MQTT_SUB_SYS: &str = "audit_mqtt";

pub const AUDIT_STORE_EXTENSION: &str = ".audit";
#[allow(dead_code)]
@@ -89,6 +89,30 @@ pub const RUSTFS_TLS_KEY: &str = "rustfs_key.pem";
/// This is the default cert for TLS.
pub const RUSTFS_TLS_CERT: &str = "rustfs_cert.pem";

/// Default public certificate filename for rustfs
/// This is the default public certificate filename for rustfs.
/// It is used to store the public certificate of the application.
/// Default value: public.crt
pub const RUSTFS_PUBLIC_CERT: &str = "public.crt";

/// Default CA certificate filename for rustfs
/// This is the default CA certificate filename for rustfs.
/// It is used to store the CA certificate of the application.
/// Default value: ca.crt
pub const RUSTFS_CA_CERT: &str = "ca.crt";

/// Default HTTP prefix for rustfs
/// This is the default HTTP prefix for rustfs.
/// It is used to identify HTTP URLs.
/// Default value: http://
pub const RUSTFS_HTTP_PREFIX: &str = "http://";

/// Default HTTPS prefix for rustfs
/// This is the default HTTPS prefix for rustfs.
/// It is used to identify HTTPS URLs.
/// Default value: https://
pub const RUSTFS_HTTPS_PREFIX: &str = "https://";

/// Default port for rustfs
/// This is the default port for rustfs.
/// This is used to bind the server to a specific port.
crates/config/src/constants/body_limits.rs (new file, 56 lines)
@@ -0,0 +1,56 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//! Request body size limits for admin API endpoints
//!
//! These limits prevent DoS attacks through unbounded memory allocation
//! while allowing legitimate use cases.

/// Maximum size for standard admin API request bodies (1 MB)
/// Used for: user creation/update, policies, tier config, KMS config, events, groups, service accounts
/// Rationale: Admin API payloads are typically JSON/XML configs under 100KB.
/// AWS IAM policy limit is 6KB-10KB. 1MB provides generous headroom.
pub const MAX_ADMIN_REQUEST_BODY_SIZE: usize = 1024 * 1024; // 1 MB

/// Maximum size for IAM import/export operations (10 MB)
/// Used for: IAM entity imports/exports containing multiple users, policies, groups
/// Rationale: ZIP archives with hundreds of IAM entities. 10MB allows ~10,000 small configs.
pub const MAX_IAM_IMPORT_SIZE: usize = 10 * 1024 * 1024; // 10 MB

/// Maximum size for bucket metadata import operations (100 MB)
/// Used for: Bucket metadata import containing configurations for many buckets
/// Rationale: Large deployments may have thousands of buckets with various configs.
/// 100MB allows importing metadata for ~10,000 buckets with reasonable configs.
pub const MAX_BUCKET_METADATA_IMPORT_SIZE: usize = 100 * 1024 * 1024; // 100 MB

/// Maximum size for healing operation requests (1 MB)
/// Used for: Healing parameters and configuration
/// Rationale: Healing requests contain bucket/object paths and options. Should be small.
pub const MAX_HEAL_REQUEST_SIZE: usize = 1024 * 1024; // 1 MB

/// Maximum size for S3 client response bodies (10 MB)
/// Used for: Reading responses from remote S3-compatible services (ACL, attributes, lists)
///
/// Rationale: Responses from external S3-compatible services should be bounded.
/// - ACL XML responses: typically < 10KB
/// - Object attributes: typically < 100KB
/// - List responses: typically < 1MB (1000 objects with metadata)
/// - Location/error responses: typically < 10KB
///
/// 10MB provides generous headroom for legitimate responses while preventing
/// memory exhaustion from malicious or misconfigured remote services.
pub const MAX_S3_CLIENT_RESPONSE_SIZE: usize = 10 * 1024 * 1024; // 10 MB
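To make the intended enforcement concrete, here is a minimal sketch of a pre-read length check (the helper name and error type are assumptions, not RustFS's actual handler code, which must also cap the bytes actually read when no length is declared):

```rust
/// Illustrative sketch: reject a request up front when its declared
/// Content-Length exceeds the configured limit.
fn check_body_limit(declared_len: Option<usize>, limit: usize) -> Result<(), String> {
    match declared_len {
        Some(len) if len > limit => Err(format!("request body of {len} bytes exceeds limit of {limit} bytes")),
        // No declared length: the caller must still cap the bytes it buffers.
        _ => Ok(()),
    }
}

fn main() {
    const MAX_ADMIN_REQUEST_BODY_SIZE: usize = 1024 * 1024; // mirrors the constant above
    assert!(check_body_limit(Some(512), MAX_ADMIN_REQUEST_BODY_SIZE).is_ok());
    assert!(check_body_limit(Some(2 * 1024 * 1024), MAX_ADMIN_REQUEST_BODY_SIZE).is_err());
}
```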
crates/config/src/constants/compress.rs (new file, 61 lines)
@@ -0,0 +1,61 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//! HTTP Response Compression Configuration
//!
//! This module provides configuration options for HTTP response compression.
//! By default, compression is disabled (aligned with MinIO behavior).
//! When enabled via `RUSTFS_COMPRESS_ENABLE=on`, compression can be configured
//! to apply only to specific file extensions, MIME types, and minimum file sizes.

/// Environment variable to enable/disable HTTP response compression
/// Default: off (disabled)
/// Values: on, off, true, false, yes, no, 1, 0
/// Example: RUSTFS_COMPRESS_ENABLE=on
pub const ENV_COMPRESS_ENABLE: &str = "RUSTFS_COMPRESS_ENABLE";

/// Default compression enable state
/// Aligned with MinIO behavior - compression is disabled by default
pub const DEFAULT_COMPRESS_ENABLE: bool = false;

/// Environment variable for file extensions that should be compressed
/// Comma-separated list of file extensions (with or without leading dot)
/// Default: "" (empty, meaning use MIME type matching only)
/// Example: RUSTFS_COMPRESS_EXTENSIONS=.txt,.log,.csv,.json,.xml,.html,.css,.js
pub const ENV_COMPRESS_EXTENSIONS: &str = "RUSTFS_COMPRESS_EXTENSIONS";

/// Default file extensions for compression
/// Empty by default - relies on MIME type matching
pub const DEFAULT_COMPRESS_EXTENSIONS: &str = "";

/// Environment variable for MIME types that should be compressed
/// Comma-separated list of MIME types, supports wildcard (*) for subtypes
/// Default: "text/*,application/json,application/xml,application/javascript"
/// Example: RUSTFS_COMPRESS_MIME_TYPES=text/*,application/json,application/xml
pub const ENV_COMPRESS_MIME_TYPES: &str = "RUSTFS_COMPRESS_MIME_TYPES";

/// Default MIME types for compression
/// Includes common text-based content types that benefit from compression
pub const DEFAULT_COMPRESS_MIME_TYPES: &str = "text/*,application/json,application/xml,application/javascript";

/// Environment variable for minimum file size to apply compression
/// Files smaller than this size will not be compressed
/// Default: 1000 (bytes)
/// Example: RUSTFS_COMPRESS_MIN_SIZE=1000
pub const ENV_COMPRESS_MIN_SIZE: &str = "RUSTFS_COMPRESS_MIN_SIZE";

/// Default minimum file size for compression (in bytes)
/// Files smaller than 1000 bytes typically don't benefit from compression
/// and the compression overhead may outweigh the benefits
pub const DEFAULT_COMPRESS_MIN_SIZE: u64 = 1000;
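A rough sketch of how these settings could drive the compression decision (the `should_compress` helper is illustrative, not the crate's actual API):

```rust
/// Illustrative gate combining the minimum-size and MIME-type settings above;
/// not the actual RustFS implementation.
fn should_compress(mime: &str, body_len: u64, mime_patterns: &[&str], min_size: u64) -> bool {
    if body_len < min_size {
        return false; // below RUSTFS_COMPRESS_MIN_SIZE: overhead outweighs benefit
    }
    mime_patterns.iter().any(|pat| match pat.strip_suffix("/*") {
        // Wildcard pattern such as "text/*" matches any subtype.
        Some(prefix) => mime.strip_prefix(prefix).is_some_and(|rest| rest.starts_with('/')),
        None => *pat == mime,
    })
}

fn main() {
    let patterns = ["text/*", "application/json"];
    assert!(should_compress("text/html", 2048, &patterns, 1000));
    assert!(!should_compress("image/png", 2048, &patterns, 1000));
    assert!(!should_compress("text/html", 100, &patterns, 1000)); // too small
}
```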
@@ -16,7 +16,8 @@ pub const DEFAULT_DELIMITER: &str = "_";
pub const ENV_PREFIX: &str = "RUSTFS_";
pub const ENV_WORD_DELIMITER: &str = "_";

-pub const DEFAULT_DIR: &str = "/opt/rustfs/events"; // Default directory for event store
+pub const EVENT_DEFAULT_DIR: &str = "/opt/rustfs/events"; // Default directory for event store
+pub const AUDIT_DEFAULT_DIR: &str = "/opt/rustfs/audit"; // Default directory for audit store
pub const DEFAULT_LIMIT: u64 = 100000; // Default store limit

/// Standard config keys and values.
@@ -13,6 +13,8 @@
// limitations under the License.

pub(crate) mod app;
+pub(crate) mod body_limits;
+pub(crate) mod compress;
pub(crate) mod console;
pub(crate) mod env;
pub(crate) mod heal;
@@ -12,4 +12,26 @@
// See the License for the specific language governing permissions and
// limitations under the License.

/// TLS related environment variable names and default values
///
/// Environment variable to enable TLS key logging
/// When set to "1", RustFS will log TLS keys to the specified file for debugging purposes.
/// By default, this is disabled.
/// To enable, set the environment variable RUSTFS_TLS_KEYLOG=1
pub const ENV_TLS_KEYLOG: &str = "RUSTFS_TLS_KEYLOG";

/// Default value for TLS key logging
/// By default, RustFS does not log TLS keys.
/// To change this behavior, set the environment variable RUSTFS_TLS_KEYLOG=1
pub const DEFAULT_TLS_KEYLOG: bool = false;

/// Environment variable to trust system CA certificates
/// When set to "1", RustFS will trust system CA certificates in addition to any
/// custom CA certificates provided in the configuration.
/// By default, this is disabled.
/// To enable, set the environment variable RUSTFS_TRUST_SYSTEM_CA=1
pub const ENV_TRUST_SYSTEM_CA: &str = "RUSTFS_TRUST_SYSTEM_CA";

/// Default value for trusting system CA certificates
/// By default, RustFS does not trust system CA certificates.
/// To change this behavior, set the environment variable RUSTFS_TRUST_SYSTEM_CA=1
pub const DEFAULT_TRUST_SYSTEM_CA: bool = false;
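Both toggles share a shape; a hedged sketch of reading them (illustrative helper, not the crate's parser; the docs above only specify "1" as the enabling value, so the extra accepted spellings here are an assumption):

```rust
use std::env;

/// Illustrative boolean-flag reader for toggles like RUSTFS_TLS_KEYLOG and
/// RUSTFS_TRUST_SYSTEM_CA; not the actual RustFS parsing code.
fn env_flag(name: &str, default: bool) -> bool {
    match env::var(name) {
        Ok(v) => matches!(v.trim().to_ascii_lowercase().as_str(), "1" | "true" | "on" | "yes"),
        Err(_) => default,
    }
}

fn main() {
    // With RUSTFS_TLS_KEYLOG unset, the documented default (disabled) applies.
    let keylog = env_flag("RUSTFS_TLS_KEYLOG", false);
    println!("TLS key logging enabled: {keylog}");
}
```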
@@ -17,6 +17,10 @@ pub mod constants;
#[cfg(feature = "constants")]
pub use constants::app::*;
+#[cfg(feature = "constants")]
+pub use constants::body_limits::*;
+#[cfg(feature = "constants")]
+pub use constants::compress::*;
#[cfg(feature = "constants")]
pub use constants::console::*;
#[cfg(feature = "constants")]
pub use constants::env::*;
@@ -24,13 +24,33 @@ pub use webhook::*;

use crate::DEFAULT_DELIMITER;

// --- Configuration Constants ---
/// Default target identifier for notifications.
/// Used by the notification system when no specific target is provided;
/// it represents the default target stream or endpoint.
pub const DEFAULT_TARGET: &str = "1";

/// Notification prefix for routing and identification.
/// Utilized in constructing routes and identifiers related to notifications within the system.
pub const NOTIFY_PREFIX: &str = "notify";

/// Notification route prefix: the notification prefix joined with the default delimiter.
/// Used by the notification system for defining notification-related routes.
/// Example: "notify_"
pub const NOTIFY_ROUTE_PREFIX: &str = const_str::concat!(NOTIFY_PREFIX, DEFAULT_DELIMITER);

/// Name of the environment variable that configures target stream concurrency.
/// Controls how many target streams are processed in parallel by the notification system.
/// Defaults to [`DEFAULT_NOTIFY_TARGET_STREAM_CONCURRENCY`] if not set.
/// Example: `RUSTFS_NOTIFY_TARGET_STREAM_CONCURRENCY=20`.
pub const ENV_NOTIFY_TARGET_STREAM_CONCURRENCY: &str = "RUSTFS_NOTIFY_TARGET_STREAM_CONCURRENCY";

/// Default concurrency for target stream processing in the notification system.
/// This value is used if the environment variable `RUSTFS_NOTIFY_TARGET_STREAM_CONCURRENCY` is not set.
/// It defines how many target streams can be processed in parallel by the notification system at any given time.
/// Adjust this value based on your system's capabilities and expected load.
pub const DEFAULT_NOTIFY_TARGET_STREAM_CONCURRENCY: usize = 20;
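A minimal sketch of resolving that setting (illustrative helper, not the notification system's actual configuration loader):

```rust
/// Illustrative resolution of the stream-concurrency setting, falling back to
/// the documented default when the variable is unset or unparsable.
fn target_stream_concurrency() -> usize {
    std::env::var("RUSTFS_NOTIFY_TARGET_STREAM_CONCURRENCY")
        .ok()
        .and_then(|v| v.parse::<usize>().ok())
        .unwrap_or(20) // DEFAULT_NOTIFY_TARGET_STREAM_CONCURRENCY
}

fn main() {
    println!("target stream concurrency: {}", target_stream_concurrency());
}
```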
#[allow(dead_code)]
pub const NOTIFY_SUB_SYSTEMS: &[&str] = &[NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS];
@@ -15,5 +15,5 @@
pub const DEFAULT_EXT: &str = ".unknown"; // Default file extension
pub const COMPRESS_EXT: &str = ".snappy"; // Extension for compressed files

-/// STORE_EXTENSION - file extension of an event file in store
-pub const STORE_EXTENSION: &str = ".event";
+/// NOTIFY_STORE_EXTENSION - file extension of an event file in store
+pub const NOTIFY_STORE_EXTENSION: &str = ".event";
@@ -30,7 +30,7 @@ workspace = true

[dependencies]
aes-gcm = { workspace = true, optional = true }
-argon2 = { workspace = true, features = ["std"], optional = true }
+argon2 = { workspace = true, optional = true }
cfg-if = { workspace = true }
chacha20poly1305 = { workspace = true, optional = true }
jsonwebtoken = { workspace = true }
@@ -327,7 +327,8 @@ pub async fn execute_awscurl(

    if !output.status.success() {
        let stderr = String::from_utf8_lossy(&output.stderr);
-       return Err(format!("awscurl failed: {stderr}").into());
+       let stdout = String::from_utf8_lossy(&output.stdout);
+       return Err(format!("awscurl failed: stderr='{stderr}', stdout='{stdout}'").into());
    }

    let response = String::from_utf8_lossy(&output.stdout).to_string();

@@ -352,3 +353,13 @@ pub async fn awscurl_get(
) -> Result<String, Box<dyn std::error::Error + Send + Sync>> {
    execute_awscurl(url, "GET", None, access_key, secret_key).await
}

+/// Helper function for PUT requests
+pub async fn awscurl_put(
+    url: &str,
+    body: &str,
+    access_key: &str,
+    secret_key: &str,
+) -> Result<String, Box<dyn std::error::Error + Send + Sync>> {
+    execute_awscurl(url, "PUT", Some(body), access_key, secret_key).await
+}
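For context, a hedged usage sketch of the new helper against the admin API (the endpoint path mirrors the policy tests below; the base URL and credentials are placeholders for a local test deployment, not defaults this diff defines):

```rust
/// Illustrative caller (not part of the test suite): create a canned policy
/// via the admin API using awscurl_put.
async fn add_policy_example(
    base_url: &str,
    policy_json: &str,
) -> Result<String, Box<dyn std::error::Error + Send + Sync>> {
    let url = format!("{base_url}/rustfs/admin/v3/add-canned-policy?name=example-policy");
    awscurl_put(&url, policy_json, "rustfsadmin", "rustfsadmin").await
}
```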
@@ -33,3 +33,7 @@ mod special_chars_test;
// Content-Encoding header preservation test
#[cfg(test)]
mod content_encoding_test;

+// Policy variables tests
+#[cfg(test)]
+mod policy;
crates/e2e_test/src/policy/README.md (new file, 39 lines)
@@ -0,0 +1,39 @@
# RustFS Policy Variables Tests

This directory contains comprehensive end-to-end tests for AWS IAM policy variables in RustFS.

## Test Overview

The tests cover the following AWS policy variable scenarios:

1. **Single-value variables** - Basic variable resolution like `${aws:username}` (a sample policy is shown below)
2. **Multi-value variables** - Variables that can have multiple values
3. **Variable concatenation** - Combining variables with static text like `prefix-${aws:username}-suffix`
4. **Nested variables** - Complex nested variable patterns like `${${aws:username}-test}`
5. **Deny scenarios** - Testing deny policies with variables
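For reference, a minimal single-value policy of the kind these tests install looks like this (illustrative; the tests build equivalent documents programmatically):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:CreateBucket", "s3:ListBucket"],
      "Resource": ["arn:aws:s3:::${aws:username}-*"]
    }
  ]
}
```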
## Prerequisites

- RustFS server binary
- `awscurl` utility for admin API calls
- AWS SDK for Rust (included in the project)

## Running Tests

### Run All Policy Tests Using the Unified Test Runner

```bash
# Run all policy tests with comprehensive reporting
# Note: Requires a RustFS server running on localhost:9000
cargo test -p e2e_test policy::test_runner::test_policy_full_suite -- --nocapture --ignored --test-threads=1

# Run only critical policy tests
cargo test -p e2e_test policy::test_runner::test_policy_critical_suite -- --nocapture --ignored --test-threads=1
```

### Run All Policy Tests Directly

```bash
# From the project root directory
cargo test -p e2e_test policy:: -- --nocapture --ignored --test-threads=1
```
crates/e2e_test/src/policy/mod.rs (new file, 22 lines)
@@ -0,0 +1,22 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//! Policy-specific tests for RustFS
//!
//! This module provides comprehensive tests for AWS IAM policy variables
//! including single-value, multi-value, and nested variable scenarios.

mod policy_variables_test;
mod test_env;
mod test_runner;
crates/e2e_test/src/policy/policy_variables_test.rs (new file, 798 lines)
@@ -0,0 +1,798 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//! Tests for AWS IAM policy variables with single-value, multi-value, and nested scenarios

use crate::common::{awscurl_put, init_logging};
use crate::policy::test_env::PolicyTestEnvironment;
use aws_sdk_s3::primitives::ByteStream;
use serial_test::serial;
use tracing::info;

/// Helper function to create a regular user with given credentials
async fn create_user(
    env: &PolicyTestEnvironment,
    username: &str,
    password: &str,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let create_user_body = serde_json::json!({
        "secretKey": password,
        "status": "enabled"
    })
    .to_string();

    let create_user_url = format!("{}/rustfs/admin/v3/add-user?accessKey={}", env.url, username);
    awscurl_put(&create_user_url, &create_user_body, &env.access_key, &env.secret_key).await?;
    Ok(())
}

/// Helper function to create an STS user with given credentials
async fn create_sts_user(
    env: &PolicyTestEnvironment,
    username: &str,
    password: &str,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // For STS, we create a regular user first, then use it to assume roles
    create_user(env, username, password).await?;
    Ok(())
}

/// Helper function to create and attach a policy
async fn create_and_attach_policy(
    env: &PolicyTestEnvironment,
    policy_name: &str,
    username: &str,
    policy_document: serde_json::Value,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let policy_string = policy_document.to_string();

    // Create policy
    let add_policy_url = format!("{}/rustfs/admin/v3/add-canned-policy?name={}", env.url, policy_name);
    awscurl_put(&add_policy_url, &policy_string, &env.access_key, &env.secret_key).await?;

    // Attach policy to user
    let attach_policy_url = format!(
        "{}/rustfs/admin/v3/set-user-or-group-policy?policyName={}&userOrGroup={}&isGroup=false",
        env.url, policy_name, username
    );
    awscurl_put(&attach_policy_url, "", &env.access_key, &env.secret_key).await?;
    Ok(())
}

/// Helper function to clean up test resources
async fn cleanup_user_and_policy(env: &PolicyTestEnvironment, username: &str, policy_name: &str) {
    // Create admin client for cleanup
    let admin_client = env.create_s3_client(&env.access_key, &env.secret_key);

    // Delete buckets that might have been created by this user
    let bucket_patterns = [
        format!("{username}-test-bucket"),
        format!("{username}-bucket1"),
        format!("{username}-bucket2"),
        format!("{username}-bucket3"),
        format!("prefix-{username}-suffix"),
        format!("{username}-test"),
        format!("{username}-sts-bucket"),
        format!("{username}-service-bucket"),
        "private-test-bucket".to_string(), // For deny test
    ];

    // Try to delete objects and buckets
    for bucket_name in &bucket_patterns {
        let _ = admin_client
            .delete_object()
            .bucket(bucket_name)
            .key("test-object.txt")
            .send()
            .await;
        let _ = admin_client
            .delete_object()
            .bucket(bucket_name)
            .key("test-sts-object.txt")
            .send()
            .await;
        let _ = admin_client
            .delete_object()
            .bucket(bucket_name)
            .key("test-service-object.txt")
            .send()
            .await;
        let _ = admin_client.delete_bucket().bucket(bucket_name).send().await;
    }

    // Remove user
    let remove_user_url = format!("{}/rustfs/admin/v3/remove-user?accessKey={}", env.url, username);
    let _ = awscurl_put(&remove_user_url, "", &env.access_key, &env.secret_key).await;

    // Remove policy
    let remove_policy_url = format!("{}/rustfs/admin/v3/remove-canned-policy?name={}", env.url, policy_name);
    let _ = awscurl_put(&remove_policy_url, "", &env.access_key, &env.secret_key).await;
}
/// Test AWS policy variables with single-value scenarios
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_single_value() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    test_aws_policy_variables_single_value_impl().await
}

/// Implementation function for single-value policy variables test
pub async fn test_aws_policy_variables_single_value_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    init_logging();
    info!("Starting AWS policy variables single-value test");

    let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;

    test_aws_policy_variables_single_value_impl_with_env(&env).await
}

/// Implementation function for single-value policy variables test with shared environment
pub async fn test_aws_policy_variables_single_value_impl_with_env(
    env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Create test user
    let test_user = "testuser1";
    let test_password = "testpassword123";
    let policy_name = "test-single-value-policy";

    // Create cleanup function
    let cleanup = || async {
        cleanup_user_and_policy(env, test_user, policy_name).await;
    };

    let create_user_body = serde_json::json!({
        "secretKey": test_password,
        "status": "enabled"
    })
    .to_string();

    let create_user_url = format!("{}/rustfs/admin/v3/add-user?accessKey={}", env.url, test_user);
    awscurl_put(&create_user_url, &create_user_body, &env.access_key, &env.secret_key).await?;

    // Create policy with single-value AWS variables
    let policy_document = serde_json::json!({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListAllMyBuckets"],
                "Resource": ["arn:aws:s3:::*"]
            },
            {
                "Effect": "Allow",
                "Action": ["s3:CreateBucket"],
                "Resource": [format!("arn:aws:s3:::{}-*", "${aws:username}")]
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [format!("arn:aws:s3:::{}-*", "${aws:username}")]
            },
            {
                "Effect": "Allow",
                "Action": ["s3:PutObject", "s3:GetObject"],
                "Resource": [format!("arn:aws:s3:::{}-*/*", "${aws:username}")]
            }
        ]
    })
    .to_string();

    let add_policy_url = format!("{}/rustfs/admin/v3/add-canned-policy?name={}", env.url, policy_name);
    awscurl_put(&add_policy_url, &policy_document, &env.access_key, &env.secret_key).await?;

    // Attach policy to user
    let attach_policy_url = format!(
        "{}/rustfs/admin/v3/set-user-or-group-policy?policyName={}&userOrGroup={}&isGroup=false",
        env.url, policy_name, test_user
    );
    awscurl_put(&attach_policy_url, "", &env.access_key, &env.secret_key).await?;

    // Create S3 client for test user
    let test_client = env.create_s3_client(test_user, test_password);

    tokio::time::sleep(std::time::Duration::from_millis(500)).await;

    // Test 1: User should be able to list buckets (allowed by policy)
    info!("Test 1: User listing buckets");
    let list_result = test_client.list_buckets().send().await;
    if let Err(e) = list_result {
        cleanup().await;
        return Err(format!("User should be able to list buckets: {e}").into());
    }

    // Test 2: User should be able to create bucket matching username pattern
    info!("Test 2: User creating bucket matching pattern");
    let bucket_name = format!("{test_user}-test-bucket");
    let create_result = test_client.create_bucket().bucket(&bucket_name).send().await;
    if let Err(e) = create_result {
        cleanup().await;
        return Err(format!("User should be able to create bucket matching username pattern: {e}").into());
    }

    // Test 3: User should be able to list objects in their own bucket
    info!("Test 3: User listing objects in their bucket");
    let list_objects_result = test_client.list_objects_v2().bucket(&bucket_name).send().await;
    if let Err(e) = list_objects_result {
        cleanup().await;
        return Err(format!("User should be able to list objects in their own bucket: {e}").into());
    }

    // Test 4: User should be able to put object in their own bucket
    info!("Test 4: User putting object in their bucket");
    let put_result = test_client
        .put_object()
        .bucket(&bucket_name)
        .key("test-object.txt")
        .body(ByteStream::from_static(b"Hello, Policy Variables!"))
        .send()
        .await;
    if let Err(e) = put_result {
        cleanup().await;
        return Err(format!("User should be able to put object in their own bucket: {e}").into());
    }

    // Test 5: User should be able to get object from their own bucket
    info!("Test 5: User getting object from their bucket");
    let get_result = test_client
        .get_object()
        .bucket(&bucket_name)
        .key("test-object.txt")
        .send()
        .await;
    if let Err(e) = get_result {
        cleanup().await;
        return Err(format!("User should be able to get object from their own bucket: {e}").into());
    }

    // Test 6: User should NOT be able to create bucket NOT matching username pattern
    info!("Test 6: User attempting to create bucket NOT matching pattern");
    let other_bucket_name = "other-user-bucket";
    let create_other_result = test_client.create_bucket().bucket(other_bucket_name).send().await;
    if create_other_result.is_ok() {
        cleanup().await;
        return Err("User should NOT be able to create bucket NOT matching username pattern".into());
    }

    // Cleanup
    info!("Cleaning up test resources");
    cleanup().await;

    info!("AWS policy variables single-value test completed successfully");
    Ok(())
}
/// Test AWS policy variables with multi-value scenarios
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_multi_value() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    test_aws_policy_variables_multi_value_impl().await
}

/// Implementation function for multi-value policy variables test
pub async fn test_aws_policy_variables_multi_value_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    init_logging();
    info!("Starting AWS policy variables multi-value test");

    let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;

    test_aws_policy_variables_multi_value_impl_with_env(&env).await
}

/// Implementation function for multi-value policy variables test with shared environment
pub async fn test_aws_policy_variables_multi_value_impl_with_env(
    env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Create test user
    let test_user = "testuser2";
    let test_password = "testpassword123";
    let policy_name = "test-multi-value-policy";

    // Create cleanup function
    let cleanup = || async {
        cleanup_user_and_policy(env, test_user, policy_name).await;
    };

    // Create user
    create_user(env, test_user, test_password).await?;

    // Create policy with multi-value AWS variables
    let policy_document = serde_json::json!({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListAllMyBuckets"],
                "Resource": ["arn:aws:s3:::*"]
            },
            {
                "Effect": "Allow",
                "Action": ["s3:CreateBucket"],
                "Resource": [
                    format!("arn:aws:s3:::{}-bucket1", "${aws:username}"),
                    format!("arn:aws:s3:::{}-bucket2", "${aws:username}"),
                    format!("arn:aws:s3:::{}-bucket3", "${aws:username}")
                ]
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [
                    format!("arn:aws:s3:::{}-bucket1", "${aws:username}"),
                    format!("arn:aws:s3:::{}-bucket2", "${aws:username}"),
                    format!("arn:aws:s3:::{}-bucket3", "${aws:username}")
                ]
            }
        ]
    });

    create_and_attach_policy(env, policy_name, test_user, policy_document).await?;

    // Create S3 client for test user
    let test_client = env.create_s3_client(test_user, test_password);

    // Test 1: User should be able to create buckets matching any of the multi-value patterns
    info!("Test 1: User creating first bucket matching multi-value pattern");
    let bucket1_name = format!("{test_user}-bucket1");
    let create_result1 = test_client.create_bucket().bucket(&bucket1_name).send().await;
    if let Err(e) = create_result1 {
        cleanup().await;
        return Err(format!("User should be able to create first bucket matching multi-value pattern: {e}").into());
    }

    info!("Test 2: User creating second bucket matching multi-value pattern");
    let bucket2_name = format!("{test_user}-bucket2");
    let create_result2 = test_client.create_bucket().bucket(&bucket2_name).send().await;
    if let Err(e) = create_result2 {
        cleanup().await;
        return Err(format!("User should be able to create second bucket matching multi-value pattern: {e}").into());
    }

    info!("Test 3: User creating third bucket matching multi-value pattern");
    let bucket3_name = format!("{test_user}-bucket3");
    let create_result3 = test_client.create_bucket().bucket(&bucket3_name).send().await;
    if let Err(e) = create_result3 {
        cleanup().await;
        return Err(format!("User should be able to create third bucket matching multi-value pattern: {e}").into());
    }

    // Test 4: User should NOT be able to create bucket NOT matching any multi-value pattern
    info!("Test 4: User attempting to create bucket NOT matching any pattern");
    let other_bucket_name = format!("{test_user}-other-bucket");
    let create_other_result = test_client.create_bucket().bucket(&other_bucket_name).send().await;
    if create_other_result.is_ok() {
        cleanup().await;
        return Err("User should NOT be able to create bucket NOT matching any multi-value pattern".into());
    }

    // Test 5: User should be able to list objects in their allowed buckets
    info!("Test 5: User listing objects in allowed buckets");
    let list_objects_result1 = test_client.list_objects_v2().bucket(&bucket1_name).send().await;
    if let Err(e) = list_objects_result1 {
        cleanup().await;
        return Err(format!("User should be able to list objects in first allowed bucket: {e}").into());
    }

    let list_objects_result2 = test_client.list_objects_v2().bucket(&bucket2_name).send().await;
    if let Err(e) = list_objects_result2 {
        cleanup().await;
        return Err(format!("User should be able to list objects in second allowed bucket: {e}").into());
    }

    // Cleanup
    info!("Cleaning up test resources");
    cleanup().await;

    info!("AWS policy variables multi-value test completed successfully");
    Ok(())
}
/// Test AWS policy variables with variable concatenation
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_concatenation() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    test_aws_policy_variables_concatenation_impl().await
}

/// Implementation function for concatenation policy variables test
pub async fn test_aws_policy_variables_concatenation_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    init_logging();
    info!("Starting AWS policy variables concatenation test");

    let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;

    test_aws_policy_variables_concatenation_impl_with_env(&env).await
}

/// Implementation function for concatenation policy variables test with shared environment
pub async fn test_aws_policy_variables_concatenation_impl_with_env(
    env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Create test user
    let test_user = "testuser3";
    let test_password = "testpassword123";
    let policy_name = "test-concatenation-policy";

    // Create cleanup function
    let cleanup = || async {
        cleanup_user_and_policy(env, test_user, policy_name).await;
    };

    // Create user
    create_user(env, test_user, test_password).await?;

    // Create policy with variable concatenation
    let policy_document = serde_json::json!({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListAllMyBuckets"],
                "Resource": ["arn:aws:s3:::*"]
            },
            {
                "Effect": "Allow",
                "Action": ["s3:CreateBucket"],
                "Resource": [format!("arn:aws:s3:::prefix-{}-suffix", "${aws:username}")]
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": [format!("arn:aws:s3:::prefix-{}-suffix", "${aws:username}")]
            }
        ]
    });

    create_and_attach_policy(env, policy_name, test_user, policy_document).await?;

    // Create S3 client for test user
    let test_client = env.create_s3_client(test_user, test_password);

    // Add a small delay to allow policy to propagate
    tokio::time::sleep(std::time::Duration::from_millis(500)).await;

    // Test: User should be able to create bucket matching concatenated pattern
    info!("Test: User creating bucket matching concatenated pattern");
    let bucket_name = format!("prefix-{test_user}-suffix");
    let create_result = test_client.create_bucket().bucket(&bucket_name).send().await;
    if let Err(e) = create_result {
        cleanup().await;
        return Err(format!("User should be able to create bucket matching concatenated pattern: {e}").into());
    }

    // Test: User should be able to list objects in the concatenated pattern bucket
    info!("Test: User listing objects in concatenated pattern bucket");
    let list_objects_result = test_client.list_objects_v2().bucket(&bucket_name).send().await;
    if let Err(e) = list_objects_result {
        cleanup().await;
        return Err(format!("User should be able to list objects in concatenated pattern bucket: {e}").into());
    }

    // Cleanup
    info!("Cleaning up test resources");
    cleanup().await;

    info!("AWS policy variables concatenation test completed successfully");
    Ok(())
}
/// Test AWS policy variables with nested scenarios
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_nested() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    test_aws_policy_variables_nested_impl().await
}

/// Implementation function for nested policy variables test
pub async fn test_aws_policy_variables_nested_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    init_logging();
    info!("Starting AWS policy variables nested test");

    let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;

    test_aws_policy_variables_nested_impl_with_env(&env).await
}

/// Test AWS policy variables with STS temporary credentials
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_sts() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    test_aws_policy_variables_sts_impl().await
}

/// Implementation function for STS policy variables test
pub async fn test_aws_policy_variables_sts_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    init_logging();
    info!("Starting AWS policy variables STS test");

    let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;

    test_aws_policy_variables_sts_impl_with_env(&env).await
}

/// Implementation function for nested policy variables test with shared environment
pub async fn test_aws_policy_variables_nested_impl_with_env(
    env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Create test user
    let test_user = "testuser4";
    let test_password = "testpassword123";
    let policy_name = "test-nested-policy";

    // Create cleanup function
    let cleanup = || async {
        cleanup_user_and_policy(env, test_user, policy_name).await;
    };

    // Create user
    create_user(env, test_user, test_password).await?;

    // Create policy with nested variables - this tests complex variable resolution
    let policy_document = serde_json::json!({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListAllMyBuckets"],
                "Resource": ["arn:aws:s3:::*"]
            },
            {
                "Effect": "Allow",
                "Action": ["s3:CreateBucket"],
                "Resource": ["arn:aws:s3:::${${aws:username}-test}"]
            },
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": ["arn:aws:s3:::${${aws:username}-test}"]
            }
        ]
    });

    create_and_attach_policy(env, policy_name, test_user, policy_document).await?;

    // Create S3 client for test user
    let test_client = env.create_s3_client(test_user, test_password);

    // Add a small delay to allow policy to propagate
    tokio::time::sleep(std::time::Duration::from_millis(500)).await;

    // Test nested variable resolution
    info!("Test: Nested variable resolution");

    // Create bucket with expected resolved name
    let expected_bucket = format!("{test_user}-test");

    // Attempt to create bucket with resolved name
    let create_result = test_client.create_bucket().bucket(&expected_bucket).send().await;

    // Verify bucket creation succeeds (nested variable resolved correctly)
    if let Err(e) = create_result {
        cleanup().await;
        return Err(format!("User should be able to create bucket with nested variable: {e}").into());
    }

    // Verify bucket creation fails with unresolved variable
    let unresolved_bucket = format!("${{}}-test {test_user}");
    let create_unresolved = test_client.create_bucket().bucket(&unresolved_bucket).send().await;

    if create_unresolved.is_ok() {
        cleanup().await;
        return Err("User should NOT be able to create bucket with unresolved variable".into());
    }

    // Cleanup
    info!("Cleaning up test resources");
    cleanup().await;

    info!("AWS policy variables nested test completed successfully");
    Ok(())
}
/// Implementation function for STS policy variables test with shared environment
|
||||
pub async fn test_aws_policy_variables_sts_impl_with_env(
|
||||
env: &PolicyTestEnvironment,
|
||||
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
|
||||
// Create test user for STS
|
||||
let test_user = "testuser-sts";
|
||||
let test_password = "testpassword123";
|
||||
let policy_name = "test-sts-policy";
|
||||
|
||||
// Create cleanup function
|
||||
let cleanup = || async {
|
||||
cleanup_user_and_policy(env, test_user, policy_name).await;
|
||||
};
|
||||
|
||||
// Create STS user
|
||||
create_sts_user(env, test_user, test_password).await?;
|
||||
|
||||
// Create policy with STS-compatible variables
|
||||
let policy_document = serde_json::json!({
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": ["s3:ListAllMyBuckets"],
|
||||
"Resource": ["arn:aws:s3:::*"]
|
||||
},
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": ["s3:CreateBucket"],
|
||||
"Resource": [format!("arn:aws:s3:::{}-sts-bucket", "${aws:username}")]
|
||||
},
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": ["s3:ListBucket", "s3:PutObject", "s3:GetObject"],
|
||||
"Resource": [format!("arn:aws:s3:::{}-sts-bucket/*", "${aws:username}")]
|
||||
}
|
||||
]
|
||||
});
|
||||
|
||||
create_and_attach_policy(env, policy_name, test_user, policy_document).await?;
|
||||
|
||||
// Create S3 client for test user
|
||||
let test_client = env.create_s3_client(test_user, test_password);
|
||||
|
||||
// Add a small delay to allow policy to propagate
|
||||
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
|
||||
|
||||
// Test: User should be able to create bucket matching STS pattern
|
||||
info!("Test: User creating bucket matching STS pattern");
|
||||
let bucket_name = format!("{test_user}-sts-bucket");
|
||||
let create_result = test_client.create_bucket().bucket(&bucket_name).send().await;
|
||||
if let Err(e) = create_result {
|
||||
cleanup().await;
|
||||
return Err(format!("User should be able to create STS bucket: {e}").into());
|
||||
}
|
||||
|
||||
// Test: User should be able to put object in STS bucket
|
||||
info!("Test: User putting object in STS bucket");
|
||||
let put_result = test_client
|
||||
.put_object()
|
||||
.bucket(&bucket_name)
|
||||
.key("test-sts-object.txt")
|
||||
.body(ByteStream::from_static(b"STS Test Object"))
|
||||
.send()
|
||||
.await;
|
||||
if let Err(e) = put_result {
|
||||
cleanup().await;
|
||||
return Err(format!("User should be able to put object in STS bucket: {e}").into());
|
||||
}
|
||||
|
||||
// Test: User should be able to get object from STS bucket
|
||||
info!("Test: User getting object from STS bucket");
|
||||
let get_result = test_client
|
||||
.get_object()
|
||||
.bucket(&bucket_name)
|
||||
.key("test-sts-object.txt")
|
||||
.send()
|
||||
.await;
|
||||
if let Err(e) = get_result {
|
||||
cleanup().await;
|
||||
return Err(format!("User should be able to get object from STS bucket: {e}").into());
|
||||
}
|
||||
|
||||
// Test: User should be able to list objects in STS bucket
|
||||
info!("Test: User listing objects in STS bucket");
|
||||
let list_result = test_client.list_objects_v2().bucket(&bucket_name).send().await;
|
||||
if let Err(e) = list_result {
|
||||
cleanup().await;
|
||||
return Err(format!("User should be able to list objects in STS bucket: {e}").into());
|
||||
}
|
||||
|
||||
// Cleanup
|
||||
info!("Cleaning up test resources");
|
||||
cleanup().await;
|
||||
|
||||
info!("AWS policy variables STS test completed successfully");
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Test AWS policy variables with deny scenarios
|
||||
#[tokio::test(flavor = "multi_thread")]
|
||||
#[serial]
|
||||
#[ignore = "Starts a rustfs server; enable when running full E2E"]
|
||||
pub async fn test_aws_policy_variables_deny() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
|
||||
test_aws_policy_variables_deny_impl().await
|
||||
}
|
||||
|
||||
/// Implementation function for deny policy variables test
|
||||
pub async fn test_aws_policy_variables_deny_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
|
||||
init_logging();
|
||||
info!("Starting AWS policy variables deny test");
|
||||
|
||||
let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;
|
||||
|
||||
test_aws_policy_variables_deny_impl_with_env(&env).await
|
||||
}
|
||||
|
||||
/// Implementation function for deny policy variables test with shared environment
|
||||
pub async fn test_aws_policy_variables_deny_impl_with_env(
|
||||
env: &PolicyTestEnvironment,
|
||||
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
|
||||
// Create test user
|
||||
let test_user = "testuser5";
|
||||
let test_password = "testpassword123";
|
||||
let policy_name = "test-deny-policy";
|
||||
|
||||
// Create cleanup function
|
||||
let cleanup = || async {
|
||||
cleanup_user_and_policy(env, test_user, policy_name).await;
|
||||
};
|
||||
|
||||
// Create user
|
||||
create_user(env, test_user, test_password).await?;
|
||||
|
||||
// Create policy with both allow and deny statements
|
||||
let policy_document = serde_json::json!({
|
||||
"Version": "2012-10-17",
|
||||
"Statement": [
|
||||
// Allow general access
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": ["s3:ListAllMyBuckets"],
|
||||
"Resource": ["arn:aws:s3:::*"]
|
||||
},
|
||||
// Allow creating buckets matching username pattern
|
||||
{
|
||||
"Effect": "Allow",
|
||||
"Action": ["s3:CreateBucket"],
|
||||
"Resource": [format!("arn:aws:s3:::{}-*", "${aws:username}")]
|
||||
},
|
||||
// Deny creating buckets with "private" in the name
|
||||
{
|
||||
"Effect": "Deny",
|
||||
"Action": ["s3:CreateBucket"],
|
||||
"Resource": ["arn:aws:s3:::*private*"]
|
||||
}
|
||||
]
|
||||
});
|
||||
|
||||
create_and_attach_policy(env, policy_name, test_user, policy_document).await?;
|
||||
|
||||
// Create S3 client for test user
|
||||
let test_client = env.create_s3_client(test_user, test_password);
|
||||
|
||||
// Add a small delay to allow policy to propagate
|
||||
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
|
||||
|
||||
// Test 1: User should be able to create bucket matching username pattern
|
||||
info!("Test 1: User creating bucket matching username pattern");
|
||||
let bucket_name = format!("{test_user}-test-bucket");
|
||||
let create_result = test_client.create_bucket().bucket(&bucket_name).send().await;
|
||||
if let Err(e) = create_result {
|
||||
cleanup().await;
|
||||
return Err(format!("User should be able to create bucket matching username pattern: {e}").into());
|
||||
}
|
||||
|
||||
// Test 2: User should NOT be able to create bucket with "private" in the name (deny rule)
|
||||
info!("Test 2: User attempting to create bucket with 'private' in name (should be denied)");
|
||||
let private_bucket_name = "private-test-bucket";
|
||||
let create_private_result = test_client.create_bucket().bucket(private_bucket_name).send().await;
|
||||
if create_private_result.is_ok() {
|
||||
cleanup().await;
|
||||
return Err("User should NOT be able to create bucket with 'private' in name due to deny rule".into());
|
||||
}
|
||||
|
||||
// Cleanup
|
||||
info!("Cleaning up test resources");
|
||||
cleanup().await;
|
||||
|
||||
info!("AWS policy variables deny test completed successfully");
|
||||
Ok(())
|
||||
}
|
||||
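These tests exercise IAM policy variables: `${aws:username}` in a Resource ARN is resolved against the requesting user at evaluation time, so a single policy document yields per-user bucket namespaces. A minimal sketch of the substitution step (illustrative only; the real evaluator in the rustfs-policy crate resolves variables from the full request context and supports more keys than shown here):

```rust
/// Illustrative only: expand `${aws:username}` before ARN matching.
fn expand_policy_variables(resource: &str, username: &str) -> String {
    resource.replace("${aws:username}", username)
}

fn main() {
    let arn = expand_policy_variables("arn:aws:s3:::${aws:username}-sts-bucket/*", "testuser5");
    assert_eq!(arn, "arn:aws:s3:::testuser5-sts-bucket/*");
}
```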
crates/e2e_test/src/policy/test_env.rs (new file, 100 lines)
@@ -0,0 +1,100 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

//! Custom test environment for policy variables tests
//!
//! This module provides a custom test environment that doesn't automatically
//! stop servers when destroyed, addressing the server stopping issue.

use aws_sdk_s3::Client;
use aws_sdk_s3::config::{Config, Credentials, Region};
use std::net::TcpStream;
use std::time::Duration;
use tokio::time::sleep;
use tracing::{info, warn};

// Default credentials
const DEFAULT_ACCESS_KEY: &str = "rustfsadmin";
const DEFAULT_SECRET_KEY: &str = "rustfsadmin";

/// Custom test environment that doesn't automatically stop servers
pub struct PolicyTestEnvironment {
    pub temp_dir: String,
    pub address: String,
    pub url: String,
    pub access_key: String,
    pub secret_key: String,
}

impl PolicyTestEnvironment {
    /// Create a new test environment with specific address
    /// This environment won't stop any server when dropped
    pub async fn with_address(address: &str) -> Result<Self, Box<dyn std::error::Error + Send + Sync>> {
        let temp_dir = format!("/tmp/rustfs_policy_test_{}", uuid::Uuid::new_v4());
        tokio::fs::create_dir_all(&temp_dir).await?;

        let url = format!("http://{address}");

        Ok(Self {
            temp_dir,
            address: address.to_string(),
            url,
            access_key: DEFAULT_ACCESS_KEY.to_string(),
            secret_key: DEFAULT_SECRET_KEY.to_string(),
        })
    }

    /// Create an AWS S3 client configured for this RustFS instance
    pub fn create_s3_client(&self, access_key: &str, secret_key: &str) -> Client {
        let credentials = Credentials::new(access_key, secret_key, None, None, "policy-test");
        let config = Config::builder()
            .credentials_provider(credentials)
            .region(Region::new("us-east-1"))
            .endpoint_url(&self.url)
            .force_path_style(true)
            .behavior_version_latest()
            .build();
        Client::from_conf(config)
    }

    /// Wait for RustFS server to be ready by checking TCP connectivity
    pub async fn wait_for_server_ready(&self) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
        info!("Waiting for RustFS server to be ready on {}", self.address);

        for i in 0..30 {
            if TcpStream::connect(&self.address).is_ok() {
                info!("✅ RustFS server is ready after {} attempts", i + 1);
                return Ok(());
            }

            if i == 29 {
                return Err("RustFS server failed to become ready within 30 seconds".into());
            }

            sleep(Duration::from_secs(1)).await;
        }

        Ok(())
    }
}

// Implement Drop trait that doesn't stop servers
impl Drop for PolicyTestEnvironment {
    fn drop(&mut self) {
        // Clean up temp directory only, don't stop any server
        if let Err(e) = std::fs::remove_dir_all(&self.temp_dir) {
            warn!("Failed to clean up temp directory {}: {}", self.temp_dir, e);
        }
    }
}
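Taken together, the three methods above give a small lifecycle for attaching tests to an externally managed server. A minimal usage sketch (assumes a rustfs instance is already listening on the address, the default admin credentials are valid, and a recent aws-sdk-s3 is in scope):

```rust
async fn example() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    // Connect to an externally managed server; Drop only removes temp_dir.
    let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;
    env.wait_for_server_ready().await?;

    // Any IAM user works here; the admin pair is just the built-in default.
    let client = env.create_s3_client(&env.access_key, &env.secret_key);
    let resp = client.list_buckets().send().await?;
    println!("visible buckets: {}", resp.buckets().len());
    Ok(())
}
```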
crates/e2e_test/src/policy/test_runner.rs (new file, 247 lines)
@@ -0,0 +1,247 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use crate::common::init_logging;
use crate::policy::test_env::PolicyTestEnvironment;
use serial_test::serial;
use std::time::Instant;
use tokio::time::{Duration, sleep};
use tracing::{error, info};

/// Core test categories
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum TestCategory {
    SingleValue,
    MultiValue,
    Concatenation,
    Nested,
    DenyScenarios,
}

impl TestCategory {}

/// Test case definition
#[derive(Debug, Clone)]
pub struct TestDefinition {
    pub name: String,
    #[allow(dead_code)]
    pub category: TestCategory,
    pub is_critical: bool,
}

impl TestDefinition {
    pub fn new(name: impl Into<String>, category: TestCategory, is_critical: bool) -> Self {
        Self {
            name: name.into(),
            category,
            is_critical,
        }
    }
}

/// Test result
#[derive(Debug, Clone)]
pub struct TestResult {
    pub test_name: String,
    pub success: bool,
    pub error_message: Option<String>,
}

impl TestResult {
    pub fn success(test_name: String) -> Self {
        Self {
            test_name,
            success: true,
            error_message: None,
        }
    }

    pub fn failure(test_name: String, error: String) -> Self {
        Self {
            test_name,
            success: false,
            error_message: Some(error),
        }
    }
}

/// Test suite configuration
#[derive(Debug, Clone, Default)]
pub struct TestSuiteConfig {
    pub include_critical_only: bool,
}

/// Policy test suite
pub struct PolicyTestSuite {
    tests: Vec<TestDefinition>,
    config: TestSuiteConfig,
}

impl PolicyTestSuite {
    /// Create default test suite
    pub fn new() -> Self {
        let tests = vec![
            TestDefinition::new("test_aws_policy_variables_single_value", TestCategory::SingleValue, true),
            TestDefinition::new("test_aws_policy_variables_multi_value", TestCategory::MultiValue, true),
            TestDefinition::new("test_aws_policy_variables_concatenation", TestCategory::Concatenation, true),
            TestDefinition::new("test_aws_policy_variables_nested", TestCategory::Nested, true),
            TestDefinition::new("test_aws_policy_variables_deny", TestCategory::DenyScenarios, true),
            TestDefinition::new("test_aws_policy_variables_sts", TestCategory::SingleValue, true),
        ];

        Self {
            tests,
            config: TestSuiteConfig::default(),
        }
    }

    /// Configure test suite
    pub fn with_config(mut self, config: TestSuiteConfig) -> Self {
        self.config = config;
        self
    }

    /// Run test suite
    pub async fn run_test_suite(&self) -> Vec<TestResult> {
        init_logging();
        info!("Starting Policy Variables test suite");

        let start_time = Instant::now();
        let mut results = Vec::new();

        // Create test environment
        let env = match PolicyTestEnvironment::with_address("127.0.0.1:9000").await {
            Ok(env) => env,
            Err(e) => {
                error!("Failed to create test environment: {}", e);
                return vec![TestResult::failure("env_creation".into(), e.to_string())];
            }
        };

        // Wait for server to be ready
        if env.wait_for_server_ready().await.is_err() {
            error!("Server is not ready");
            return vec![TestResult::failure("server_check".into(), "Server not ready".into())];
        }

        // Filter tests
        let tests_to_run: Vec<&TestDefinition> = self
            .tests
            .iter()
            .filter(|test| !self.config.include_critical_only || test.is_critical)
            .collect();

        info!("Scheduled {} tests", tests_to_run.len());

        // Run tests
        for (i, test_def) in tests_to_run.iter().enumerate() {
            info!("Running test {}/{}: {}", i + 1, tests_to_run.len(), test_def.name);
            let test_start = Instant::now();

            let result = self.run_single_test(test_def, &env).await;
            let test_duration = test_start.elapsed();

            match result {
                Ok(_) => {
                    info!("Test passed: {} ({:.2}s)", test_def.name, test_duration.as_secs_f64());
                    results.push(TestResult::success(test_def.name.clone()));
                }
                Err(e) => {
                    error!("Test failed: {} ({:.2}s): {}", test_def.name, test_duration.as_secs_f64(), e);
                    results.push(TestResult::failure(test_def.name.clone(), e.to_string()));
                }
            }

            // Delay between tests to avoid resource conflicts
            if i < tests_to_run.len() - 1 {
                sleep(Duration::from_secs(2)).await;
            }
        }

        // Print summary
        self.print_summary(&results, start_time.elapsed());

        results
    }

    /// Run a single test
    async fn run_single_test(
        &self,
        test_def: &TestDefinition,
        env: &PolicyTestEnvironment,
    ) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
        match test_def.name.as_str() {
            "test_aws_policy_variables_single_value" => {
                super::policy_variables_test::test_aws_policy_variables_single_value_impl_with_env(env).await
            }
            "test_aws_policy_variables_multi_value" => {
                super::policy_variables_test::test_aws_policy_variables_multi_value_impl_with_env(env).await
            }
            "test_aws_policy_variables_concatenation" => {
                super::policy_variables_test::test_aws_policy_variables_concatenation_impl_with_env(env).await
            }
            "test_aws_policy_variables_nested" => {
                super::policy_variables_test::test_aws_policy_variables_nested_impl_with_env(env).await
            }
            "test_aws_policy_variables_deny" => {
                super::policy_variables_test::test_aws_policy_variables_deny_impl_with_env(env).await
            }
            "test_aws_policy_variables_sts" => {
                super::policy_variables_test::test_aws_policy_variables_sts_impl_with_env(env).await
            }
            _ => Err(format!("Test {} not implemented", test_def.name).into()),
        }
    }

    /// Print test summary
    fn print_summary(&self, results: &[TestResult], total_duration: Duration) {
        info!("=== Test Suite Summary ===");
        info!("Total duration: {:.2}s", total_duration.as_secs_f64());
        info!("Total tests: {}", results.len());

        let passed = results.iter().filter(|r| r.success).count();
        let failed = results.len() - passed;
        let success_rate = (passed as f64 / results.len() as f64) * 100.0;

        info!("Passed: {} | Failed: {}", passed, failed);
        info!("Success rate: {:.1}%", success_rate);

        if failed > 0 {
            error!("Failed tests:");
            for result in results.iter().filter(|r| !r.success) {
                error!("  - {}: {}", result.test_name, result.error_message.as_ref().unwrap());
            }
        }
    }
}

/// Test suite
#[tokio::test]
#[serial]
#[ignore = "Connects to existing rustfs server"]
async fn test_policy_critical_suite() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
    let config = TestSuiteConfig {
        include_critical_only: true,
    };
    let suite = PolicyTestSuite::new().with_config(config);
    let results = suite.run_test_suite().await;

    let failed = results.iter().filter(|r| !r.success).count();
    if failed > 0 {
        return Err(format!("Critical tests failed: {failed} failures").into());
    }

    info!("All critical tests passed");
    Ok(())
}
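The runner hard-codes its own test registry, so driving it from elsewhere takes only a few lines. A minimal sketch (assuming, as the tests above do, a rustfs server already listening on 127.0.0.1:9000):

```rust
// Run the full suite (the default config includes every registered test)
// and report a one-line summary; PolicyTestSuite and TestResult are the
// types defined above.
async fn run_all_policy_tests() {
    let suite = PolicyTestSuite::new();
    let results = suite.run_test_suite().await;
    let passed = results.iter().filter(|r| r.success).count();
    println!("{passed}/{} policy tests passed", results.len());
}
```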
@@ -127,12 +127,12 @@ async fn test_get_deleted_object_returns_nosuchkey() -> Result<(), Box<dyn std::
            info!("Service error code: {:?}", s3_err.meta().code());

            // The error should be NoSuchKey
            assert!(s3_err.is_no_such_key(), "Error should be NoSuchKey, got: {:?}", s3_err);
            assert!(s3_err.is_no_such_key(), "Error should be NoSuchKey, got: {s3_err:?}");

            info!("✅ Test passed: GetObject on deleted object correctly returns NoSuchKey");
        }
        other_err => {
            panic!("Expected ServiceError with NoSuchKey, but got: {:?}", other_err);
            panic!("Expected ServiceError with NoSuchKey, but got: {other_err:?}");
        }
    }

@@ -182,13 +182,12 @@ async fn test_head_deleted_object_returns_nosuchkey() -> Result<(), Box<dyn std:
            let s3_err = service_err.into_err();
            assert!(
                s3_err.meta().code() == Some("NoSuchKey") || s3_err.meta().code() == Some("NotFound"),
                "Error should be NoSuchKey or NotFound, got: {:?}",
                s3_err
                "Error should be NoSuchKey or NotFound, got: {s3_err:?}"
            );
            info!("✅ HeadObject correctly returns NoSuchKey/NotFound");
        }
        other_err => {
            panic!("Expected ServiceError but got: {:?}", other_err);
            panic!("Expected ServiceError but got: {other_err:?}");
        }
    }

@@ -220,11 +219,11 @@ async fn test_get_nonexistent_object_returns_nosuchkey() -> Result<(), Box<dyn s
    match get_result.unwrap_err() {
        SdkError::ServiceError(service_err) => {
            let s3_err = service_err.into_err();
            assert!(s3_err.is_no_such_key(), "Error should be NoSuchKey, got: {:?}", s3_err);
            assert!(s3_err.is_no_such_key(), "Error should be NoSuchKey, got: {s3_err:?}");
            info!("✅ GetObject correctly returns NoSuchKey for non-existent object");
        }
        other_err => {
            panic!("Expected ServiceError with NoSuchKey, but got: {:?}", other_err);
            panic!("Expected ServiceError with NoSuchKey, but got: {other_err:?}");
        }
    }

@@ -266,15 +265,15 @@ async fn test_multiple_gets_deleted_object() -> Result<(), Box<dyn std::error::E
        info!("Attempt {} to get deleted object", i);
        let get_result = client.get_object().bucket(BUCKET).key(key).send().await;

        assert!(get_result.is_err(), "Attempt {}: should return error", i);
        assert!(get_result.is_err(), "Attempt {i}: should return error");

        match get_result.unwrap_err() {
            SdkError::ServiceError(service_err) => {
                let s3_err = service_err.into_err();
                assert!(s3_err.is_no_such_key(), "Attempt {}: Error should be NoSuchKey, got: {:?}", i, s3_err);
                assert!(s3_err.is_no_such_key(), "Attempt {i}: Error should be NoSuchKey, got: {s3_err:?}");
            }
            other_err => {
                panic!("Attempt {}: Expected ServiceError but got: {:?}", i, other_err);
                panic!("Attempt {i}: Expected ServiceError but got: {other_err:?}");
            }
        }
    }

@@ -256,7 +256,7 @@ mod tests {

        let output = result.unwrap();
        let body_bytes = output.body.collect().await.unwrap().into_bytes();
        assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for key '{}'", key);
        assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for key '{key}'");

        info!("✅ PUT/GET succeeded for key: {}", key);
    }
@@ -472,7 +472,7 @@ mod tests {
        info!("Testing COPY from '{}' to '{}'", src_key, dest_key);

        // COPY object
        let copy_source = format!("{}/{}", bucket, src_key);
        let copy_source = format!("{bucket}/{src_key}");
        let result = client
            .copy_object()
            .bucket(bucket)
@@ -543,7 +543,7 @@ mod tests {

        let output = result.unwrap();
        let body_bytes = output.body.collect().await.unwrap().into_bytes();
        assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for Unicode key '{}'", key);
        assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for Unicode key '{key}'");

        info!("✅ PUT/GET succeeded for Unicode key: {}", key);
    }
@@ -610,7 +610,7 @@ mod tests {

        let output = result.unwrap();
        let body_bytes = output.body.collect().await.unwrap().into_bytes();
        assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for key '{}'", key);
        assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for key '{key}'");

        info!("✅ PUT/GET succeeded for key: {}", key);
    }
@@ -658,7 +658,7 @@ mod tests {
        // Note: The validation happens on the server side, so we expect an error
        // For null byte, newline, and carriage return
        if key.contains('\0') || key.contains('\n') || key.contains('\r') {
            assert!(result.is_err(), "Control character should be rejected for key: {:?}", key);
            assert!(result.is_err(), "Control character should be rejected for key: {key:?}");
            if let Err(e) = result {
                info!("✅ Control character correctly rejected: {:?}", e);
            }
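All of the hunks above are one mechanical cleanup: positional `format!`-style arguments are replaced with identifiers captured inline in the format string, a feature available since Rust 2021. Side by side:

```rust
fn main() {
    let key = "photos/cat.png";
    let err = std::io::Error::from(std::io::ErrorKind::NotFound);
    // Before: positional arguments trailing the format string.
    println!("Content mismatch for key '{}': {:?}", key, err);
    // After: identifiers captured directly inside the braces.
    println!("Content mismatch for key '{key}': {err:?}");
}
```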
@@ -108,12 +108,6 @@ google-cloud-auth = { workspace = true }
aws-config = { workspace = true }
faster-hex = { workspace = true }

[target.'cfg(not(windows))'.dependencies]
nix = { workspace = true }

[target.'cfg(windows)'.dependencies]
winapi = { workspace = true }

[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }

@@ -1,4 +1,4 @@
#!/bin/bash
#!/usr/bin/env bash
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");

@@ -23,7 +23,7 @@ use crate::{
};

use crate::data_usage::load_data_usage_cache;
use rustfs_common::{globals::GLOBAL_Local_Node_Name, heal_channel::DriveState};
use rustfs_common::{globals::GLOBAL_LOCAL_NODE_NAME, heal_channel::DriveState};
use rustfs_madmin::{
    BackendDisks, Disk, ErasureSetInfo, ITEM_INITIALIZING, ITEM_OFFLINE, ITEM_ONLINE, InfoMessage, ServerProperties,
};
@@ -128,7 +128,7 @@ async fn is_server_resolvable(endpoint: &Endpoint) -> Result<()> {
}

pub async fn get_local_server_property() -> ServerProperties {
    let addr = GLOBAL_Local_Node_Name.read().await.clone();
    let addr = GLOBAL_LOCAL_NODE_NAME.read().await.clone();
    let mut pool_numbers = HashSet::new();
    let mut network = HashMap::new();

@@ -22,7 +22,7 @@ pub struct PolicySys {}
impl PolicySys {
    pub async fn is_allowed(args: &BucketPolicyArgs<'_>) -> bool {
        match Self::get(args.bucket).await {
            Ok(cfg) => return cfg.is_allowed(args),
            Ok(cfg) => return cfg.is_allowed(args).await,
            Err(err) => {
                if err != StorageError::ConfigNotFound {
                    info!("config get err {:?}", err);

@@ -18,19 +18,17 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]

use crate::client::{
    api_error_response::http_resp_to_error_response,
    api_get_options::GetObjectOptions,
    transition_api::{ObjectInfo, ReaderImpl, RequestMetadata, TransitionClient},
};
use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use rustfs_config::MAX_S3_CLIENT_RESPONSE_SIZE;
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
use s3s::dto::Owner;
use std::collections::HashMap;
use std::io::Cursor;
use tokio::io::BufReader;

use crate::client::{
    api_error_response::{err_invalid_argument, http_resp_to_error_response},
    api_get_options::GetObjectOptions,
    transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;

#[derive(Clone, Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Grantee {
@@ -90,7 +88,12 @@ impl TransitionClient {
            return Err(std::io::Error::other(http_resp_to_error_response(&resp, b, bucket_name, object_name)));
        }

        let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
        let b = resp
            .body_mut()
            .store_all_limited(MAX_S3_CLIENT_RESPONSE_SIZE)
            .await
            .unwrap()
            .to_vec();
        let mut res = match quick_xml::de::from_str::<AccessControlPolicy>(&String::from_utf8(b).unwrap()) {
            Ok(result) => result,
            Err(err) => {
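This is the first of several identical changes in this set: each `store_all_unlimited()` call on a response body becomes `store_all_limited(MAX_S3_CLIENT_RESPONSE_SIZE)`, so a misbehaving remote tier can no longer make the client buffer an unbounded response in memory. The helper names come from the diff itself; as a sketch of the underlying idea in plain `std::io`:

```rust
use std::io::Read;

/// Illustrative only: read at most `limit` bytes, failing instead of
/// buffering an arbitrarily large (possibly hostile) response.
fn read_limited<R: Read>(src: R, limit: u64) -> std::io::Result<Vec<u8>> {
    let mut buf = Vec::new();
    // Take one extra byte so we can tell "exactly limit" from "over limit".
    let n = src.take(limit + 1).read_to_end(&mut buf)? as u64;
    if n > limit {
        return Err(std::io::Error::other("response exceeded size limit"));
    }
    Ok(buf)
}
```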
@@ -21,24 +21,17 @@
use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use std::collections::HashMap;
use std::io::Cursor;
use time::OffsetDateTime;
use tokio::io::BufReader;

use crate::client::constants::{GET_OBJECT_ATTRIBUTES_MAX_PARTS, GET_OBJECT_ATTRIBUTES_TAGS, ISO8601_DATEFORMAT};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
use s3s::header::{
    X_AMZ_DELETE_MARKER, X_AMZ_MAX_PARTS, X_AMZ_METADATA_DIRECTIVE, X_AMZ_OBJECT_ATTRIBUTES, X_AMZ_PART_NUMBER_MARKER,
    X_AMZ_REQUEST_CHARGED, X_AMZ_RESTORE, X_AMZ_VERSION_ID,
};
use s3s::{Body, dto::Owner};

use crate::client::{
    api_error_response::err_invalid_argument,
    api_get_object_acl::AccessControlPolicy,
    api_get_options::GetObjectOptions,
    transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
    transition_api::{ReaderImpl, RequestMetadata, TransitionClient},
};
use rustfs_config::MAX_S3_CLIENT_RESPONSE_SIZE;
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
use s3s::Body;
use s3s::header::{X_AMZ_MAX_PARTS, X_AMZ_OBJECT_ATTRIBUTES, X_AMZ_PART_NUMBER_MARKER, X_AMZ_VERSION_ID};

pub struct ObjectAttributesOptions {
    pub max_parts: i64,
@@ -143,7 +136,12 @@ impl ObjectAttributes {
        self.last_modified = mod_time;
        self.version_id = h.get(X_AMZ_VERSION_ID).unwrap().to_str().unwrap().to_string();

        let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
        let b = resp
            .body_mut()
            .store_all_limited(MAX_S3_CLIENT_RESPONSE_SIZE)
            .await
            .unwrap()
            .to_vec();
        let mut response = match quick_xml::de::from_str::<ObjectAttributesResponse>(&String::from_utf8(b).unwrap()) {
            Ok(result) => result,
            Err(err) => {
@@ -224,7 +222,12 @@ impl TransitionClient {
        }

        if resp.status() != http::StatusCode::OK {
            let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
            let b = resp
                .body_mut()
                .store_all_limited(MAX_S3_CLIENT_RESPONSE_SIZE)
                .await
                .unwrap()
                .to_vec();
            let err_body = String::from_utf8(b).unwrap();
            let mut er = match quick_xml::de::from_str::<AccessControlPolicy>(&err_body) {
                Ok(result) => result,

@@ -18,10 +18,6 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]

use bytes::Bytes;
use http::{HeaderMap, StatusCode};
use std::collections::HashMap;

use crate::client::{
    api_error_response::http_resp_to_error_response,
    api_s3_datatypes::{
@@ -31,7 +27,11 @@ use crate::client::{
    transition_api::{ReaderImpl, RequestMetadata, TransitionClient},
};
use crate::store_api::BucketInfo;
use bytes::Bytes;
use http::{HeaderMap, StatusCode};
use rustfs_config::MAX_S3_CLIENT_RESPONSE_SIZE;
use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
use std::collections::HashMap;

impl TransitionClient {
    pub fn list_buckets(&self) -> Result<Vec<BucketInfo>, std::io::Error> {
@@ -102,7 +102,12 @@ impl TransitionClient {
    }

    //let mut list_bucket_result = ListBucketV2Result::default();
    let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
    let b = resp
        .body_mut()
        .store_all_limited(MAX_S3_CLIENT_RESPONSE_SIZE)
        .await
        .unwrap()
        .to_vec();
    let mut list_bucket_result = match quick_xml::de::from_str::<ListBucketV2Result>(&String::from_utf8(b).unwrap()) {
        Ok(result) => result,
        Err(err) => {

@@ -18,23 +18,19 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]

use http::Request;
use hyper::StatusCode;
use hyper::body::Incoming;
use std::{collections::HashMap, sync::Arc};
use tracing::warn;
use tracing::{debug, error, info};

use super::constants::UNSIGNED_PAYLOAD;
use super::credentials::SignatureType;
use crate::client::{
    api_error_response::{http_resp_to_error_response, to_error_response},
    api_error_response::http_resp_to_error_response,
    transition_api::{CreateBucketConfiguration, LocationConstraint, TransitionClient},
};
use http::Request;
use hyper::StatusCode;
use rustfs_config::MAX_S3_CLIENT_RESPONSE_SIZE;
use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
use s3s::Body;
use s3s::S3ErrorCode;

use super::constants::UNSIGNED_PAYLOAD;
use super::credentials::SignatureType;
use std::collections::HashMap;

#[derive(Debug, Clone)]
pub struct BucketLocationCache {
@@ -212,7 +208,12 @@ async fn process_bucket_location_response(
    }
    //}

    let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
    let b = resp
        .body_mut()
        .store_all_limited(MAX_S3_CLIENT_RESPONSE_SIZE)
        .await
        .unwrap()
        .to_vec();
    let mut location = "".to_string();
    if tier_type == "huaweicloud" {
        let d = quick_xml::de::from_str::<CreateBucketConfiguration>(&String::from_utf8(b).unwrap()).unwrap();

@@ -18,6 +18,20 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]

use crate::client::bucket_cache::BucketLocationCache;
use crate::client::{
    api_error_response::{err_invalid_argument, http_resp_to_error_response, to_error_response},
    api_get_options::GetObjectOptions,
    api_put_object::PutObjectOptions,
    api_put_object_multipart::UploadPartParams,
    api_s3_datatypes::{
        CompleteMultipartUpload, CompletePart, ListBucketResult, ListBucketV2Result, ListMultipartUploadsResult,
        ListObjectPartsResult, ObjectPart,
    },
    constants::{UNSIGNED_PAYLOAD, UNSIGNED_PAYLOAD_TRAILER},
    credentials::{CredContext, Credentials, SignatureType, Static},
};
use crate::{client::checksum::ChecksumMode, store_api::GetObjectReader};
use bytes::Bytes;
use futures::{Future, StreamExt};
use http::{HeaderMap, HeaderName};
@@ -30,7 +44,18 @@ use hyper_util::{client::legacy::Client, client::legacy::connect::HttpConnector,
use md5::Digest;
use md5::Md5;
use rand::Rng;
use rustfs_config::MAX_S3_CLIENT_RESPONSE_SIZE;
use rustfs_rio::HashReader;
use rustfs_utils::HashAlgorithm;
use rustfs_utils::{
    net::get_endpoint_url,
    retry::{
        DEFAULT_RETRY_CAP, DEFAULT_RETRY_UNIT, MAX_JITTER, MAX_RETRY, RetryTimer, is_http_status_retryable, is_s3code_retryable,
    },
};
use s3s::S3ErrorCode;
use s3s::dto::ReplicationStatus;
use s3s::{Body, dto::Owner};
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use std::io::Cursor;
@@ -48,31 +73,6 @@ use tracing::{debug, error, warn};
use url::{Url, form_urlencoded};
use uuid::Uuid;

use crate::client::bucket_cache::BucketLocationCache;
use crate::client::{
    api_error_response::{err_invalid_argument, http_resp_to_error_response, to_error_response},
    api_get_options::GetObjectOptions,
    api_put_object::PutObjectOptions,
    api_put_object_multipart::UploadPartParams,
    api_s3_datatypes::{
        CompleteMultipartUpload, CompletePart, ListBucketResult, ListBucketV2Result, ListMultipartUploadsResult,
        ListObjectPartsResult, ObjectPart,
    },
    constants::{UNSIGNED_PAYLOAD, UNSIGNED_PAYLOAD_TRAILER},
    credentials::{CredContext, Credentials, SignatureType, Static},
};
use crate::{client::checksum::ChecksumMode, store_api::GetObjectReader};
use rustfs_rio::HashReader;
use rustfs_utils::{
    net::get_endpoint_url,
    retry::{
        DEFAULT_RETRY_CAP, DEFAULT_RETRY_UNIT, MAX_JITTER, MAX_RETRY, RetryTimer, is_http_status_retryable, is_s3code_retryable,
    },
};
use s3s::S3ErrorCode;
use s3s::dto::ReplicationStatus;
use s3s::{Body, dto::Owner};

const C_USER_AGENT: &str = "RustFS (linux; x86)";

const SUCCESS_STATUS: [StatusCode; 3] = [StatusCode::OK, StatusCode::NO_CONTENT, StatusCode::PARTIAL_CONTENT];
@@ -291,7 +291,12 @@ impl TransitionClient {
        //if self.is_trace_enabled && !(self.trace_errors_only && resp.status() == StatusCode::OK) {
        if resp.status() != StatusCode::OK {
            //self.dump_http(&cloned_req, &resp)?;
            let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
            let b = resp
                .body_mut()
                .store_all_limited(MAX_S3_CLIENT_RESPONSE_SIZE)
                .await
                .unwrap()
                .to_vec();
            warn!("err_body: {}", String::from_utf8(b).unwrap());
        }

@@ -334,7 +339,12 @@ impl TransitionClient {
        }
    }

    let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
    let b = resp
        .body_mut()
        .store_all_limited(MAX_S3_CLIENT_RESPONSE_SIZE)
        .await
        .unwrap()
        .to_vec();
    let mut err_response = http_resp_to_error_response(&resp, b.clone(), &metadata.bucket_name, &metadata.object_name);
    err_response.message = format!("remote tier error: {}", err_response.message);

@@ -14,7 +14,7 @@

use crate::config::{KV, KVS};
use rustfs_config::{
    COMMENT_KEY, DEFAULT_DIR, DEFAULT_LIMIT, ENABLE_KEY, EnableState, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD,
    COMMENT_KEY, DEFAULT_LIMIT, ENABLE_KEY, EVENT_DEFAULT_DIR, EnableState, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD,
    MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN,
    WEBHOOK_BATCH_SIZE, WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_HTTP_TIMEOUT, WEBHOOK_MAX_RETRY,
    WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT, WEBHOOK_RETRY_INTERVAL,
@@ -63,7 +63,7 @@ pub static DEFAULT_AUDIT_WEBHOOK_KVS: LazyLock<KVS> = LazyLock::new(|| {
    },
    KV {
        key: WEBHOOK_QUEUE_DIR.to_owned(),
        value: DEFAULT_DIR.to_owned(),
        value: EVENT_DEFAULT_DIR.to_owned(),
        hidden_if_empty: false,
    },
    KV {
@@ -131,7 +131,7 @@ pub static DEFAULT_AUDIT_MQTT_KVS: LazyLock<KVS> = LazyLock::new(|| {
    },
    KV {
        key: MQTT_QUEUE_DIR.to_owned(),
        value: DEFAULT_DIR.to_owned(),
        value: EVENT_DEFAULT_DIR.to_owned(),
        hidden_if_empty: false,
    },
    KV {

@@ -14,7 +14,7 @@

use crate::config::{KV, KVS};
use rustfs_config::{
    COMMENT_KEY, DEFAULT_DIR, DEFAULT_LIMIT, ENABLE_KEY, EnableState, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD,
    COMMENT_KEY, DEFAULT_LIMIT, ENABLE_KEY, EVENT_DEFAULT_DIR, EnableState, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD,
    MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN,
    WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
};
@@ -47,7 +47,7 @@ pub static DEFAULT_NOTIFY_WEBHOOK_KVS: LazyLock<KVS> = LazyLock::new(|| {
    },
    KV {
        key: WEBHOOK_QUEUE_DIR.to_owned(),
        value: DEFAULT_DIR.to_owned(),
        value: EVENT_DEFAULT_DIR.to_owned(),
        hidden_if_empty: false,
    },
    KV {
@@ -114,7 +114,7 @@ pub static DEFAULT_NOTIFY_MQTT_KVS: LazyLock<KVS> = LazyLock::new(|| {
    },
    KV {
        key: MQTT_QUEUE_DIR.to_owned(),
        value: DEFAULT_DIR.to_owned(),
        value: EVENT_DEFAULT_DIR.to_owned(),
        hidden_if_empty: false,
    },
    KV {

@@ -16,7 +16,6 @@
use std::hash::{Hash, Hasher};
use std::io::{self};
use std::path::PathBuf;
use tracing::error;

pub type Error = DiskError;
pub type Result<T> = core::result::Result<T, Error>;

@@ -20,7 +20,7 @@ use crate::{
};
use chrono::Utc;
use rustfs_common::{
    globals::{GLOBAL_Local_Node_Name, GLOBAL_Rustfs_Addr},
    globals::{GLOBAL_LOCAL_NODE_NAME, GLOBAL_RUSTFS_ADDR},
    heal_channel::DriveState,
    metrics::global_metrics,
};
@@ -86,7 +86,7 @@ pub async fn collect_local_metrics(types: MetricType, opts: &CollectMetricsOpts)
        return real_time_metrics;
    }

    let mut by_host_name = GLOBAL_Rustfs_Addr.read().await.clone();
    let mut by_host_name = GLOBAL_RUSTFS_ADDR.read().await.clone();
    if !opts.hosts.is_empty() {
        let server = get_local_server_property().await;
        if opts.hosts.contains(&server.endpoint) {
@@ -95,7 +95,7 @@ pub async fn collect_local_metrics(types: MetricType, opts: &CollectMetricsOpts)
            return real_time_metrics;
        }
    }
    let local_node_name = GLOBAL_Local_Node_Name.read().await.clone();
    let local_node_name = GLOBAL_LOCAL_NODE_NAME.read().await.clone();
    if by_host_name.starts_with(":") && !local_node_name.starts_with(":") {
        by_host_name = local_node_name;
    }

@@ -40,7 +40,7 @@ use futures::future::join_all;
use http::HeaderMap;
use rustfs_common::heal_channel::HealOpts;
use rustfs_common::{
    globals::GLOBAL_Local_Node_Name,
    globals::GLOBAL_LOCAL_NODE_NAME,
    heal_channel::{DriveState, HealItemType},
};
use rustfs_filemeta::FileInfo;
@@ -170,7 +170,7 @@ impl Sets {

        let set_disks = SetDisks::new(
            fast_lock_manager.clone(),
            GLOBAL_Local_Node_Name.read().await.to_string(),
            GLOBAL_LOCAL_NODE_NAME.read().await.to_string(),
            Arc::new(RwLock::new(set_drive)),
            set_drive_count,
            parity_count,

@@ -55,7 +55,7 @@ use futures::future::join_all;
use http::HeaderMap;
use lazy_static::lazy_static;
use rand::Rng as _;
use rustfs_common::globals::{GLOBAL_Local_Node_Name, GLOBAL_Rustfs_Host, GLOBAL_Rustfs_Port};
use rustfs_common::globals::{GLOBAL_LOCAL_NODE_NAME, GLOBAL_RUSTFS_HOST, GLOBAL_RUSTFS_PORT};
use rustfs_common::heal_channel::{HealItemType, HealOpts};
use rustfs_filemeta::FileInfo;
use rustfs_madmin::heal_commands::HealResultItem;
@@ -127,11 +127,11 @@ impl ECStore {
    info!("ECStore new address: {}", address.to_string());
    let mut host = address.ip().to_string();
    if host.is_empty() {
        host = GLOBAL_Rustfs_Host.read().await.to_string()
        host = GLOBAL_RUSTFS_HOST.read().await.to_string()
    }
    let mut port = address.port().to_string();
    if port.is_empty() {
        port = GLOBAL_Rustfs_Port.read().await.to_string()
        port = GLOBAL_RUSTFS_PORT.read().await.to_string()
    }
    info!("ECStore new host: {}, port: {}", host, port);
    init_local_peer(&endpoint_pools, &host, &port).await;
@@ -2329,15 +2329,15 @@ async fn init_local_peer(endpoint_pools: &EndpointServerPools, host: &String, po

    if peer_set.is_empty() {
        if !host.is_empty() {
            *GLOBAL_Local_Node_Name.write().await = format!("{host}:{port}");
            *GLOBAL_LOCAL_NODE_NAME.write().await = format!("{host}:{port}");
            return;
        }

        *GLOBAL_Local_Node_Name.write().await = format!("127.0.0.1:{port}");
        *GLOBAL_LOCAL_NODE_NAME.write().await = format!("127.0.0.1:{port}");
        return;
    }

    *GLOBAL_Local_Node_Name.write().await = peer_set[0].clone();
    *GLOBAL_LOCAL_NODE_NAME.write().await = peer_set[0].clone();
}

pub fn is_valid_object_prefix(_object: &str) -> bool {

@@ -831,10 +831,16 @@ impl<T: Clone + Debug + Send + 'static> Cache<T> {
        }
    }

    #[allow(unsafe_code)]
    async fn update(&self) -> std::io::Result<()> {
        match (self.update_fn)().await {
            Ok(val) => {
                self.val.store(Box::into_raw(Box::new(val)), AtomicOrdering::SeqCst);
                let old = self.val.swap(Box::into_raw(Box::new(val)), AtomicOrdering::SeqCst);
                if !old.is_null() {
                    unsafe {
                        drop(Box::from_raw(old));
                    }
                }
                let now = SystemTime::now()
                    .duration_since(UNIX_EPOCH)
                    .expect("Time went backwards")
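The `Cache::update` hunk above is a leak fix: `AtomicPtr::store` silently discarded the previous heap allocation, while `swap` hands the old pointer back so it can be freed. The pattern in isolation (illustrative; real concurrent code must also guarantee no reader still holds the old pointer, which the surrounding `Cache` type is responsible for):

```rust
use std::sync::atomic::{AtomicPtr, Ordering};

/// Illustrative only: replace an atomically published boxed value and
/// reclaim the previous allocation instead of leaking it.
fn publish<T>(slot: &AtomicPtr<T>, new_val: T) {
    let old = slot.swap(Box::into_raw(Box::new(new_val)), Ordering::SeqCst);
    if !old.is_null() {
        // Safety: `old` came from Box::into_raw and is no longer reachable
        // through `slot`; callers must ensure no other thread still uses it.
        unsafe { drop(Box::from_raw(old)) };
    }
}
```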
@@ -47,5 +47,7 @@ tracing.workspace = true
rustfs-madmin.workspace = true
rustfs-utils = { workspace = true, features = ["path"] }
tokio-util.workspace = true
pollster.workspace = true

[dev-dependencies]
pollster.workspace = true

@@ -33,7 +33,7 @@ static IAM_SYS: OnceLock<Arc<IamSys<ObjectStore>>> = OnceLock::new();
#[instrument(skip(ecstore))]
pub async fn init_iam_sys(ecstore: Arc<ECStore>) -> Result<()> {
    debug!("init iam system");
    let s = IamCache::new(ObjectStore::new(ecstore)).await;
    let s = IamCache::new(ObjectStore::new(ecstore).await).await;

    IAM_SYS.get_or_init(move || IamSys::new(s).into());
    Ok(())

@@ -23,6 +23,7 @@ use crate::{
        UpdateServiceAccountOpts,
    },
};
use futures::future::join_all;
use rustfs_ecstore::global::get_global_action_cred;
use rustfs_madmin::{AccountStatus, AddOrUpdateUserReq, GroupDesc};
use rustfs_policy::{
@@ -402,13 +403,25 @@ where

        self.cache.policy_docs.store(Arc::new(cache));

        let ret = m
        let items: Vec<_> = m.into_iter().map(|(k, v)| (k, v.policy.clone())).collect();

        let futures: Vec<_> = items.iter().map(|(_, policy)| policy.match_resource(bucket_name)).collect();

        let results = join_all(futures).await;

        let filtered = items
            .into_iter()
            .filter(|(_, v)| bucket_name.is_empty() || v.policy.match_resource(bucket_name))
            .map(|(k, v)| (k, v.policy))
            .zip(results)
            .filter_map(|((k, policy), matches)| {
                if bucket_name.is_empty() || matches {
                    Some((k, policy))
                } else {
                    None
                }
            })
            .collect();

        Ok(ret)
        Ok(filtered)
    }

    pub async fn merge_policies(&self, name: &str) -> (String, Policy) {
@@ -456,22 +469,51 @@ where

        self.cache.policy_docs.store(Arc::new(cache));

        let ret = m
            .into_iter()
            .filter(|(_, v)| bucket_name.is_empty() || v.policy.match_resource(bucket_name))
        let items: Vec<_> = m.into_iter().map(|(k, v)| (k, v.clone())).collect();

        let futures: Vec<_> = items
            .iter()
            .map(|(_, policy_doc)| policy_doc.policy.match_resource(bucket_name))
            .collect();

        Ok(ret)
        let results = join_all(futures).await;

        let filtered = items
            .into_iter()
            .zip(results)
            .filter_map(|((k, policy_doc), matches)| {
                if bucket_name.is_empty() || matches {
                    Some((k, policy_doc))
                } else {
                    None
                }
            })
            .collect();

        Ok(filtered)
    }

    pub async fn list_policy_docs_internal(&self, bucket_name: &str) -> Result<HashMap<String, PolicyDoc>> {
        let ret = self
            .cache
            .policy_docs
            .load()
        let cache = self.cache.policy_docs.load();
        let items: Vec<_> = cache.iter().map(|(k, v)| (k.clone(), v.clone())).collect();

        let futures: Vec<_> = items
            .iter()
            .filter(|(_, v)| bucket_name.is_empty() || v.policy.match_resource(bucket_name))
            .map(|(k, v)| (k.clone(), v.clone()))
            .map(|(_, policy_doc)| policy_doc.policy.match_resource(bucket_name))
            .collect();

        let results = join_all(futures).await;

        let ret = items
            .into_iter()
            .zip(results)
            .filter_map(|((k, policy_doc), matches)| {
                if bucket_name.is_empty() || matches {
                    Some((k, policy_doc))
                } else {
                    None
                }
            })
            .collect();

        Ok(ret)
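These three hunks are the same mechanical rewrite: `Policy::match_resource` became async, so each synchronous `.filter(...)` chain is replaced by collect-the-futures, `join_all`, then zip the boolean results back onto the items. The shape of that rewrite as a standalone helper (a sketch; the crate applies it inline rather than through such a helper):

```rust
use futures::future::join_all;

/// Illustrative only: filter items by an async predicate by awaiting all
/// predicate futures together and zipping the results back onto the items.
async fn filter_async<T, F, Fut>(items: Vec<T>, pred: F) -> Vec<T>
where
    F: Fn(&T) -> Fut,
    Fut: std::future::Future<Output = bool>,
{
    let checks = join_all(items.iter().map(&pred)).await;
    items
        .into_iter()
        .zip(checks)
        .filter_map(|(item, keep)| keep.then_some(item))
        .collect()
}
```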
@@ -1753,7 +1795,7 @@ fn filter_policies(cache: &Cache, policy_name: &str, bucket_name: &str) -> (Stri
    }

    if let Some(p) = cache.policy_docs.load().get(&policy) {
        if bucket_name.is_empty() || p.policy.match_resource(bucket_name) {
        if bucket_name.is_empty() || pollster::block_on(p.policy.match_resource(bucket_name)) {
            policies.push(policy);
            to_merge.push(p.policy.clone());
        }
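`filter_policies` stays a synchronous function, so the now-async `match_resource` is driven to completion with `pollster::block_on` (the dependency added in the Cargo.toml hunk above). Minimal usage; note that blocking a runtime worker thread this way can deadlock if the future itself needs the runtime:

```rust
async fn answer() -> u32 {
    42
}

fn main() {
    // pollster::block_on runs a single future to completion on the current
    // thread; suited to sync call sites outside an async runtime.
    let v = pollster::block_on(answer());
    assert_eq!(v, 42);
}
```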
@@ -120,18 +120,52 @@ fn split_path(s: &str, last_index: bool) -> (&str, &str) {
|
||||
#[derive(Clone)]
|
||||
pub struct ObjectStore {
|
||||
object_api: Arc<ECStore>,
|
||||
prev_cred: Option<rustfs_policy::auth::Credentials>,
|
||||
}
|
||||
|
||||
impl ObjectStore {
|
||||
const BUCKET_NAME: &'static str = ".rustfs.sys";
|
||||
const PREV_CRED_FILE: &'static str = "config/iam/prev_cred.json";
|
||||
|
||||
pub fn new(object_api: Arc<ECStore>) -> Self {
|
||||
Self { object_api }
|
||||
/// Load previous credentials from persistent storage in .rustfs.sys bucket
|
||||
async fn load_prev_cred(object_api: Arc<ECStore>) -> Option<rustfs_policy::auth::Credentials> {
|
||||
match read_config(object_api, Self::PREV_CRED_FILE).await {
|
||||
Ok(data) => serde_json::from_slice::<rustfs_policy::auth::Credentials>(&data).ok(),
|
||||
Err(_) => None,
|
||||
}
|
||||
}
|
||||
|
||||
fn decrypt_data(data: &[u8]) -> Result<Vec<u8>> {
|
||||
let de = rustfs_crypto::decrypt_data(get_global_action_cred().unwrap_or_default().secret_key.as_bytes(), data)?;
|
||||
Ok(de)
|
||||
/// Save previous credentials to persistent storage in .rustfs.sys bucket
|
||||
async fn save_prev_cred(object_api: Arc<ECStore>, cred: &Option<rustfs_policy::auth::Credentials>) -> Result<()> {
|
||||
match cred {
|
||||
Some(c) => {
|
||||
let data = serde_json::to_vec(c).map_err(|e| Error::other(format!("Failed to serialize cred: {}", e)))?;
|
||||
save_config(object_api, Self::PREV_CRED_FILE, data)
|
||||
.await
|
||||
.map_err(|e| Error::other(format!("Failed to write cred to storage: {}", e)))
|
||||
}
|
||||
None => {
|
||||
// If no credentials, remove the config
|
||||
match delete_config(object_api, Self::PREV_CRED_FILE).await {
|
||||
Ok(_) => Ok(()),
|
||||
Err(e) => {
|
||||
// Ignore ConfigNotFound error when trying to delete non-existent config
|
||||
if matches!(e, rustfs_ecstore::error::StorageError::ConfigNotFound) {
|
||||
Ok(())
|
||||
} else {
|
||||
Err(Error::other(format!("Failed to delete cred from storage: {}", e)))
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
pub async fn new(object_api: Arc<ECStore>) -> Self {
|
||||
// Load previous credentials from persistent storage in .rustfs.sys bucket
|
||||
let prev_cred = Self::load_prev_cred(object_api.clone()).await.or_else(get_global_action_cred);
|
||||
|
||||
Self { object_api, prev_cred }
|
||||
}
|
||||
|
||||
fn encrypt_data(data: &[u8]) -> Result<Vec<u8>> {
|
||||
@@ -139,10 +173,65 @@ impl ObjectStore {
|
||||
Ok(en)
|
||||
}
|
||||
|
||||
/// Decrypt data with credential fallback mechanism
|
||||
/// First tries current credentials, then falls back to previous credentials if available
|
||||
async fn decrypt_fallback(&self, data: &[u8], path: &str) -> Result<Vec<u8>> {
|
||||
let current_cred = get_global_action_cred().unwrap_or_default();
|
||||
|
||||
// Try current credentials first
|
||||
match rustfs_crypto::decrypt_data(current_cred.secret_key.as_bytes(), data) {
|
||||
Ok(decrypted) => {
|
||||
// Update persistent storage with current credentials for consistency
|
||||
let _ = Self::save_prev_cred(self.object_api.clone(), &Some(current_cred)).await;
|
||||
Ok(decrypted)
|
||||
}
|
||||
Err(_) => {
|
||||
// Current credentials failed, try previous credentials
|
||||
if let Some(ref prev_cred) = self.prev_cred {
|
||||
match rustfs_crypto::decrypt_data(prev_cred.secret_key.as_bytes(), data) {
|
||||
Ok(prev_decrypted) => {
|
||||
warn!("Decryption succeeded with previous credentials, path: {}", path);
|
||||
|
||||
// Re-encrypt with current credentials
|
||||
match rustfs_crypto::encrypt_data(current_cred.secret_key.as_bytes(), &prev_decrypted) {
|
||||
Ok(re_encrypted) => {
|
||||
let _ = save_config(self.object_api.clone(), path, re_encrypted).await;
|
||||
}
|
||||
Err(e) => {
|
||||
warn!("Failed to re-encrypt with current credentials: {}, path: {}", e, path);
|
||||
}
|
||||
}
|
||||
|
||||
// Update persistent storage with current credentials
|
||||
let _ = Self::save_prev_cred(self.object_api.clone(), &Some(current_cred)).await;
|
||||
Ok(prev_decrypted)
|
||||
}
|
||||
Err(_) => {
|
||||
// Both attempts failed
|
||||
warn!("Decryption failed with both current and previous credentials, deleting config: {}", path);
|
||||
let _ = self.delete_iam_config(path).await;
|
||||
Err(Error::ConfigNotFound)
|
||||
}
|
||||
}
|
||||
} else {
|
||||
// No previous credentials available
|
||||
warn!(
|
||||
"Decryption failed with current credentials and no previous credentials available, deleting config: {}",
|
||||
path
|
||||
);
|
||||
let _ = self.delete_iam_config(path).await;
|
||||
Err(Error::ConfigNotFound)
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
async fn load_iamconfig_bytes_with_metadata(&self, path: impl AsRef<str> + Send) -> Result<(Vec<u8>, ObjectInfo)> {
|
||||
let (data, obj) = read_config_with_metadata(self.object_api.clone(), path.as_ref(), &ObjectOptions::default()).await?;
|
||||
|
||||
Ok((Self::decrypt_data(&data)?, obj))
|
||||
let decrypted_data = self.decrypt_fallback(&data, path.as_ref()).await?;
|
||||
|
||||
Ok((decrypted_data, obj))
|
||||
}
|
||||
|
||||
async fn list_iam_config_items(&self, prefix: &str, ctx: CancellationToken, sender: Sender<StringOrErr>) {
|
||||
@@ -386,15 +475,7 @@ impl Store for ObjectStore {
|
||||
async fn load_iam_config<Item: DeserializeOwned>(&self, path: impl AsRef<str> + Send) -> Result<Item> {
|
||||
let mut data = read_config(self.object_api.clone(), path.as_ref()).await?;
|
||||
|
||||
data = match Self::decrypt_data(&data) {
|
||||
Ok(v) => v,
|
||||
Err(err) => {
|
||||
warn!("delete the config file when decrypt failed failed: {}, path: {}", err, path.as_ref());
|
||||
// delete the config file when decrypt failed
|
||||
let _ = self.delete_iam_config(path.as_ref()).await;
|
||||
return Err(Error::ConfigNotFound);
|
||||
}
|
||||
};
|
||||
data = self.decrypt_fallback(&data, path.as_ref()).await?;
|
||||
|
||||
Ok(serde_json::from_slice(&data)?)
|
||||
}
|
||||
|
||||
@@ -755,10 +755,10 @@ impl<T: Store> IamSys<T> {
|
||||
|
||||
let (has_session_policy, is_allowed_sp) = is_allowed_by_session_policy(args);
|
||||
if has_session_policy {
|
||||
return is_allowed_sp && (is_owner || combined_policy.is_allowed(args));
|
||||
return is_allowed_sp && (is_owner || combined_policy.is_allowed(args).await);
|
||||
}
|
||||
|
||||
is_owner || combined_policy.is_allowed(args)
|
||||
is_owner || combined_policy.is_allowed(args).await
|
||||
}
|
||||
|
||||
pub async fn is_allowed_service_account(&self, args: &Args<'_>, parent_user: &str) -> bool {
|
||||
@@ -814,15 +814,15 @@ impl<T: Store> IamSys<T> {
|
||||
};
|
||||
|
||||
if sa_str == INHERITED_POLICY_TYPE {
|
||||
return is_owner || combined_policy.is_allowed(&parent_args);
|
||||
return is_owner || combined_policy.is_allowed(&parent_args).await;
|
||||
}
|
||||
|
||||
let (has_session_policy, is_allowed_sp) = is_allowed_by_session_policy_for_service_account(args);
|
||||
if has_session_policy {
|
||||
return is_allowed_sp && (is_owner || combined_policy.is_allowed(&parent_args));
|
||||
return is_allowed_sp && (is_owner || combined_policy.is_allowed(&parent_args).await);
|
||||
}
|
||||
|
||||
is_owner || combined_policy.is_allowed(&parent_args)
|
||||
is_owner || combined_policy.is_allowed(&parent_args).await
|
||||
}
|
||||
|
||||
pub async fn get_combined_policy(&self, policies: &[String]) -> Policy {
|
||||
@@ -857,7 +857,7 @@ impl<T: Store> IamSys<T> {
|
||||
return false;
|
||||
}
|
||||
|
||||
self.get_combined_policy(&policies).await.is_allowed(args)
|
||||
self.get_combined_policy(&policies).await.is_allowed(args).await
|
||||
}
|
||||
}
|
||||
|
||||
@@ -883,7 +883,7 @@ fn is_allowed_by_session_policy(args: &Args<'_>) -> (bool, bool) {
|
||||
let mut session_policy_args = args.clone();
|
||||
session_policy_args.is_owner = false;
|
||||
|
||||
(has_session_policy, sub_policy.is_allowed(&session_policy_args))
|
||||
(has_session_policy, pollster::block_on(sub_policy.is_allowed(&session_policy_args)))
|
||||
}
|
||||
|
||||
fn is_allowed_by_session_policy_for_service_account(args: &Args<'_>) -> (bool, bool) {
|
||||
@@ -909,7 +909,7 @@ fn is_allowed_by_session_policy_for_service_account(args: &Args<'_>) -> (bool, b
|
||||
let mut session_policy_args = args.clone();
|
||||
session_policy_args.is_owner = false;
|
||||
|
||||
(has_session_policy, sub_policy.is_allowed(&session_policy_args))
|
||||
(has_session_policy, pollster::block_on(sub_policy.is_allowed(&session_policy_args)))
|
||||
}
|
||||
|
||||
#[derive(Debug, Clone, Default)]
|
||||
|
||||
@@ -61,7 +61,6 @@ reqwest = { workspace = true }
|
||||
vaultrs = { workspace = true }
|
||||
|
||||
[dev-dependencies]
|
||||
tokio-test = { workspace = true }
|
||||
tempfile = { workspace = true }
|
||||
|
||||
[features]
|
||||
|
||||
@@ -28,8 +28,8 @@ documentation = "https://docs.rs/rustfs-notify/latest/rustfs_notify/"
|
||||
[dependencies]
|
||||
rustfs-config = { workspace = true, features = ["notify", "constants"] }
|
||||
rustfs-ecstore = { workspace = true }
|
||||
rustfs-utils = { workspace = true, features = ["path", "sys"] }
|
||||
rustfs-targets = { workspace = true }
|
||||
rustfs-utils = { workspace = true }
|
||||
async-trait = { workspace = true }
|
||||
chrono = { workspace = true, features = ["serde"] }
|
||||
futures = { workspace = true }
|
||||
@@ -40,7 +40,6 @@ rayon = { workspace = true }
|
||||
rumqttc = { workspace = true }
|
||||
rustc-hash = { workspace = true }
|
||||
serde = { workspace = true }
|
||||
serde_json = { workspace = true }
|
||||
starshard = { workspace = true }
|
||||
thiserror = { workspace = true }
|
||||
tokio = { workspace = true, features = ["rt-multi-thread", "sync", "time"] }
|
||||
@@ -52,6 +51,8 @@ wildmatch = { workspace = true, features = ["serde"] }
|
||||
tokio = { workspace = true, features = ["test-util"] }
|
||||
tracing-subscriber = { workspace = true, features = ["env-filter"] }
|
||||
axum = { workspace = true }
|
||||
rustfs-utils = { workspace = true, features = ["path", "sys"] }
|
||||
serde_json = { workspace = true }
|
||||
|
||||
[lints]
|
||||
workspace = true
|
||||
|
||||
@@ -110,20 +110,21 @@ async fn reset_webhook_count(Query(params): Query<ResetParams>, headers: HeaderM
|
||||
|
||||
let reason = params.reason.unwrap_or_else(|| "Reason not provided".to_string());
|
||||
println!("Reset webhook count, reason: {reason}");
|
||||
|
||||
let time_now = chrono::offset::Utc::now().to_string();
|
||||
for header in headers {
|
||||
let (key, value) = header;
|
||||
println!("Header: {key:?}: {value:?}");
|
||||
println!("Header: {key:?}: {value:?}, time: {time_now}");
|
||||
}
|
||||
|
||||
println!("Reset webhook count printed headers");
|
||||
// Reset the counter to 0
|
||||
WEBHOOK_COUNT.store(0, Ordering::SeqCst);
|
||||
println!("Webhook count has been reset to 0.");
|
||||
let time_now = chrono::offset::Utc::now().to_string();
|
||||
Response::builder()
|
||||
.header("Foo", "Bar")
|
||||
.status(StatusCode::OK)
|
||||
.body(format!("Webhook count reset successfully current_count:{current_count}"))
|
||||
.body(format!("Webhook count reset successfully current_count:{current_count},time: {time_now}"))
|
||||
.unwrap()
|
||||
}
|
||||
|
||||
@@ -167,7 +168,11 @@ async fn receive_webhook(Json(payload): Json<Value>) -> StatusCode {
|
||||
serde_json::to_string_pretty(&payload).unwrap()
|
||||
);
|
||||
WEBHOOK_COUNT.fetch_add(1, Ordering::SeqCst);
|
||||
println!("Total webhook requests received: {}", WEBHOOK_COUNT.load(Ordering::SeqCst));
|
||||
println!(
|
||||
"Total webhook requests received: {} , Time: {}",
|
||||
WEBHOOK_COUNT.load(Ordering::SeqCst),
|
||||
chrono::offset::Utc::now()
|
||||
);
|
||||
StatusCode::OK
|
||||
}
|
||||
|
||||
|
||||
@@ -20,6 +20,7 @@ use url::form_urlencoded;

/// Represents the identity of the user who triggered the event
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Identity {
    /// The principal ID of the user
    pub principal_id: String,
@@ -27,6 +28,7 @@ pub struct Identity {

/// Represents the bucket that the object is in
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Bucket {
    /// The name of the bucket
    pub name: String,
@@ -38,6 +40,7 @@ pub struct Bucket {

/// Represents the object that the event occurred on
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
#[serde(rename_all = "camelCase")]
pub struct Object {
    /// The key (name) of the object
    pub key: String,
@@ -62,6 +65,7 @@ pub struct Object {

/// Metadata about the event
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Metadata {
    /// The schema version of the event
    #[serde(rename = "s3SchemaVersion")]
@@ -76,13 +80,13 @@ pub struct Metadata {

/// Information about the source of the event
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
pub struct Source {
    /// The host where the event originated
    pub host: String,
    /// The port on the host
    pub port: String,
    /// The user agent that caused the event
    #[serde(rename = "userAgent")]
    pub user_agent: String,
}
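These hunks switch every event struct to `#[serde(rename_all = "camelCase")]`, which makes field-level renames such as `userAgent` redundant. A small standalone sketch of the effect on the wire format, using an illustrative struct with the same shape as `Source`:

```rust
use serde::Serialize;

// Standalone copy of the Source shape from the diff, for illustration only.
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct Source {
    host: String,
    port: String,
    user_agent: String, // serialized as "userAgent" without a field-level rename
}

fn main() {
    let s = Source {
        host: "rustfs-single".into(),
        port: "9000".into(),
        user_agent: "aws-cli/2.0".into(),
    };
    // Prints: {"host":"rustfs-single","port":"9000","userAgent":"aws-cli/2.0"}
    println!("{}", serde_json::to_string(&s).unwrap());
}
```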
@@ -18,9 +18,9 @@ use hashbrown::HashSet;
use rumqttc::QoS;
use rustfs_config::notify::{ENV_NOTIFY_MQTT_KEYS, ENV_NOTIFY_WEBHOOK_KEYS, NOTIFY_MQTT_KEYS, NOTIFY_WEBHOOK_KEYS};
use rustfs_config::{
    DEFAULT_DIR, DEFAULT_LIMIT, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR, MQTT_QUEUE_LIMIT,
    MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY,
    WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
    DEFAULT_LIMIT, EVENT_DEFAULT_DIR, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR,
    MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT,
    WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
};
use rustfs_ecstore::config::KVS;
use rustfs_targets::{
@@ -67,7 +67,7 @@ impl TargetFactory for WebhookTargetFactory {
    enable: true, // If we are here, it's already enabled.
    endpoint: endpoint_url,
    auth_token: config.lookup(WEBHOOK_AUTH_TOKEN).unwrap_or_default(),
    queue_dir: config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string()),
    queue_dir: config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(EVENT_DEFAULT_DIR.to_string()),
    queue_limit: config
        .lookup(WEBHOOK_QUEUE_LIMIT)
        .and_then(|v| v.parse::<u64>().ok())
@@ -100,7 +100,7 @@ impl TargetFactory for WebhookTargetFactory {
    ));
}

let queue_dir = config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string());
let queue_dir = config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(EVENT_DEFAULT_DIR.to_string());
if !queue_dir.is_empty() && !std::path::Path::new(&queue_dir).is_absolute() {
    return Err(TargetError::Configuration("Webhook queue directory must be an absolute path".to_string()));
}
@@ -159,7 +159,7 @@ impl TargetFactory for MQTTTargetFactory {
    .and_then(|v| v.parse::<u64>().ok())
    .map(Duration::from_secs)
    .unwrap_or_else(|| Duration::from_secs(30)),
    queue_dir: config.lookup(MQTT_QUEUE_DIR).unwrap_or(DEFAULT_DIR.to_string()),
    queue_dir: config.lookup(MQTT_QUEUE_DIR).unwrap_or(EVENT_DEFAULT_DIR.to_string()),
    queue_limit: config
        .lookup(MQTT_QUEUE_LIMIT)
        .and_then(|v| v.parse::<u64>().ok())
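These factory hunks swap the crate-wide `DEFAULT_DIR` fallback for the event-specific `EVENT_DEFAULT_DIR`. A hedged standalone sketch of the lookup-with-fallback idiom the call sites use; the `lookup` signature returning `Option<String>` is inferred from the diff, and the stand-in types are illustrative only:

```rust
use std::collections::HashMap;

// Illustrative stand-in; the real KVS type lives in rustfs_ecstore::config.
struct Kvs(HashMap<String, String>);

impl Kvs {
    // Assumed shape of KVS::lookup, inferred from the call sites in the diff.
    fn lookup(&self, key: &str) -> Option<String> {
        self.0.get(key).cloned()
    }
}

const EVENT_DEFAULT_DIR: &str = ""; // assumption: empty string disables the on-disk queue

fn main() {
    let config = Kvs(HashMap::new());

    // Fall back to the event-specific default rather than a crate-wide DEFAULT_DIR.
    let queue_dir = config.lookup("queue_dir").unwrap_or(EVENT_DEFAULT_DIR.to_string());
    // Parse numeric settings defensively: a malformed value falls back to the default.
    let queue_limit = config
        .lookup("queue_limit")
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(10_000);

    println!("queue_dir={queue_dir:?}, queue_limit={queue_limit}");
}
```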
@@ -16,6 +16,7 @@ use crate::{
    Event, error::NotificationError, notifier::EventNotifier, registry::TargetRegistry, rules::BucketNotificationConfig, stream,
};
use hashbrown::HashMap;
use rustfs_config::notify::{DEFAULT_NOTIFY_TARGET_STREAM_CONCURRENCY, ENV_NOTIFY_TARGET_STREAM_CONCURRENCY};
use rustfs_ecstore::config::{Config, KVS};
use rustfs_targets::EventName;
use rustfs_targets::arn::TargetID;
@@ -108,17 +109,14 @@ pub struct NotificationSystem {
impl NotificationSystem {
    /// Creates a new NotificationSystem
    pub fn new(config: Config) -> Self {
        let concurrency_limiter =
            rustfs_utils::get_env_usize(ENV_NOTIFY_TARGET_STREAM_CONCURRENCY, DEFAULT_NOTIFY_TARGET_STREAM_CONCURRENCY);
        NotificationSystem {
            notifier: Arc::new(EventNotifier::new()),
            registry: Arc::new(TargetRegistry::new()),
            config: Arc::new(RwLock::new(config)),
            stream_cancellers: Arc::new(RwLock::new(HashMap::new())),
            concurrency_limiter: Arc::new(Semaphore::new(
                std::env::var("RUSTFS_TARGET_STREAM_CONCURRENCY")
                    .ok()
                    .and_then(|s| s.parse().ok())
                    .unwrap_or(20),
            )), // Limit the maximum number of concurrent processing events to 20
            concurrency_limiter: Arc::new(Semaphore::new(concurrency_limiter)), // Limit the maximum number of concurrent processing events to 20
            metrics: Arc::new(NotificationMetrics::new()),
        }
    }
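This change moves the ad-hoc `std::env::var` parsing into a shared `rustfs_utils::get_env_usize` helper. A minimal sketch of the pattern; the helper body and the environment-variable string behind `ENV_NOTIFY_TARGET_STREAM_CONCURRENCY` are assumptions here:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Assumed behavior of rustfs_utils::get_env_usize: env override with a default fallback.
fn get_env_usize(key: &str, default: usize) -> usize {
    std::env::var(key).ok().and_then(|s| s.parse().ok()).unwrap_or(default)
}

#[tokio::main]
async fn main() {
    // Cap concurrent event processing; 20 mirrors the previous hard-coded default.
    let limit = get_env_usize("RUSTFS_NOTIFY_TARGET_STREAM_CONCURRENCY", 20);
    let limiter = Arc::new(Semaphore::new(limit));

    // Each event handler acquires a permit before doing work.
    let permit = limiter.clone().acquire_owned().await.unwrap();
    tokio::spawn(async move {
        // ... process one event ...
        drop(permit); // released when processing finishes
    });
}
```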
@@ -214,6 +212,11 @@ impl NotificationSystem {
            return Ok(());
        }

        // Save the modified configuration to storage
        rustfs_ecstore::config::com::save_server_config(store, &new_config)
            .await
            .map_err(|e| NotificationError::SaveConfig(e.to_string()))?;

        info!("Configuration updated. Reloading system...");
        self.reload_config(new_config).await
    }
@@ -269,9 +272,9 @@ impl NotificationSystem {
        self.update_config_and_reload(|config| {
            config
                .0
                .entry(target_type.to_string())
                .entry(target_type.to_lowercase())
                .or_default()
                .insert(target_name.to_string(), kvs.clone());
                .insert(target_name.to_lowercase(), kvs.clone());
            true // The configuration is always modified
        })
        .await
@@ -296,23 +299,35 @@ impl NotificationSystem {
    /// If the target configuration does not exist, it returns Ok(()) without making any changes.
    pub async fn remove_target_config(&self, target_type: &str, target_name: &str) -> Result<(), NotificationError> {
        info!("Removing config for target {} of type {}", target_name, target_type);
        self.update_config_and_reload(|config| {
            let mut changed = false;
            if let Some(targets) = config.0.get_mut(&target_type.to_lowercase()) {
                if targets.remove(&target_name.to_lowercase()).is_some() {
                    changed = true;
        let config_result = self
            .update_config_and_reload(|config| {
                let mut changed = false;
                if let Some(targets) = config.0.get_mut(&target_type.to_lowercase()) {
                    if targets.remove(&target_name.to_lowercase()).is_some() {
                        changed = true;
                    }
                    if targets.is_empty() {
                        config.0.remove(target_type);
                    }
                }
                if targets.is_empty() {
                    config.0.remove(target_type);
                if !changed {
                    info!("Target {} of type {} not found, no changes made.", target_name, target_type);
                }
            }
            if !changed {
                info!("Target {} of type {} not found, no changes made.", target_name, target_type);
            }
            debug!("Config after remove: {:?}", config);
            changed
        })
        .await
                debug!("Config after remove: {:?}", config);
                changed
            })
            .await;

        if config_result.is_ok() {
            let target_id = TargetID::new(target_name.to_string(), target_type.to_string());

            // Remove from target list
            let target_list = self.notifier.target_list();
            let mut target_list_guard = target_list.write().await;
            let _ = target_list_guard.remove_target_only(&target_id).await;
        }

        config_result
    }

    /// Enhanced event stream startup function, including monitoring and concurrency control
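The reworked removal path lowercases keys, drops an empty target-type entry, and is a no-op when nothing matches. A standalone model of that map manipulation (the real code operates on the ecstore `Config`/`KVS` types, which are replaced with plain `HashMap`s here):

```rust
use std::collections::HashMap;

// Standalone model of the nested config map in the diff:
// target_type -> target_name -> key/value settings.
type Config = HashMap<String, HashMap<String, Vec<(String, String)>>>;

// Mirrors the closure in remove_target_config: remove one target, then drop the
// whole type entry if it became empty, and report whether anything changed.
fn remove_target(config: &mut Config, target_type: &str, target_name: &str) -> bool {
    let mut changed = false;
    if let Some(targets) = config.get_mut(&target_type.to_lowercase()) {
        if targets.remove(&target_name.to_lowercase()).is_some() {
            changed = true;
        }
        if targets.is_empty() {
            config.remove(&target_type.to_lowercase());
        }
    }
    changed
}

fn main() {
    let mut config: Config = HashMap::new();
    config
        .entry("webhook".to_string())
        .or_default()
        .insert("primary".to_string(), vec![]);

    assert!(remove_target(&mut config, "Webhook", "PRIMARY")); // lowercase-normalized
    assert!(!remove_target(&mut config, "webhook", "primary")); // idempotent: no change
    assert!(config.is_empty());
}
```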
@@ -195,6 +195,10 @@ impl EventNotifier {
    ) -> Result<(), NotificationError> {
        // Currently active, simpler logic
        let mut target_list_guard = self.target_list.write().await; // Gets a write lock for the TargetList

        // Clear existing targets first - rebuild from scratch to ensure consistency with new configuration
        target_list_guard.clear();

        for target_boxed in targets_to_init {
            // Traverse the incoming Box<dyn Target>
            debug!("init bucket target: {}", target_boxed.name());
@@ -240,6 +244,11 @@ impl TargetList {
        Ok(())
    }

    /// Clears all targets from the list
    pub fn clear(&mut self) {
        self.targets.clear();
    }

    /// Removes a target by ID. Note: This does not stop its associated event stream.
    /// Stream cancellation should be handled by EventNotifier.
    pub async fn remove_target_only(&mut self, id: &TargetID) -> Option<Arc<dyn Target<Event> + Send + Sync>> {
@@ -16,9 +16,11 @@ use crate::Event;
use crate::factory::{MQTTTargetFactory, TargetFactory, WebhookTargetFactory};
use futures::stream::{FuturesUnordered, StreamExt};
use hashbrown::{HashMap, HashSet};
use rustfs_config::{DEFAULT_DELIMITER, ENABLE_KEY, ENV_PREFIX, notify::NOTIFY_ROUTE_PREFIX};
use rustfs_config::{DEFAULT_DELIMITER, ENABLE_KEY, ENV_PREFIX, EnableState, notify::NOTIFY_ROUTE_PREFIX};
use rustfs_ecstore::config::{Config, KVS};
use rustfs_targets::{Target, TargetError, target::ChannelTargetType};
use std::str::FromStr;
use std::sync::Arc;
use tracing::{debug, error, info, warn};

/// Registry for managing target factories
@@ -117,11 +119,7 @@ impl TargetRegistry {
    format!("{ENV_PREFIX}{NOTIFY_ROUTE_PREFIX}{target_type}{DEFAULT_DELIMITER}{ENABLE_KEY}{DEFAULT_DELIMITER}")
        .to_uppercase();
    for (key, value) in &all_env {
        if value.eq_ignore_ascii_case(rustfs_config::EnableState::One.as_str())
            || value.eq_ignore_ascii_case(rustfs_config::EnableState::On.as_str())
            || value.eq_ignore_ascii_case(rustfs_config::EnableState::True.as_str())
            || value.eq_ignore_ascii_case(rustfs_config::EnableState::Yes.as_str())
        {
        if EnableState::from_str(value).ok().map(|s| s.is_enabled()).unwrap_or(false) {
            if let Some(id) = key.strip_prefix(&enable_prefix) {
                if !id.is_empty() {
                    instance_ids_from_env.insert(id.to_lowercase());
@@ -208,10 +206,10 @@ impl TargetRegistry {
    let enabled = merged_config
        .lookup(ENABLE_KEY)
        .map(|v| {
            v.eq_ignore_ascii_case(rustfs_config::EnableState::One.as_str())
                || v.eq_ignore_ascii_case(rustfs_config::EnableState::On.as_str())
                || v.eq_ignore_ascii_case(rustfs_config::EnableState::True.as_str())
                || v.eq_ignore_ascii_case(rustfs_config::EnableState::Yes.as_str())
            EnableState::from_str(v.as_str())
                .ok()
                .map(|s| s.is_enabled())
                .unwrap_or(false)
        })
        .unwrap_or(false);

@@ -220,10 +218,10 @@ impl TargetRegistry {
    // 5.3. Create asynchronous tasks for enabled instances
    let target_type_clone = target_type.clone();
    let tid = id.clone();
    let merged_config_arc = std::sync::Arc::new(merged_config);
    let merged_config_arc = Arc::new(merged_config);
    tasks.push(async move {
        let result = factory.create_target(tid.clone(), &merged_config_arc).await;
        (target_type_clone, tid, result, std::sync::Arc::clone(&merged_config_arc))
        (target_type_clone, tid, result, Arc::clone(&merged_config_arc))
    });
} else {
    info!(instance_id = %id, "Skip the disabled target and will be removed from the final configuration");
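Both hunks collapse the four-way `eq_ignore_ascii_case` chain into `EnableState::from_str(...).is_enabled()`. A standalone sketch of that parsing pattern; the accepted spellings (`1`, `on`, `true`, `yes`) come from the replaced code, while the exact `EnableState` definition in rustfs-config is an assumption:

```rust
use std::str::FromStr;

// Assumed shape of rustfs_config::EnableState; the spellings below come from
// the eq_ignore_ascii_case chain this change replaces.
#[derive(Debug)]
enum EnableState {
    On,
    Off,
}

impl EnableState {
    fn is_enabled(&self) -> bool {
        matches!(self, EnableState::On)
    }
}

impl FromStr for EnableState {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        match s.to_ascii_lowercase().as_str() {
            "1" | "on" | "true" | "yes" => Ok(EnableState::On),
            "0" | "off" | "false" | "no" => Ok(EnableState::Off),
            other => Err(format!("unrecognized enable value: {other}")),
        }
    }
}

fn main() {
    // The call-site idiom from the diff: any parse failure counts as "disabled".
    for v in ["ON", "yes", "0", "maybe"] {
        let enabled = EnableState::from_str(v).ok().map(|s| s.is_enabled()).unwrap_or(false);
        println!("{v} -> {enabled}");
    }
}
```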
@@ -45,7 +45,12 @@ regex = { workspace = true }
reqwest.workspace = true
chrono.workspace = true
tracing.workspace = true
moka.workspace = true
async-trait.workspace = true
futures.workspace = true
pollster.workspace = true

[dev-dependencies]
pollster.workspace = true
test-case.workspace = true
temp-env = { workspace = true }
@@ -20,7 +20,6 @@ use serde::{Deserialize, Serialize};
use serde_json::{Value, json};
use std::collections::HashMap;
use time::OffsetDateTime;
use time::macros::offset;
use tracing::warn;

const ACCESS_KEY_MIN_LEN: usize = 3;
@@ -231,7 +230,7 @@ pub fn create_new_credentials_with_metadata(
    let expiration = {
        if let Some(v) = claims.get("exp") {
            if let Some(expiry) = v.as_i64() {
                Some(OffsetDateTime::from_unix_timestamp(expiry)?.to_offset(offset!(+8)))
                Some(OffsetDateTime::from_unix_timestamp(expiry)?)
            } else {
                None
            }
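This hunk stops shifting the `exp` claim into a hard-coded UTC+8 offset and keeps the expiration in UTC, which is what `from_unix_timestamp` already returns. A minimal sketch with the `time` crate:

```rust
use time::OffsetDateTime;

fn main() -> Result<(), time::error::ComponentRange> {
    // A JWT "exp" claim is a Unix timestamp and is timezone-agnostic.
    let exp: i64 = 1_735_689_600;

    // from_unix_timestamp yields a UTC datetime; comparing against
    // OffsetDateTime::now_utc() is therefore correct without any offset shifting.
    let expiration = OffsetDateTime::from_unix_timestamp(exp)?;
    let expired = expiration <= OffsetDateTime::now_utc();

    println!("expires at {expiration}, expired: {expired}");
    Ok(())
}
```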
@@ -24,6 +24,7 @@ mod principal;
pub mod resource;
pub mod statement;
pub(crate) mod utils;
pub mod variables;

pub use action::ActionSet;
pub use doc::PolicyDoc;

@@ -13,6 +13,7 @@
// limitations under the License.

use crate::policy::function::condition::Condition;
use crate::policy::variables::PolicyVariableResolver;
use serde::ser::SerializeMap;
use serde::{Deserialize, Serialize, Serializer, de};
use std::collections::HashMap;
@@ -37,21 +38,29 @@ pub struct Functions {
}

impl Functions {
    pub fn evaluate(&self, values: &HashMap<String, Vec<String>>) -> bool {
    pub async fn evaluate(&self, values: &HashMap<String, Vec<String>>) -> bool {
        self.evaluate_with_resolver(values, None).await
    }

    pub async fn evaluate_with_resolver(
        &self,
        values: &HashMap<String, Vec<String>>,
        resolver: Option<&dyn PolicyVariableResolver>,
    ) -> bool {
        for c in self.for_any_value.iter() {
            if !c.evaluate(false, values) {
            if !c.evaluate_with_resolver(false, values, resolver).await {
                return false;
            }
        }

        for c in self.for_all_values.iter() {
            if !c.evaluate(true, values) {
            if !c.evaluate_with_resolver(true, values, resolver).await {
                return false;
            }
        }

        for c in self.for_normal.iter() {
            if !c.evaluate(false, values) {
            if !c.evaluate_with_resolver(false, values, resolver).await {
                return false;
            }
        }
@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.

use crate::policy::variables::PolicyVariableResolver;
use serde::Deserialize;
use serde::de::{Error, MapAccess};
use serde::ser::SerializeMap;
@@ -106,16 +107,21 @@ impl Condition {
        }
    }

    pub fn evaluate(&self, for_all: bool, values: &HashMap<String, Vec<String>>) -> bool {
    pub async fn evaluate_with_resolver(
        &self,
        for_all: bool,
        values: &HashMap<String, Vec<String>>,
        resolver: Option<&dyn PolicyVariableResolver>,
    ) -> bool {
        use Condition::*;

        let r = match self {
            StringEquals(s) => s.evaluate(for_all, false, false, false, values),
            StringNotEquals(s) => s.evaluate(for_all, false, false, true, values),
            StringEqualsIgnoreCase(s) => s.evaluate(for_all, true, false, false, values),
            StringNotEqualsIgnoreCase(s) => s.evaluate(for_all, true, false, true, values),
            StringLike(s) => s.evaluate(for_all, false, true, false, values),
            StringNotLike(s) => s.evaluate(for_all, false, true, true, values),
            StringEquals(s) => s.evaluate_with_resolver(for_all, false, false, false, values, resolver).await,
            StringNotEquals(s) => s.evaluate_with_resolver(for_all, false, false, true, values, resolver).await,
            StringEqualsIgnoreCase(s) => s.evaluate_with_resolver(for_all, true, false, false, values, resolver).await,
            StringNotEqualsIgnoreCase(s) => s.evaluate_with_resolver(for_all, true, false, true, values, resolver).await,
            StringLike(s) => s.evaluate_with_resolver(for_all, false, true, false, values, resolver).await,
            StringNotLike(s) => s.evaluate_with_resolver(for_all, false, true, true, values, resolver).await,
            BinaryEquals(s) => s.evaluate(values),
            IpAddress(s) => s.evaluate(values),
            NotIpAddress(s) => s.evaluate(values),
@@ -21,26 +21,30 @@ use std::{borrow::Cow, collections::HashMap};

use crate::policy::function::func::FuncKeyValue;
use crate::policy::utils::wildcard;
use futures::future;
use serde::{Deserialize, Deserializer, Serialize, de, ser::SerializeSeq};

use super::{func::InnerFunc, key_name::KeyName};
use crate::policy::variables::PolicyVariableResolver;

pub type StringFunc = InnerFunc<StringFuncValue>;

impl StringFunc {
    pub(crate) fn evaluate(
    #[allow(clippy::too_many_arguments)]
    pub(crate) async fn evaluate_with_resolver(
        &self,
        for_all: bool,
        ignore_case: bool,
        like: bool,
        negate: bool,
        values: &HashMap<String, Vec<String>>,
        resolver: Option<&dyn PolicyVariableResolver>,
    ) -> bool {
        for inner in self.0.iter() {
            let result = if like {
                inner.eval_like(for_all, values) ^ negate
                inner.eval_like(for_all, values, resolver).await ^ negate
            } else {
                inner.eval(for_all, ignore_case, values) ^ negate
                inner.eval(for_all, ignore_case, values, resolver).await ^ negate
            };

            if !result {
@@ -53,7 +57,13 @@ impl StringFunc {
}

impl FuncKeyValue<StringFuncValue> {
    fn eval(&self, for_all: bool, ignore_case: bool, values: &HashMap<String, Vec<String>>) -> bool {
    async fn eval(
        &self,
        for_all: bool,
        ignore_case: bool,
        values: &HashMap<String, Vec<String>>,
        resolver: Option<&dyn PolicyVariableResolver>,
    ) -> bool {
        let rvalues = values
            // http.CanonicalHeaderKey ?
            .get(self.key.name().as_str())
@@ -70,12 +80,20 @@ impl FuncKeyValue<StringFuncValue> {
            })
            .unwrap_or_default();

        let fvalues = self
            .values
            .0
            .iter()
            .map(|c| {
                let mut c = Cow::from(c);
        let resolved_values: Vec<Vec<String>> = futures::future::join_all(self.values.0.iter().map(|c| async {
            if let Some(res) = resolver {
                super::super::variables::resolve_aws_variables(c, res).await
            } else {
                vec![c.to_string()]
            }
        }))
        .await;

        let fvalues = resolved_values
            .into_iter()
            .flatten()
            .map(|resolved_c| {
                let mut c = Cow::from(resolved_c);
                for key in KeyName::COMMON_KEYS {
                    match values.get(key.name()).and_then(|x| x.first()) {
                        Some(v) if !v.is_empty() => return Cow::Owned(c.to_mut().replace(&key.var_name(), v)),
@@ -97,15 +115,32 @@ impl FuncKeyValue<StringFuncValue> {
        }
    }

    fn eval_like(&self, for_all: bool, values: &HashMap<String, Vec<String>>) -> bool {
    async fn eval_like(
        &self,
        for_all: bool,
        values: &HashMap<String, Vec<String>>,
        resolver: Option<&dyn PolicyVariableResolver>,
    ) -> bool {
        if let Some(rvalues) = values.get(self.key.name().as_str()) {
            for v in rvalues.iter() {
                let matched = self
                let resolved_futures: Vec<_> = self
                    .values
                    .0
                    .iter()
                    .map(|c| {
                        let mut c = Cow::from(c);
                    .map(|c| async {
                        if let Some(res) = resolver {
                            super::super::variables::resolve_aws_variables(c, res).await
                        } else {
                            vec![c.to_string()]
                        }
                    })
                    .collect();
                let resolved_values = future::join_all(resolved_futures).await;
                let matched = resolved_values
                    .into_iter()
                    .flatten()
                    .map(|resolved_c| {
                        let mut c = Cow::from(resolved_c);
                        for key in KeyName::COMMON_KEYS {
                            match values.get(key.name()).and_then(|x| x.first()) {
                                Some(v) if !v.is_empty() => return Cow::Owned(c.to_mut().replace(&key.var_name(), v)),
@@ -214,6 +249,7 @@ mod tests {
        key_name::AwsKeyName::*,
        key_name::KeyName::{self, *},
    };
    use std::collections::HashMap;

    use crate::policy::function::key_name::S3KeyName::S3LocationConstraint;
    use test_case::test_case;
@@ -275,16 +311,13 @@ mod tests {
        negate: bool,
        values: Vec<(&str, Vec<&str>)>,
    ) -> bool {
        let result = s.eval(
            for_all,
            ignore_case,
            &values
                .into_iter()
                .map(|(k, v)| (k.to_owned(), v.into_iter().map(ToOwned::to_owned).collect::<Vec<String>>()))
                .collect(),
        );
        let map: HashMap<String, Vec<String>> = values
            .into_iter()
            .map(|(k, v)| (k.to_owned(), v.into_iter().map(ToOwned::to_owned).collect::<Vec<String>>()))
            .collect();
        let result = s.eval(for_all, ignore_case, &map, None);

        result ^ negate
        pollster::block_on(result) ^ negate
    }

    #[test_case(new_fkv("s3:x-amz-copy-source", vec!["mybucket/myobject"]), false, vec![("x-amz-copy-source", vec!["mybucket/myobject"])] => true ; "1")]
@@ -380,15 +413,13 @@ mod tests {
    }

    fn test_eval_like(s: FuncKeyValue<StringFuncValue>, for_all: bool, negate: bool, values: Vec<(&str, Vec<&str>)>) -> bool {
        let result = s.eval_like(
            for_all,
            &values
                .into_iter()
                .map(|(k, v)| (k.to_owned(), v.into_iter().map(ToOwned::to_owned).collect::<Vec<String>>()))
                .collect(),
        );
        let map: HashMap<String, Vec<String>> = values
            .into_iter()
            .map(|(k, v)| (k.to_owned(), v.into_iter().map(ToOwned::to_owned).collect::<Vec<String>>()))
            .collect();
        let result = s.eval_like(for_all, &map, None);

        result ^ negate
        pollster::block_on(result) ^ negate
    }

    #[test_case(new_fkv("s3:x-amz-copy-source", vec!["mybucket/myobject"]), false, vec![("x-amz-copy-source", vec!["mybucket/myobject"])] => true ; "1")]
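The test helpers above keep their synchronous signatures by driving the now-async evaluators with `pollster::block_on`, which runs a single future to completion on the current thread without a full tokio runtime. A standalone sketch of the idiom:

```rust
// pollster is a tiny executor: it parks the current thread until the future
// resolves, which is exactly what a synchronous #[test_case] helper needs.
async fn answer() -> u32 {
    42
}

fn main() {
    let value = pollster::block_on(answer());
    assert_eq!(value, 42);
}
```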
@@ -62,9 +62,9 @@ pub struct Policy {
}

impl Policy {
    pub fn is_allowed(&self, args: &Args) -> bool {
    pub async fn is_allowed(&self, args: &Args<'_>) -> bool {
        for statement in self.statements.iter().filter(|s| matches!(s.effect, Effect::Deny)) {
            if !statement.is_allowed(args) {
            if !statement.is_allowed(args).await {
                return false;
            }
        }
@@ -74,7 +74,7 @@ impl Policy {
        }

        for statement in self.statements.iter().filter(|s| matches!(s.effect, Effect::Allow)) {
            if statement.is_allowed(args) {
            if statement.is_allowed(args).await {
                return true;
            }
        }
@@ -82,9 +82,9 @@ impl Policy {
        false
    }

    pub fn match_resource(&self, resource: &str) -> bool {
    pub async fn match_resource(&self, resource: &str) -> bool {
        for statement in self.statements.iter() {
            if statement.resources.match_resource(resource) {
            if statement.resources.match_resource(resource).await {
                return true;
            }
        }
@@ -188,9 +188,9 @@ pub struct BucketPolicy {
}

impl BucketPolicy {
    pub fn is_allowed(&self, args: &BucketPolicyArgs) -> bool {
    pub async fn is_allowed(&self, args: &BucketPolicyArgs<'_>) -> bool {
        for statement in self.statements.iter().filter(|s| matches!(s.effect, Effect::Deny)) {
            if !statement.is_allowed(args) {
            if !statement.is_allowed(args).await {
                return false;
            }
        }
@@ -200,7 +200,7 @@ impl BucketPolicy {
        }

        for statement in self.statements.iter().filter(|s| matches!(s.effect, Effect::Allow)) {
            if statement.is_allowed(args) {
            if statement.is_allowed(args).await {
                return true;
            }
        }
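`Policy::is_allowed` keeps its explicit-deny-first evaluation order; only the per-statement check is now awaited. A standalone model of that deny-overrides-allow loop (a simplified sketch, not the crate's real types; pollster is used as the executor as elsewhere in the diff):

```rust
#[derive(Clone, Copy, PartialEq)]
enum Effect {
    Allow,
    Deny,
}

struct Statement {
    effect: Effect,
    matches: bool, // stand-in for the real async statement check
}

impl Statement {
    // Models Statement::is_allowed: a Deny statement "allows" only if it does NOT match.
    async fn is_allowed(&self) -> bool {
        match self.effect {
            Effect::Deny => !self.matches,
            Effect::Allow => self.matches,
        }
    }
}

// Deny statements are evaluated first; any matching Deny wins over any Allow.
async fn is_allowed(statements: &[Statement]) -> bool {
    for s in statements.iter().filter(|s| s.effect == Effect::Deny) {
        if !s.is_allowed().await {
            return false;
        }
    }
    for s in statements.iter().filter(|s| s.effect == Effect::Allow) {
        if s.is_allowed().await {
            return true;
        }
    }
    false
}

fn main() {
    let statements = vec![
        Statement { effect: Effect::Allow, matches: true },
        Statement { effect: Effect::Deny, matches: true },
    ];
    // The matching Deny overrides the matching Allow.
    assert!(!pollster::block_on(is_allowed(&statements)));
}
```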
@@ -525,4 +525,281 @@ mod test {
        // assert_eq!(p, p2);
        Ok(())
    }

    #[tokio::test]
    async fn test_aws_username_policy_variable() -> Result<()> {
        let data = r#"
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["s3:ListBucket"],
                    "Resource": ["arn:aws:s3:::${aws:username}-*"]
                }
            ]
        }
        "#;

        let policy = Policy::parse_config(data.as_bytes())?;

        let conditions = HashMap::new();

        // Test allowed case - user testuser accessing testuser-bucket
        let mut claims1 = HashMap::new();
        claims1.insert("username".to_string(), Value::String("testuser".to_string()));

        let args1 = Args {
            account: "testuser",
            groups: &None,
            action: Action::S3Action(crate::policy::action::S3Action::ListBucketAction),
            bucket: "testuser-bucket",
            conditions: &conditions,
            is_owner: false,
            object: "",
            claims: &claims1,
            deny_only: false,
        };

        // Test denied case - user otheruser accessing testuser-bucket
        let mut claims2 = HashMap::new();
        claims2.insert("username".to_string(), Value::String("otheruser".to_string()));

        let args2 = Args {
            account: "otheruser",
            groups: &None,
            action: Action::S3Action(crate::policy::action::S3Action::ListBucketAction),
            bucket: "testuser-bucket",
            conditions: &conditions,
            is_owner: false,
            object: "",
            claims: &claims2,
            deny_only: false,
        };

        assert!(pollster::block_on(policy.is_allowed(&args1)));
        assert!(!pollster::block_on(policy.is_allowed(&args2)));

        Ok(())
    }

    #[tokio::test]
    async fn test_aws_userid_policy_variable() -> Result<()> {
        let data = r#"
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["s3:ListBucket"],
                    "Resource": ["arn:aws:s3:::${aws:userid}-bucket"]
                }
            ]
        }
        "#;

        let policy = Policy::parse_config(data.as_bytes())?;

        let mut claims = HashMap::new();
        claims.insert("sub".to_string(), Value::String("AIDACKCEVSQ6C2EXAMPLE".to_string()));

        let conditions = HashMap::new();

        // Test allowed case
        let args1 = Args {
            account: "testuser",
            groups: &None,
            action: Action::S3Action(crate::policy::action::S3Action::ListBucketAction),
            bucket: "AIDACKCEVSQ6C2EXAMPLE-bucket",
            conditions: &conditions,
            is_owner: false,
            object: "",
            claims: &claims,
            deny_only: false,
        };

        // Test denied case
        let args2 = Args {
            account: "testuser",
            groups: &None,
            action: Action::S3Action(crate::policy::action::S3Action::ListBucketAction),
            bucket: "OTHERUSER-bucket",
            conditions: &conditions,
            is_owner: false,
            object: "",
            claims: &claims,
            deny_only: false,
        };

        assert!(pollster::block_on(policy.is_allowed(&args1)));
        assert!(!pollster::block_on(policy.is_allowed(&args2)));

        Ok(())
    }

    #[tokio::test]
    async fn test_aws_policy_variables_concatenation() -> Result<()> {
        let data = r#"
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["s3:ListBucket"],
                    "Resource": ["arn:aws:s3:::${aws:username}-${aws:userid}-bucket"]
                }
            ]
        }
        "#;

        let policy = Policy::parse_config(data.as_bytes())?;

        let mut claims = HashMap::new();
        claims.insert("username".to_string(), Value::String("testuser".to_string()));
        claims.insert("sub".to_string(), Value::String("AIDACKCEVSQ6C2EXAMPLE".to_string()));

        let conditions = HashMap::new();

        // Test allowed case
        let args1 = Args {
            account: "testuser",
            groups: &None,
            action: Action::S3Action(crate::policy::action::S3Action::ListBucketAction),
            bucket: "testuser-AIDACKCEVSQ6C2EXAMPLE-bucket",
            conditions: &conditions,
            is_owner: false,
            object: "",
            claims: &claims,
            deny_only: false,
        };

        // Test denied case
        let args2 = Args {
            account: "testuser",
            groups: &None,
            action: Action::S3Action(crate::policy::action::S3Action::ListBucketAction),
            bucket: "otheruser-AIDACKCEVSQ6C2EXAMPLE-bucket",
            conditions: &conditions,
            is_owner: false,
            object: "",
            claims: &claims,
            deny_only: false,
        };

        assert!(pollster::block_on(policy.is_allowed(&args1)));
        assert!(!pollster::block_on(policy.is_allowed(&args2)));

        Ok(())
    }

    #[tokio::test]
    async fn test_aws_policy_variables_nested() -> Result<()> {
        let data = r#"
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["s3:ListBucket"],
                    "Resource": ["arn:aws:s3:::${${aws:PrincipalType}-${aws:userid}}"]
                }
            ]
        }
        "#;

        let policy = Policy::parse_config(data.as_bytes())?;

        let mut claims = HashMap::new();
        claims.insert("sub".to_string(), Value::String("AIDACKCEVSQ6C2EXAMPLE".to_string()));
        // For PrincipalType, it will default to "User" when not explicitly set

        let conditions = HashMap::new();

        // Test allowed case
        let args1 = Args {
            account: "testuser",
            groups: &None,
            action: Action::S3Action(crate::policy::action::S3Action::ListBucketAction),
            bucket: "User-AIDACKCEVSQ6C2EXAMPLE",
            conditions: &conditions,
            is_owner: false,
            object: "",
            claims: &claims,
            deny_only: false,
        };

        // Test denied case
        let args2 = Args {
            account: "testuser",
            groups: &None,
            action: Action::S3Action(crate::policy::action::S3Action::ListBucketAction),
            bucket: "User-OTHERUSER",
            conditions: &conditions,
            is_owner: false,
            object: "",
            claims: &claims,
            deny_only: false,
        };

        assert!(pollster::block_on(policy.is_allowed(&args1)));
        assert!(!pollster::block_on(policy.is_allowed(&args2)));

        Ok(())
    }

    #[tokio::test]
    async fn test_aws_policy_variables_multi_value() -> Result<()> {
        let data = r#"
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": ["s3:ListBucket"],
                    "Resource": ["arn:aws:s3:::${aws:username}-bucket"]
                }
            ]
        }
        "#;

        let policy = Policy::parse_config(data.as_bytes())?;

        let mut claims = HashMap::new();
        // Test with array value for username
        claims.insert(
            "username".to_string(),
            Value::Array(vec![Value::String("user1".to_string()), Value::String("user2".to_string())]),
        );

        let conditions = HashMap::new();

        let args1 = Args {
            account: "user1",
            groups: &None,
            action: Action::S3Action(crate::policy::action::S3Action::ListBucketAction),
            bucket: "user1-bucket",
            conditions: &conditions,
            is_owner: false,
            object: "",
            claims: &claims,
            deny_only: false,
        };

        let args2 = Args {
            account: "user2",
            groups: &None,
            action: Action::S3Action(crate::policy::action::S3Action::ListBucketAction),
            bucket: "user2-bucket",
            conditions: &conditions,
            is_owner: false,
            object: "",
            claims: &claims,
            deny_only: false,
        };

        // Either user1 or user2 should be allowed
        assert!(pollster::block_on(policy.is_allowed(&args1)) || pollster::block_on(policy.is_allowed(&args2)));

        Ok(())
    }
}
@@ -24,15 +24,25 @@ use super::{
    Error as IamError, Validator,
    function::key_name::KeyName,
    utils::{path, wildcard},
    variables::PolicyVariableResolver,
};

#[derive(Serialize, Deserialize, Clone, Default, Debug)]
pub struct ResourceSet(pub HashSet<Resource>);

impl ResourceSet {
    pub fn is_match(&self, resource: &str, conditions: &HashMap<String, Vec<String>>) -> bool {
    pub async fn is_match(&self, resource: &str, conditions: &HashMap<String, Vec<String>>) -> bool {
        self.is_match_with_resolver(resource, conditions, None).await
    }

    pub async fn is_match_with_resolver(
        &self,
        resource: &str,
        conditions: &HashMap<String, Vec<String>>,
        resolver: Option<&dyn PolicyVariableResolver>,
    ) -> bool {
        for re in self.0.iter() {
            if re.is_match(resource, conditions) {
            if re.is_match_with_resolver(resource, conditions, resolver).await {
                return true;
            }
        }
@@ -40,9 +50,9 @@ impl ResourceSet {
        false
    }

    pub fn match_resource(&self, resource: &str) -> bool {
    pub async fn match_resource(&self, resource: &str) -> bool {
        for re in self.0.iter() {
            if re.match_resource(resource) {
            if re.match_resource(resource).await {
                return true;
            }
        }
@@ -85,31 +95,56 @@ pub enum Resource {
impl Resource {
    pub const S3_PREFIX: &'static str = "arn:aws:s3:::";

    pub fn is_match(&self, resource: &str, conditions: &HashMap<String, Vec<String>>) -> bool {
        let mut pattern = match self {
    pub async fn is_match(&self, resource: &str, conditions: &HashMap<String, Vec<String>>) -> bool {
        self.is_match_with_resolver(resource, conditions, None).await
    }

    pub async fn is_match_with_resolver(
        &self,
        resource: &str,
        conditions: &HashMap<String, Vec<String>>,
        resolver: Option<&dyn PolicyVariableResolver>,
    ) -> bool {
        let pattern = match self {
            Resource::S3(s) => s.to_owned(),
            Resource::Kms(s) => s.to_owned(),
        };
        if !conditions.is_empty() {
            for key in KeyName::COMMON_KEYS {
                if let Some(rvalue) = conditions.get(key.name()) {
                    if matches!(rvalue.first().map(|c| !c.is_empty()), Some(true)) {
                        pattern = pattern.replace(&key.var_name(), &rvalue[0]);

        let patterns = if let Some(res) = resolver {
            super::variables::resolve_aws_variables(&pattern, res).await
        } else {
            vec![pattern.clone()]
        };

        for pattern in patterns {
            let mut resolved_pattern = pattern;

            // Apply condition substitutions
            if !conditions.is_empty() {
                for key in KeyName::COMMON_KEYS {
                    if let Some(rvalue) = conditions.get(key.name()) {
                        if matches!(rvalue.first().map(|c| !c.is_empty()), Some(true)) {
                            resolved_pattern = resolved_pattern.replace(&key.var_name(), &rvalue[0]);
                        }
                    }
                }
            }

            let cp = path::clean(resource);
            if cp != "." && cp == resolved_pattern.as_str() {
                return true;
            }

            if wildcard::is_match(resolved_pattern, resource) {
                return true;
            }
        }

        let cp = path::clean(resource);
        if cp != "." && cp == pattern.as_str() {
            return true;
        }

        wildcard::is_match(pattern, resource)
        false
    }

    pub fn match_resource(&self, resource: &str) -> bool {
        self.is_match(resource, &HashMap::new())
    pub async fn match_resource(&self, resource: &str) -> bool {
        self.is_match(resource, &HashMap::new()).await
    }
}

@@ -197,6 +232,7 @@ mod tests {
    #[test_case("arn:aws:s3:::mybucket","mybucket/myobject" => false; "15")]
    fn test_resource_is_match(resource: &str, object: &str) -> bool {
        let resource: Resource = resource.try_into().unwrap();
        resource.is_match(object, &HashMap::new())

        pollster::block_on(resource.is_match(object, &HashMap::new()))
    }
}
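Resource matching now expands `${aws:...}` variables into one or more candidate patterns before the existing path-clean and wildcard checks run. A standalone model of that expand-then-match loop (wildcard matching is simplified to a single trailing `*` here, and the resolver is reduced to one hard-coded substitution):

```rust
// Simplified wildcard: exact match, or a single trailing '*' prefix match.
fn wildcard_is_match(pattern: &str, value: &str) -> bool {
    match pattern.strip_suffix('*') {
        Some(prefix) => value.starts_with(prefix),
        None => pattern == value,
    }
}

// Models Resource::is_match_with_resolver: try every resolved candidate pattern.
fn is_match(pattern_template: &str, username: &str, resource: &str) -> bool {
    // Stand-in for resolve_aws_variables: expand ${aws:username} only.
    let candidates = vec![pattern_template.replace("${aws:username}", username)];

    for candidate in candidates {
        if candidate == resource || wildcard_is_match(&candidate, resource) {
            return true;
        }
    }
    false
}

fn main() {
    let template = "arn:aws:s3:::${aws:username}-*";
    assert!(is_match(template, "testuser", "arn:aws:s3:::testuser-bucket"));
    assert!(!is_match(template, "otheruser", "arn:aws:s3:::testuser-bucket"));
}
```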
@@ -15,6 +15,7 @@
use super::{
    ActionSet, Args, BucketPolicyArgs, Effect, Error as IamError, Functions, ID, Principal, ResourceSet, Validator,
    action::Action,
    variables::{VariableContext, VariableResolver},
};
use crate::error::{Error, Result};
use serde::{Deserialize, Serialize};
@@ -68,7 +69,24 @@ impl Statement {
        false
    }

    pub fn is_allowed(&self, args: &Args) -> bool {
    pub async fn is_allowed(&self, args: &Args<'_>) -> bool {
        let mut context = VariableContext::new();
        context.claims = Some(args.claims.clone());
        context.conditions = args.conditions.clone();
        context.account_id = Some(args.account.to_string());

        let username = if let Some(parent) = args.claims.get("parent").and_then(|v| v.as_str()) {
            // For temp credentials or service account credentials, username is parent_user
            parent.to_string()
        } else {
            // For regular user credentials, username is access_key
            args.account.to_string()
        };

        context.username = Some(username);

        let resolver = VariableResolver::new(context);

        let check = 'c: {
            if (!self.actions.is_match(&args.action) && !self.actions.is_empty()) || self.not_actions.is_match(&args.action) {
                break 'c false;
@@ -86,14 +104,20 @@ impl Statement {
            }

            if self.is_kms() && (resource == "/" || self.resources.is_empty()) {
                break 'c self.conditions.evaluate(args.conditions);
                break 'c self.conditions.evaluate_with_resolver(args.conditions, Some(&resolver)).await;
            }

            if !self.resources.is_match(&resource, args.conditions) && !self.is_admin() && !self.is_sts() {
            if !self
                .resources
                .is_match_with_resolver(&resource, args.conditions, Some(&resolver))
                .await
                && !self.is_admin()
                && !self.is_sts()
            {
                break 'c false;
            }

            self.conditions.evaluate(args.conditions)
            self.conditions.evaluate_with_resolver(args.conditions, Some(&resolver)).await
        };

        self.effect.is_allowed(check)
@@ -155,7 +179,7 @@ pub struct BPStatement {
}

impl BPStatement {
    pub fn is_allowed(&self, args: &BucketPolicyArgs) -> bool {
    pub async fn is_allowed(&self, args: &BucketPolicyArgs<'_>) -> bool {
        let check = 'c: {
            if !self.principal.is_match(args.account) {
                break 'c false;
@@ -176,15 +200,15 @@ impl BPStatement {
                resource.push('/');
            }

            if !self.resources.is_empty() && !self.resources.is_match(&resource, args.conditions) {
            if !self.resources.is_empty() && !self.resources.is_match(&resource, args.conditions).await {
                break 'c false;
            }

            if !self.not_resources.is_empty() && self.not_resources.is_match(&resource, args.conditions) {
            if !self.not_resources.is_empty() && self.not_resources.is_match(&resource, args.conditions).await {
                break 'c false;
            }

            self.conditions.evaluate(args.conditions)
            self.conditions.evaluate(args.conditions).await
        };

        self.effect.is_allowed(check)
465 crates/policy/src/policy/variables.rs Normal file
@@ -0,0 +1,465 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use async_trait::async_trait;
use moka::future::Cache;
use serde_json::Value;
use std::collections::HashMap;
use std::future::Future;
use std::time::Duration;
use time::OffsetDateTime;

/// Context information for variable resolution
#[derive(Debug, Clone, Default)]
pub struct VariableContext {
    pub is_https: bool,
    pub source_ip: Option<String>,
    pub account_id: Option<String>,
    pub region: Option<String>,
    pub username: Option<String>,
    pub claims: Option<HashMap<String, Value>>,
    pub conditions: HashMap<String, Vec<String>>,
    pub custom_variables: HashMap<String, String>,
}

impl VariableContext {
    pub fn new() -> Self {
        Self::default()
    }
}

pub struct VariableResolverCache {
    /// Moka cache storing resolved results
    cache: Cache<String, String>,
}

impl VariableResolverCache {
    pub fn new(capacity: usize, ttl_seconds: u64) -> Self {
        let cache = Cache::builder()
            .max_capacity(capacity as u64)
            .time_to_live(Duration::from_secs(ttl_seconds))
            .build();

        Self { cache }
    }

    pub async fn get(&self, key: &str) -> Option<String> {
        self.cache.get(key).await
    }

    pub async fn put(&self, key: String, value: String) {
        self.cache.insert(key, value).await;
    }

    pub async fn clear(&self) {
        self.cache.invalidate_all();
    }
}

/// Cached dynamic AWS variable resolver
pub struct CachedAwsVariableResolver {
    inner: VariableResolver,
    cache: VariableResolverCache,
}

impl CachedAwsVariableResolver {
    pub fn new(context: VariableContext) -> Self {
        Self {
            inner: VariableResolver::new(context),
            cache: VariableResolverCache::new(100, 300), // 100 entries, 5 minutes expiration
        }
    }

    pub fn is_dynamic(&self, variable_name: &str) -> bool {
        self.inner.is_dynamic(variable_name)
    }
}

#[async_trait]
impl PolicyVariableResolver for CachedAwsVariableResolver {
    async fn resolve(&self, variable_name: &str) -> Option<String> {
        if self.is_dynamic(variable_name) {
            return self.inner.resolve(variable_name).await;
        }

        if let Some(cached) = self.cache.get(variable_name).await {
            return Some(cached);
        }

        let value = self.inner.resolve(variable_name).await?;
        self.cache.put(variable_name.to_string(), value.clone()).await;
        Some(value)
    }

    async fn resolve_multiple(&self, variable_name: &str) -> Option<Vec<String>> {
        self.inner.resolve_multiple(variable_name).await
    }

    fn is_dynamic(&self, variable_name: &str) -> bool {
        self.inner.is_dynamic(variable_name)
    }
}

/// Policy variable resolver trait
#[async_trait]
pub trait PolicyVariableResolver: Sync {
    async fn resolve(&self, variable_name: &str) -> Option<String>;
    async fn resolve_multiple(&self, variable_name: &str) -> Option<Vec<String>> {
        self.resolve(variable_name).await.map(|s| vec![s])
    }
    fn is_dynamic(&self, variable_name: &str) -> bool;
}

/// AWS variable resolver
pub struct VariableResolver {
    context: VariableContext,
}

impl VariableResolver {
    pub fn new(context: VariableContext) -> Self {
        Self { context }
    }

    fn get_claim_as_strings(&self, claim_name: &str) -> Option<Vec<String>> {
        self.context
            .claims
            .as_ref()
            .and_then(|claims| claims.get(claim_name))
            .and_then(|value| match value {
                Value::String(s) => Some(vec![s.clone()]),
                Value::Array(arr) => Some(
                    arr.iter()
                        .filter_map(|item| match item {
                            Value::String(s) => Some(s.clone()),
                            Value::Number(n) => Some(n.to_string()),
                            Value::Bool(b) => Some(b.to_string()),
                            _ => None,
                        })
                        .collect(),
                ),
                Value::Number(n) => Some(vec![n.to_string()]),
                Value::Bool(b) => Some(vec![b.to_string()]),
                _ => None,
            })
    }

    fn resolve_username(&self) -> Option<String> {
        self.context.username.clone()
    }

    fn resolve_userid(&self) -> Option<String> {
        self.get_claim_as_strings("sub")
            .or_else(|| self.get_claim_as_strings("parent"))
            .and_then(|mut vec| vec.pop()) // Take a single value, keeping the original logic
    }

    fn resolve_principal_type(&self) -> String {
        if let Some(claims) = &self.context.claims {
            if claims.contains_key("roleArn") {
                return "AssumedRole".to_string();
            }

            if claims.contains_key("parent") && claims.contains_key("sa-policy") {
                return "ServiceAccount".to_string();
            }
        }

        "User".to_string()
    }

    fn resolve_secure_transport(&self) -> String {
        if self.context.is_https { "true" } else { "false" }.to_string()
    }

    fn resolve_current_time(&self) -> String {
        let now = OffsetDateTime::now_utc();
        now.format(&time::format_description::well_known::Rfc3339)
            .unwrap_or_else(|_| now.to_string())
    }

    fn resolve_epoch_time(&self) -> String {
        OffsetDateTime::now_utc().unix_timestamp().to_string()
    }

    fn resolve_account_id(&self) -> Option<String> {
        self.context.account_id.clone()
    }

    fn resolve_region(&self) -> Option<String> {
        self.context.region.clone()
    }

    fn resolve_source_ip(&self) -> Option<String> {
        self.context.source_ip.clone()
    }

    fn resolve_custom_variable(&self, variable_name: &str) -> Option<String> {
        let custom_key = variable_name.strip_prefix("custom:")?;
        self.context.custom_variables.get(custom_key).cloned()
    }
}

#[async_trait]
impl PolicyVariableResolver for VariableResolver {
    async fn resolve(&self, variable_name: &str) -> Option<String> {
        match variable_name {
            "aws:username" => self.resolve_username(),
            "aws:userid" => self.resolve_userid(),
            "aws:PrincipalType" => Some(self.resolve_principal_type()),
            "aws:SecureTransport" => Some(self.resolve_secure_transport()),
            "aws:CurrentTime" => Some(self.resolve_current_time()),
            "aws:EpochTime" => Some(self.resolve_epoch_time()),
            "aws:AccountId" => self.resolve_account_id(),
            "aws:Region" => self.resolve_region(),
            "aws:SourceIp" => self.resolve_source_ip(),
            _ => {
                // Handle custom:* variables
                if variable_name.starts_with("custom:") {
                    self.resolve_custom_variable(variable_name)
                } else {
                    None
                }
            }
        }
    }

    async fn resolve_multiple(&self, variable_name: &str) -> Option<Vec<String>> {
        match variable_name {
            "aws:username" => self.resolve_username().map(|s| vec![s]),

            "aws:userid" => self
                .get_claim_as_strings("sub")
                .or_else(|| self.get_claim_as_strings("parent")),

            _ => self.resolve(variable_name).await.map(|s| vec![s]),
        }
    }

    fn is_dynamic(&self, variable_name: &str) -> bool {
        matches!(variable_name, "aws:CurrentTime" | "aws:EpochTime")
    }
}

pub async fn resolve_aws_variables(pattern: &str, resolver: &dyn PolicyVariableResolver) -> Vec<String> {
    let mut results = vec![pattern.to_string()];

    let mut changed = true;
    let max_iterations = 10; // Prevent infinite loops
    let mut iteration = 0;

    while changed && iteration < max_iterations {
        changed = false;
        iteration += 1;

        let mut new_results = Vec::new();
        for result in &results {
            let resolved = resolve_single_pass(result, resolver).await;
            if resolved.len() > 1 || (resolved.len() == 1 && &resolved[0] != result) {
                changed = true;
            }
            new_results.extend(resolved);
        }

        // Remove duplicates while preserving order
        results.clear();
        let mut seen = std::collections::HashSet::new();
        for result in new_results {
            if seen.insert(result.clone()) {
                results.push(result);
            }
        }
    }

    results
}

// Need to box the future to avoid infinite size due to recursion
fn resolve_aws_variables_boxed<'a>(
    pattern: &'a str,
    resolver: &'a dyn PolicyVariableResolver,
) -> std::pin::Pin<Box<dyn Future<Output = Vec<String>> + Send + 'a>> {
    Box::pin(resolve_aws_variables(pattern, resolver))
}

/// Single pass resolution of variables in a string
async fn resolve_single_pass(pattern: &str, resolver: &dyn PolicyVariableResolver) -> Vec<String> {
    // Find all ${...} format variables
    let mut results = vec![pattern.to_string()];

    // Process each result string
    let mut i = 0;
    while i < results.len() {
        let mut start = 0;
        let mut modified = false;

        // Find variables in current string
        while let Some(pos) = results[i][start..].find("${") {
            let actual_pos = start + pos;

            // Find the matching closing brace, taking into account nested braces
            let mut brace_count = 1;
            let mut end_pos = actual_pos + 2; // Start after "${"

            while end_pos < results[i].len() && brace_count > 0 {
                match results[i].chars().nth(end_pos).unwrap() {
                    '{' => brace_count += 1,
                    '}' => brace_count -= 1,
                    _ => {}
                }
                if brace_count > 0 {
                    end_pos += 1;
                }
            }

            if brace_count == 0 {
                let var_name = &results[i][actual_pos + 2..end_pos];

                // Check if this is a nested variable (contains ${...} inside)
                if var_name.contains("${") {
                    // For nested variables like ${${a}-${b}}, we need to resolve the inner variables first
                    // Then use the resolved result as a new variable to resolve
                    let resolved_inner = resolve_aws_variables_boxed(var_name, resolver).await;
                    let mut new_results = Vec::new();

                    for resolved_var_name in resolved_inner {
                        let prefix = &results[i][..actual_pos];
                        let suffix = &results[i][end_pos + 1..];
                        new_results.push(format!("{prefix}{resolved_var_name}{suffix}"));
                    }

                    if !new_results.is_empty() {
                        // Update result set
                        results.splice(i..i + 1, new_results);
                        modified = true;
                        break;
                    } else {
                        // If we couldn't resolve the nested variable, keep the original
                        start = end_pos + 1;
                    }
                } else {
                    // Regular variable resolution
                    if let Some(values) = resolver.resolve_multiple(var_name).await {
                        if !values.is_empty() {
                            // If there are multiple values, create a new result for each value
                            let mut new_results = Vec::new();
                            let prefix = &results[i][..actual_pos];
                            let suffix = &results[i][end_pos + 1..];

                            for value in values {
                                new_results.push(format!("{prefix}{value}{suffix}"));
                            }

                            results.splice(i..i + 1, new_results);
                            modified = true;
                            break;
                        } else {
                            // Variable resolved to empty, just remove the variable placeholder
                            let mut new_results = Vec::new();
                            let prefix = &results[i][..actual_pos];
                            let suffix = &results[i][end_pos + 1..];
                            new_results.push(format!("{prefix}{suffix}"));

                            results.splice(i..i + 1, new_results);
                            modified = true;
                            break;
                        }
                    } else {
                        // Variable not found, skip
                        start = end_pos + 1;
                    }
                }
            } else {
                // No matching closing brace found, break loop
                break;
            }
        }

        if !modified {
            i += 1;
        }
    }

    results
}

#[cfg(test)]
mod tests {
    use super::*;
    use serde_json::Value;
    use std::collections::HashMap;

    #[tokio::test]
    async fn test_resolve_aws_variables_with_username() {
        let mut context = VariableContext::new();
        context.username = Some("testuser".to_string());

        let resolver = VariableResolver::new(context);
        let result = resolve_aws_variables("${aws:username}-bucket", &resolver).await;
        assert_eq!(result, vec!["testuser-bucket".to_string()]);
    }

    #[tokio::test]
    async fn test_resolve_aws_variables_with_userid() {
        let mut claims = HashMap::new();
        claims.insert("sub".to_string(), Value::String("AIDACKCEVSQ6C2EXAMPLE".to_string()));

        let mut context = VariableContext::new();
        context.claims = Some(claims);

        let resolver = VariableResolver::new(context);
        let result = resolve_aws_variables("${aws:userid}-bucket", &resolver).await;
        assert_eq!(result, vec!["AIDACKCEVSQ6C2EXAMPLE-bucket".to_string()]);
    }

    #[tokio::test]
    async fn test_resolve_aws_variables_with_multiple_variables() {
        let mut claims = HashMap::new();
        claims.insert("sub".to_string(), Value::String("AIDACKCEVSQ6C2EXAMPLE".to_string()));

        let mut context = VariableContext::new();
        context.claims = Some(claims);
        context.username = Some("testuser".to_string());

        let resolver = VariableResolver::new(context);
        let result = resolve_aws_variables("${aws:username}-${aws:userid}-bucket", &resolver).await;
        assert_eq!(result, vec!["testuser-AIDACKCEVSQ6C2EXAMPLE-bucket".to_string()]);
    }

    #[tokio::test]
    async fn test_resolve_aws_variables_no_variables() {
        let context = VariableContext::new();
        let resolver = VariableResolver::new(context);

        let result = resolve_aws_variables("test-bucket", &resolver).await;
        assert_eq!(result, vec!["test-bucket".to_string()]);
    }

    #[tokio::test]
    async fn test_cached_aws_variable_resolver_dynamic_variables() {
        let context = VariableContext::new();

        let cached_resolver = CachedAwsVariableResolver::new(context);

        // Dynamic variables should not be cached
        let result1 = resolve_aws_variables("${aws:EpochTime}-bucket", &cached_resolver).await;

        // Add a delay of 1 second to ensure different timestamps
        tokio::time::sleep(Duration::from_secs(1)).await;

        let result2 = resolve_aws_variables("${aws:EpochTime}-bucket", &cached_resolver).await;

        // Both results should be different (different timestamps)
        assert_ne!(result1, result2);
    }
}
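The `PolicyVariableResolver` trait above is used through `&dyn` receivers, so deployments could plug in their own variable sources. A hedged sketch of a custom implementation backed by a static map; the trait shape is copied from the new file, everything else is illustrative:

```rust
use std::collections::HashMap;

use async_trait::async_trait;

// Trait shape as introduced in variables.rs.
#[async_trait]
pub trait PolicyVariableResolver: Sync {
    async fn resolve(&self, variable_name: &str) -> Option<String>;
    async fn resolve_multiple(&self, variable_name: &str) -> Option<Vec<String>> {
        self.resolve(variable_name).await.map(|s| vec![s])
    }
    fn is_dynamic(&self, variable_name: &str) -> bool;
}

// Illustrative resolver that serves variables from an in-memory table.
struct StaticResolver {
    table: HashMap<String, String>,
}

#[async_trait]
impl PolicyVariableResolver for StaticResolver {
    async fn resolve(&self, variable_name: &str) -> Option<String> {
        self.table.get(variable_name).cloned()
    }

    fn is_dynamic(&self, _variable_name: &str) -> bool {
        false // nothing here changes between calls, so results are cacheable
    }
}

#[tokio::main]
async fn main() {
    let mut table = HashMap::new();
    table.insert("aws:username".to_string(), "testuser".to_string());

    let resolver = StaticResolver { table };
    assert_eq!(resolver.resolve("aws:username").await.as_deref(), Some("testuser"));
}
```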
@@ -612,7 +612,7 @@ struct ArgsBuilder {
    "24"
)]
fn policy_is_allowed(policy: Policy, args: ArgsBuilder) -> bool {
    policy.is_allowed(&Args {
    pollster::block_on(policy.is_allowed(&Args {
        account: &args.account,
        groups: &{
            if args.groups.is_empty() {
@@ -628,5 +628,5 @@ fn policy_is_allowed(policy: Policy, args: ArgsBuilder) -> bool {
        object: &args.object,
        claims: &args.claims,
        deny_only: args.deny_only,
    })
    }))
}
@@ -15,19 +15,19 @@
#[allow(unsafe_code)]
mod generated;

use std::{error::Error, time::Duration};

pub use generated::*;
use proto_gen::node_service::node_service_client::NodeServiceClient;
use rustfs_common::globals::{GLOBAL_Conn_Map, evict_connection};
use rustfs_common::globals::{GLOBAL_CONN_MAP, GLOBAL_ROOT_CERT, evict_connection};
use std::{error::Error, time::Duration};
use tonic::{
    Request, Status,
    metadata::MetadataValue,
    service::interceptor::InterceptedService,
    transport::{Channel, Endpoint},
    transport::{Certificate, Channel, ClientTlsConfig, Endpoint},
};
use tracing::{debug, warn};

pub use generated::*;

// Default 100 MB
pub const DEFAULT_GRPC_SERVER_MESSAGE_LEN: usize = 100 * 1024 * 1024;
@@ -46,6 +46,12 @@ const HTTP2_KEEPALIVE_TIMEOUT_SECS: u64 = 3;
 /// Overall RPC timeout - maximum time for any single RPC operation
 const RPC_TIMEOUT_SECS: u64 = 30;
 
+/// Default HTTPS prefix for rustfs
+/// This is the default HTTPS prefix for rustfs.
+/// It is used to identify HTTPS URLs.
+/// Default value: https://
+const RUSTFS_HTTPS_PREFIX: &str = "https://";
+
 /// Creates a new gRPC channel with optimized keepalive settings for cluster resilience.
 ///
 /// This function is designed to detect dead peers quickly:
@@ -56,7 +62,7 @@ const RPC_TIMEOUT_SECS: u64 = 30;
 async fn create_new_channel(addr: &str) -> Result<Channel, Box<dyn Error>> {
     debug!("Creating new gRPC channel to: {}", addr);
 
-    let connector = Endpoint::from_shared(addr.to_string())?
+    let mut connector = Endpoint::from_shared(addr.to_string())?
         // Fast connection timeout for dead peer detection
         .connect_timeout(Duration::from_secs(CONNECT_TIMEOUT_SECS))
         // TCP-level keepalive - OS will probe connection
@@ -70,11 +76,42 @@ async fn create_new_channel(addr: &str) -> Result<Channel, Box<dyn Error>> {
         // Overall timeout for any RPC - fail fast on unresponsive peers
         .timeout(Duration::from_secs(RPC_TIMEOUT_SECS));
 
+    let root_cert = GLOBAL_ROOT_CERT.read().await;
+    if addr.starts_with(RUSTFS_HTTPS_PREFIX) {
+        if let Some(cert_pem) = root_cert.as_ref() {
+            let ca = Certificate::from_pem(cert_pem);
+            // Derive the hostname from the HTTPS URL for TLS hostname verification.
+            let domain = addr
+                .trim_start_matches(RUSTFS_HTTPS_PREFIX)
+                .split('/')
+                .next()
+                .unwrap_or("")
+                .split(':')
+                .next()
+                .unwrap_or("");
+            let tls = if !domain.is_empty() {
+                ClientTlsConfig::new().ca_certificate(ca).domain_name(domain)
+            } else {
+                // Fallback: configure TLS without explicit domain if parsing fails.
+                ClientTlsConfig::new().ca_certificate(ca)
+            };
+            connector = connector.tls_config(tls)?;
+            debug!("Configured TLS with custom root certificate for: {}", addr);
+        } else {
+            debug!("Using system root certificates for TLS: {}", addr);
+        }
+    } else {
+        // Custom root certificates are configured but will be ignored for non-HTTPS addresses.
+        if root_cert.is_some() {
+            warn!("Custom root certificates are configured but not used because the address does not use HTTPS: {addr}");
+        }
+    }
+
     let channel = connector.connect().await?;
 
     // Cache the new connection
     {
-        GLOBAL_Conn_Map.write().await.insert(addr.to_string(), channel.clone());
+        GLOBAL_CONN_MAP.write().await.insert(addr.to_string(), channel.clone());
     }
 
     debug!("Successfully created and cached gRPC channel to: {}", addr);
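The TLS hostname for certificate verification is recovered from the URL by plain string slicing: strip the scheme, cut at the first path separator, then drop any port. Extracted as a standalone sketch (the function name is ours):

```rust
// Mirror of the derivation above: "https://host:port/path" -> "host".
fn derive_tls_domain(addr: &str) -> &str {
    addr.trim_start_matches("https://")
        .split('/')
        .next()
        .unwrap_or("")
        .split(':')
        .next()
        .unwrap_or("")
}

fn main() {
    assert_eq!(derive_tls_domain("https://node1.example.com:9000/rpc"), "node1.example.com");
    assert_eq!(derive_tls_domain("https://node1.example.com"), "node1.example.com");
}
```

One caveat of this approach: a bracketed IPv6 literal such as `https://[::1]:9000` would be cut at its first `:` and yield a mangled name rather than triggering the empty-domain fallback.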
@@ -111,7 +148,7 @@ pub async fn node_service_time_out_client(
     let token: MetadataValue<_> = "rustfs rpc".parse()?;
 
     // Try to get cached channel
-    let cached_channel = { GLOBAL_Conn_Map.read().await.get(addr).cloned() };
+    let cached_channel = { GLOBAL_CONN_MAP.read().await.get(addr).cloned() };
 
     let channel = match cached_channel {
         Some(channel) => {
@@ -46,7 +46,7 @@ fn main() -> Result<(), AnyError> {
     };
 
     if !need_compile {
-        println!("no need to compile protos.{}", need_compile);
+        println!("no need to compile protos.{need_compile}");
         return Ok(());
     }
 
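This is the inline format-arguments style (stable since Rust 1.58); both forms print the same text, as a quick check shows:

```rust
fn main() {
    let need_compile = false;
    assert_eq!(
        format!("no need to compile protos.{}", need_compile),
        format!("no need to compile protos.{need_compile}")
    );
}
```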
@@ -39,6 +39,7 @@ object_store = { workspace = true }
 pin-project-lite.workspace = true
 s3s.workspace = true
 snafu = { workspace = true, features = ["backtrace"] }
+parking_lot.workspace = true
 tokio.workspace = true
 tokio-util.workspace = true
 tracing.workspace = true
@@ -15,10 +15,11 @@
 use std::fmt::Display;
 use std::pin::Pin;
 use std::sync::Arc;
-use std::sync::atomic::{AtomicPtr, Ordering};
 use std::task::{Context, Poll};
 use std::time::{Duration, Instant};
 
+use parking_lot::RwLock;
+
 use async_trait::async_trait;
 use datafusion::arrow::datatypes::{Schema, SchemaRef};
 use datafusion::arrow::record_batch::RecordBatch;
@@ -132,7 +133,7 @@ pub struct QueryStateMachine {
     pub session: SessionCtx,
     pub query: Query,
 
-    state: AtomicPtr<QueryState>,
+    state: RwLock<QueryState>,
     start: Instant,
 }
 
@@ -141,14 +142,14 @@ impl QueryStateMachine {
         Self {
             session,
             query,
-            state: AtomicPtr::new(Box::into_raw(Box::new(QueryState::ACCEPTING))),
+            state: RwLock::new(QueryState::ACCEPTING),
             start: Instant::now(),
         }
     }
 
     pub fn begin_analyze(&self) {
         // TODO record time
-        self.translate_to(Box::new(QueryState::RUNNING(RUNNING::ANALYZING)));
+        self.translate_to(QueryState::RUNNING(RUNNING::ANALYZING));
     }
 
     pub fn end_analyze(&self) {
@@ -157,7 +158,7 @@ impl QueryStateMachine {
 
     pub fn begin_optimize(&self) {
         // TODO record time
-        self.translate_to(Box::new(QueryState::RUNNING(RUNNING::OPTIMIZING)));
+        self.translate_to(QueryState::RUNNING(RUNNING::OPTIMIZING));
     }
 
     pub fn end_optimize(&self) {
@@ -166,7 +167,7 @@ impl QueryStateMachine {
 
     pub fn begin_schedule(&self) {
         // TODO
-        self.translate_to(Box::new(QueryState::RUNNING(RUNNING::SCHEDULING)));
+        self.translate_to(QueryState::RUNNING(RUNNING::SCHEDULING));
     }
 
     pub fn end_schedule(&self) {
@@ -175,29 +176,29 @@ impl QueryStateMachine {
 
     pub fn finish(&self) {
         // TODO
-        self.translate_to(Box::new(QueryState::DONE(DONE::FINISHED)));
+        self.translate_to(QueryState::DONE(DONE::FINISHED));
     }
 
     pub fn cancel(&self) {
         // TODO
-        self.translate_to(Box::new(QueryState::DONE(DONE::CANCELLED)));
+        self.translate_to(QueryState::DONE(DONE::CANCELLED));
     }
 
     pub fn fail(&self) {
         // TODO
-        self.translate_to(Box::new(QueryState::DONE(DONE::FAILED)));
+        self.translate_to(QueryState::DONE(DONE::FAILED));
     }
 
-    pub fn state(&self) -> &QueryState {
-        unsafe { &*self.state.load(Ordering::Relaxed) }
+    pub fn state(&self) -> QueryState {
+        self.state.read().clone()
     }
 
     pub fn duration(&self) -> Duration {
         self.start.elapsed()
     }
 
-    fn translate_to(&self, state: Box<QueryState>) {
-        self.state.store(Box::into_raw(state), Ordering::Relaxed);
+    fn translate_to(&self, state: QueryState) {
+        *self.state.write() = state;
     }
 }
 
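Worth noting why this refactor matters: the old `AtomicPtr` scheme leaked every previous state (each `Box::into_raw` had no matching drop) and relied on `unsafe` pointer reads, while the `RwLock` version pays a small clone on read in exchange for soundness. A reduced sketch of the resulting pattern, with all names ours:

```rust
use parking_lot::RwLock;

#[derive(Clone, Debug, PartialEq)]
enum State {
    Accepting,
    Running,
    Done,
}

struct Machine {
    state: RwLock<State>,
}

impl Machine {
    fn translate_to(&self, next: State) {
        // Exclusive write lock replaces the raw-pointer store; no unsafe needed.
        *self.state.write() = next;
    }

    fn state(&self) -> State {
        // Return a clone rather than a reference into guarded storage.
        self.state.read().clone()
    }
}

fn main() {
    let m = Machine { state: RwLock::new(State::Accepting) };
    m.translate_to(State::Running);
    assert_eq!(m.state(), State::Running);
    m.translate_to(State::Done);
    assert_eq!(m.state(), State::Done);
}
```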
@@ -12,7 +12,6 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
 
-use serde::{Deserialize, Serialize};
 use std::fmt;
 
 /// Error returned when parsing event name string fails.
@@ -29,7 +28,7 @@ impl std::error::Error for ParseEventNameError {}
 
 /// Represents the type of event that occurs on the object.
 /// Based on AWS S3 event type and includes RustFS extension.
-#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq, Hash, Default)]
+#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash, Default)]
 pub enum EventName {
     // Single event type (values are 1-32 for compatible mask logic)
     ObjectAccessedGet = 1,
@@ -289,3 +288,79 @@ impl From<&str> for EventName {
         EventName::parse(event_str).unwrap_or_else(|e| panic!("{}", e))
     }
 }
+
+impl serde::ser::Serialize for EventName {
+    fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error>
+    where
+        S: serde::ser::Serializer,
+    {
+        serializer.serialize_str(self.as_str())
+    }
+}
+
+impl<'de> serde::de::Deserialize<'de> for EventName {
+    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error>
+    where
+        D: serde::de::Deserializer<'de>,
+    {
+        let s = String::deserialize(deserializer)?;
+        let s = Self::parse(&s).map_err(serde::de::Error::custom)?;
+        Ok(s)
+    }
+}
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    // test serialization
+    #[test]
+    fn test_event_name_serialization_and_deserialization() {
+        struct TestCase {
+            event: EventName,
+            serialized_str: &'static str,
+        }
+
+        let test_cases = vec![
+            TestCase {
+                event: EventName::BucketCreated,
+                serialized_str: "\"s3:BucketCreated:*\"",
+            },
+            TestCase {
+                event: EventName::ObjectCreatedAll,
+                serialized_str: "\"s3:ObjectCreated:*\"",
+            },
+            TestCase {
+                event: EventName::ObjectCreatedPut,
+                serialized_str: "\"s3:ObjectCreated:Put\"",
+            },
+        ];
+
+        for case in &test_cases {
+            let serialized = serde_json::to_string(&case.event);
+            assert!(serialized.is_ok(), "Serialization failed for `{}`", case.serialized_str);
+            assert_eq!(serialized.unwrap(), case.serialized_str);
+
+            let deserialized = serde_json::from_str::<EventName>(case.serialized_str);
+            assert!(deserialized.is_ok(), "Deserialization failed for `{}`", case.serialized_str);
+            assert_eq!(deserialized.unwrap(), case.event);
+        }
+    }
+
+    #[test]
+    fn test_invalid_event_name_deserialization() {
+        let invalid_str = "\"s3:InvalidEvent:Test\"";
+        let deserialized = serde_json::from_str::<EventName>(invalid_str);
+        assert!(deserialized.is_err(), "Deserialization should fail for invalid event name");
+
+        // Serializing EventName::Everything produces an empty string, but deserializing an empty string should fail.
+        let event_name = EventName::Everything;
+        let serialized_str = "\"\"";
+        let serialized = serde_json::to_string(&event_name);
+        assert!(serialized.is_ok(), "Serialization failed for `{serialized_str}`");
+        assert_eq!(serialized.unwrap(), serialized_str);
+
+        let deserialized = serde_json::from_str::<EventName>(serialized_str);
+        assert!(deserialized.is_err(), "Deserialization should fail for empty string");
+    }
+}
 
@@ -27,6 +27,7 @@ pub use target::Target;
 
 /// Represents a log of events for sending to targets
 #[derive(Debug, Clone, Serialize, Deserialize)]
+#[serde(rename_all = "PascalCase")]
 pub struct TargetLog<E> {
     /// The event name
     pub event_name: EventName,
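With `rename_all = "PascalCase"`, the snake_case field names serialize as PascalCase keys, so `event_name` becomes `EventName` on the wire. A reduced sketch of the effect, substituting `String` for the event type:

```rust
use serde::Serialize;

#[derive(Serialize)]
#[serde(rename_all = "PascalCase")]
struct TargetLog {
    event_name: String, // stands in for the EventName enum
}

fn main() {
    let log = TargetLog {
        event_name: "s3:ObjectCreated:Put".to_string(),
    };
    // The field serializes under the PascalCase key "EventName".
    assert_eq!(
        serde_json::to_string(&log).unwrap(),
        r#"{"EventName":"s3:ObjectCreated:Put"}"#
    );
}
```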
@@ -312,7 +312,7 @@ where
             compress: true,
         };
 
-        let data = serde_json::to_vec(&item).map_err(|e| StoreError::Serialization(e.to_string()))?;
+        let data = serde_json::to_vec(&*item).map_err(|e| StoreError::Serialization(e.to_string()))?;
         self.write_file(&key, &data)?;
 
         Ok(key)
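The `&item` to `&*item` change forces a deref before serialization, so the pointee's `Serialize` impl is used rather than (or in the absence of) one on the wrapper type. The hunk does not show `item`'s type, so the following is only an illustrative guess at the pattern, with made-up types:

```rust
use serde::Serialize;
use std::ops::Deref;

#[derive(Serialize)]
struct Inner {
    v: u32,
}

// A wrapper with no Serialize impl of its own, only Deref.
struct Wrapper(Inner);

impl Deref for Wrapper {
    type Target = Inner;
    fn deref(&self) -> &Inner {
        &self.0
    }
}

fn main() {
    let item = Wrapper(Inner { v: 7 });
    // serde_json::to_vec(&item) would not compile here (Wrapper: !Serialize);
    // &*item derefs to Inner, which does serialize.
    let data = serde_json::to_vec(&*item).unwrap();
    assert_eq!(data, br#"{"v":7}"#);
}
```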
@@ -159,3 +159,30 @@ impl std::fmt::Display for TargetType {
         }
     }
 }
+
+/// Decodes a form-urlencoded object name to its original form.
+///
+/// This function properly handles form-urlencoded strings where spaces are
+/// represented as `+` symbols. It first replaces `+` with spaces, then
+/// performs standard percent-decoding.
+///
+/// # Arguments
+/// * `encoded` - The form-urlencoded string to decode
+///
+/// # Returns
+/// The decoded string, or an error if decoding fails
+///
+/// # Example
+/// ```
+/// use rustfs_targets::target::decode_object_name;
+///
+/// let encoded = "greeting+file+%282%29.csv";
+/// let decoded = decode_object_name(encoded).unwrap();
+/// assert_eq!(decoded, "greeting file (2).csv");
+/// ```
+pub fn decode_object_name(encoded: &str) -> Result<String, TargetError> {
+    let replaced = encoded.replace("+", " ");
+    urlencoding::decode(&replaced)
+        .map(|s| s.into_owned())
+        .map_err(|e| TargetError::Encoding(format!("Failed to decode object key: {e}")))
+}
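A detail worth noting in this design: replacing `+` before percent-decoding is safe because a literal plus sign in a form-urlencoded name arrives as `%2B` and therefore survives the replacement. A quick check of that invariant:

```rust
// "a%2Bb+c" is the form-urlencoded spelling of "a+b c":
// the literal '+' is escaped as %2B, the space is encoded as '+'.
fn main() {
    let replaced = "a%2Bb+c".replace('+', " ");
    let decoded = urlencoding::decode(&replaced).unwrap();
    assert_eq!(decoded, "a+b c");
}
```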
Some files were not shown because too many files have changed in this diff.