Compare commits

..

12 Commits

Author SHA1 Message Date
weisd 00787cbce4 todo 2025-12-10 15:36:52 +08:00
weisd 3ac004510a fix clippy 2025-12-09 17:33:08 +08:00
weisd d8f8bfa5b7 fix heal replication 2025-12-09 17:07:39 +08:00
weisd 1768e7bbdb Merge branch 'main' into feat/scan 2025-12-09 14:44:08 +08:00
houseme 3326737c01 Merge branch 'main' into feat/scan 2025-12-08 22:46:28 +08:00
weisd 91770ffd1b Merge branch 'main' into feat/scan 2025-12-08 21:25:51 +08:00
weisd 7940b69bf8 Merge branch 'main' into feat/scan 2025-12-08 21:24:36 +08:00
weisd 427d31d09c Merge branch 'main' into feat/scan 2025-12-08 17:31:56 +08:00
weisd dbdcecb9c5 optimize 2025-12-08 17:26:07 +08:00
houseme ad34f1b031 Merge branch 'main' into feat/scan 2025-12-07 21:45:36 +08:00
weisd 2a5ccd2211 fix meta bucket check 2025-12-04 17:12:01 +08:00
guojidan c43166c4c6 Enhance heal and lifecycle integration tests, improve replication han… (#944) 2025-12-04 16:14:47 +08:00
Signed-off-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: houseme <housemecn@gmail.com>
Co-authored-by: weisd <im@weisd.in>
288 changed files with 7639 additions and 20940 deletions

1
.envrc
View File

@@ -1 +0,0 @@
use flake

View File

@@ -1,103 +0,0 @@
# S3 Compatibility Tests Configuration
This directory contains the configuration for running [Ceph S3 compatibility tests](https://github.com/ceph/s3-tests) against RustFS.
## Configuration File
The `s3tests.conf` file is based on the official `s3tests.conf.SAMPLE` from the ceph/s3-tests repository. It uses environment variable substitution via `envsubst` to configure the endpoint and credentials.
### Key Configuration Points
- **Host**: Set via `${S3_HOST}` environment variable (e.g., `rustfs-single` for single-node, `lb` for multi-node)
- **Port**: 9000 (standard RustFS port)
- **Credentials**: Uses `${S3_ACCESS_KEY}` and `${S3_SECRET_KEY}` from workflow environment
- **TLS**: Disabled (`is_secure = False`)
## Test Execution Strategy
### Network Connectivity Fix
Tests run inside a Docker container on the `rustfs-net` network, which allows them to resolve and connect to the RustFS container hostnames. This fixes the "Temporary failure in name resolution" error that occurred when tests ran on the GitHub runner host.
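As a minimal illustration of this fix (container and network names are the ones used later in this document), hostname resolution only works from a container attached to the same Docker network:

```bash
# Resolves: this container joins rustfs-net, so the RustFS hostname is reachable.
docker run --rm --network rustfs-net curlimages/curl -sf http://rustfs-single:9000/health

# From the runner host itself the same hostname does not resolve,
# which is what produced the "Temporary failure in name resolution" error.
curl -sf http://rustfs-single:9000/health || echo "name resolution fails outside rustfs-net"
```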
### Performance Optimizations
1. **Parallel Execution**: Uses `pytest-xdist` with `-n 4` to run tests in parallel across 4 workers
2. **Load Distribution**: Uses `--dist=loadgroup` to distribute test groups across workers
3. **Fail-Fast**: Uses `--maxfail=50` to stop after 50 failures, saving time on catastrophic failures
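Taken together, the three options amount to a pytest invocation along these lines (a sketch; the config path and test file mirror the full example further below):

```bash
# 4 parallel workers, xdist load groups kept on a single worker, stop after 50 failures.
S3TEST_CONF=/etc/s3tests.conf pytest -v \
  -n 4 --dist=loadgroup --maxfail=50 \
  s3tests/functional/test_s3.py
```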
### Feature Filtering
Tests are filtered using pytest markers (`-m`) to skip features not yet supported by RustFS:
- `lifecycle` - Bucket lifecycle policies
- `versioning` - Object versioning
- `s3website` - Static website hosting
- `bucket_logging` - Bucket logging
- `encryption` / `sse_s3` - Server-side encryption
- `cloud_transition` / `cloud_restore` - Cloud storage transitions
- `lifecycle_expiration` / `lifecycle_transition` - Lifecycle operations
This filtering:
1. Reduces test execution time significantly (from 1+ hour to ~10-15 minutes)
2. Focuses on features RustFS currently supports
3. Avoids hundreds of expected failures
## Running Tests Locally
### Single-Node Test
```bash
# Set credentials
export S3_ACCESS_KEY=rustfsadmin
export S3_SECRET_KEY=rustfsadmin
# Start RustFS container
docker run -d --name rustfs-single \
--network rustfs-net \
-e RUSTFS_ADDRESS=0.0.0.0:9000 \
-e RUSTFS_ACCESS_KEY=$S3_ACCESS_KEY \
-e RUSTFS_SECRET_KEY=$S3_SECRET_KEY \
-e RUSTFS_VOLUMES="/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3" \
rustfs-ci
# Generate config
export S3_HOST=rustfs-single
envsubst < .github/s3tests/s3tests.conf > /tmp/s3tests.conf
# Run tests
docker run --rm \
--network rustfs-net \
-v /tmp/s3tests.conf:/etc/s3tests.conf:ro \
python:3.12-slim \
bash -c '
apt-get update -qq && apt-get install -y -qq git
git clone --depth 1 https://github.com/ceph/s3-tests.git /s3-tests
cd /s3-tests
pip install -q -r requirements.txt pytest-xdist
S3TEST_CONF=/etc/s3tests.conf pytest -v -n 4 \
s3tests/functional/test_s3.py \
-m "not lifecycle and not versioning and not s3website and not bucket_logging and not encryption and not sse_s3"
'
```
## Test Results Interpretation
- **PASSED**: Test succeeded, feature works correctly
- **FAILED**: Test failed, indicates a potential bug or incompatibility
- **ERROR**: Test setup failed (e.g., network issues, missing dependencies)
- **SKIPPED**: Test skipped due to marker filtering
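For quick triage after a run, the outcomes above can be counted from the saved pytest log (the artifact path shown is the one the workflow in this changeset writes; adjust it for local runs):

```bash
# Count passed tests, then list the most frequent failures/errors.
grep -c ' PASSED' artifacts/s3tests-single/pytest.log
grep -E ' (FAILED|ERROR)' artifacts/s3tests-single/pytest.log | sort | uniq -c | sort -rn | head
```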
## Adding New Feature Support
When adding support for a new S3 feature to RustFS:
1. Remove the corresponding marker from the filter in `.github/workflows/e2e-s3tests.yml`
2. Run the tests to verify compatibility
3. Fix any failing tests
4. Update this README to reflect the newly supported feature
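For example, once versioning support lands, dropping its marker from the workflow's default filter might look like this (an illustrative sketch; check the actual default string in `.github/workflows/e2e-s3tests.yml` before editing):

```bash
# Remove the versioning marker from the default -m expression, then verify the result.
sed -i 's/ and not versioning//' .github/workflows/e2e-s3tests.yml
grep -n 'not lifecycle' .github/workflows/e2e-s3tests.yml
```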
## References
- [Ceph S3 Tests Repository](https://github.com/ceph/s3-tests)
- [S3 API Compatibility](https://docs.aws.amazon.com/AmazonS3/latest/API/)
- [pytest-xdist Documentation](https://pytest-xdist.readthedocs.io/)

View File

@@ -1,185 +0,0 @@
# RustFS s3-tests configuration
# Based on: https://github.com/ceph/s3-tests/blob/master/s3tests.conf.SAMPLE
#
# Usage:
# Single-node: S3_HOST=rustfs-single envsubst < s3tests.conf > /tmp/s3tests.conf
# Multi-node: S3_HOST=lb envsubst < s3tests.conf > /tmp/s3tests.conf
[DEFAULT]
## this section is just used for host, port and bucket_prefix
# host set for RustFS - will be substituted via envsubst
host = ${S3_HOST}
# port for RustFS
port = 9000
## say "False" to disable TLS
is_secure = False
## say "False" to disable SSL Verify
ssl_verify = False
[fixtures]
## all the buckets created will start with this prefix;
## {random} will be filled with random characters to pad
## the prefix to 30 characters long, and avoid collisions
bucket prefix = rustfs-{random}-
# all the iam account resources (users, roles, etc) created
# will start with this name prefix
iam name prefix = s3-tests-
# all the iam account resources (users, roles, etc) created
# will start with this path prefix
iam path prefix = /s3-tests/
[s3 main]
# main display_name
display_name = RustFS Tester
# main user_id
user_id = rustfsadmin
# main email
email = tester@rustfs.local
# zonegroup api_name for bucket location
api_name = default
## main AWS access key
access_key = ${S3_ACCESS_KEY}
## main AWS secret key
secret_key = ${S3_SECRET_KEY}
## replace with key id obtained when secret is created, or delete if KMS not tested
#kms_keyid = 01234567-89ab-cdef-0123-456789abcdef
## Storage classes
#storage_classes = "LUKEWARM, FROZEN"
## Lifecycle debug interval (default: 10)
#lc_debug_interval = 20
## Restore debug interval (default: 100)
#rgw_restore_debug_interval = 60
#rgw_restore_processor_period = 60
[s3 alt]
# alt display_name
display_name = RustFS Alt Tester
## alt email
email = alt@rustfs.local
# alt user_id
user_id = rustfsalt
# alt AWS access key (must be different from s3 main for many tests)
access_key = ${S3_ALT_ACCESS_KEY}
# alt AWS secret key
secret_key = ${S3_ALT_SECRET_KEY}
#[s3 cloud]
## to run the testcases with "cloud_transition" for transition
## and "cloud_restore" for restore attribute.
## Note: the waiting time may have to be tweaked depending on
## the I/O latency to the cloud endpoint.
## host set for cloud endpoint
# host = localhost
## port set for cloud endpoint
# port = 8001
## say "False" to disable TLS
# is_secure = False
## cloud endpoint credentials
# access_key = 0555b35654ad1656d804
# secret_key = h7GhxuBLTrlhVUyxSPUKUV8r/2EI4ngqJxD7iBdBYLhwluN30JaT3Q==
## storage class configured as cloud tier on local rgw server
# cloud_storage_class = CLOUDTIER
## Below are optional -
## Above configured cloud storage class config options
# retain_head_object = false
# allow_read_through = false # change it to enable read_through
# read_through_restore_days = 2
# target_storage_class = Target_SC
# target_path = cloud-bucket
## another regular storage class to test multiple transition rules,
# storage_class = S1
[s3 tenant]
# tenant display_name
display_name = RustFS Tenant Tester
# tenant user_id
user_id = rustfstenant
# tenant AWS access key
access_key = ${S3_ACCESS_KEY}
# tenant AWS secret key
secret_key = ${S3_SECRET_KEY}
# tenant email
email = tenant@rustfs.local
# tenant name
tenant = testx
#following section needs to be added for all sts-tests
[iam]
#used for iam operations in sts-tests
#email
email = s3@rustfs.local
#user_id
user_id = rustfsiam
#access_key
access_key = ${S3_ACCESS_KEY}
#secret_key
secret_key = ${S3_SECRET_KEY}
#display_name
display_name = RustFS IAM User
# iam account root user for iam_account tests
[iam root]
access_key = ${S3_ACCESS_KEY}
secret_key = ${S3_SECRET_KEY}
user_id = RGW11111111111111111
email = account1@rustfs.local
# iam account root user in a different account than [iam root]
[iam alt root]
access_key = ${S3_ACCESS_KEY}
secret_key = ${S3_SECRET_KEY}
user_id = RGW22222222222222222
email = account2@rustfs.local
#following section needs to be added when you want to run Assume Role With Webidentity test
[webidentity]
#used for assume role with web identity test in sts-tests
#all parameters will be obtained from ceph/qa/tasks/keycloak.py
#token=<access_token>
#aud=<obtained after introspecting token>
#sub=<obtained after introspecting token>
#azp=<obtained after introspecting token>
#user_token=<access token for a user, with attribute Department=[Engineering, Marketing>]
#thumbprint=<obtained from x509 certificate>
#KC_REALM=<name of the realm>

View File

@@ -40,11 +40,11 @@ env:
jobs:
security-audit:
name: Security Audit
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Install cargo-audit
uses: taiki-e/install-action@v2
@@ -65,14 +65,14 @@ jobs:
dependency-review:
name: Dependency Review
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Dependency Review
uses: actions/dependency-review-action@v4

View File

@@ -83,7 +83,7 @@ jobs:
# Build strategy check - determine build type based on trigger
build-check:
name: Build Strategy Check
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
outputs:
should_build: ${{ steps.check.outputs.should_build }}
build_type: ${{ steps.check.outputs.build_type }}
@@ -92,7 +92,7 @@ jobs:
is_prerelease: ${{ steps.check.outputs.is_prerelease }}
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
fetch-depth: 0
@@ -167,19 +167,19 @@ jobs:
matrix:
include:
# Linux builds
- os: ubicloud-standard-2
- os: ubuntu-latest
target: x86_64-unknown-linux-musl
cross: false
platform: linux
- os: ubicloud-standard-2
- os: ubuntu-latest
target: aarch64-unknown-linux-musl
cross: true
platform: linux
- os: ubicloud-standard-2
- os: ubuntu-latest
target: x86_64-unknown-linux-gnu
cross: false
platform: linux
- os: ubicloud-standard-2
- os: ubuntu-latest
target: aarch64-unknown-linux-gnu
cross: true
platform: linux
@@ -203,7 +203,7 @@ jobs:
# platform: windows
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
fetch-depth: 0
@@ -454,7 +454,7 @@ jobs:
OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
OSS_REGION: cn-beijing
OSS_ENDPOINT: https://oss-accelerate.aliyuncs.com
OSS_ENDPOINT: https://oss-cn-beijing.aliyuncs.com
shell: bash
run: |
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
@@ -532,7 +532,7 @@ jobs:
name: Build Summary
needs: [ build-check, build-rustfs ]
if: always() && needs.build-check.outputs.should_build == 'true'
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
steps:
- name: Build completion summary
shell: bash
@@ -584,7 +584,7 @@ jobs:
name: Create GitHub Release
needs: [ build-check, build-rustfs ]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
permissions:
contents: write
outputs:
@@ -592,7 +592,7 @@ jobs:
release_url: ${{ steps.create.outputs.release_url }}
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
fetch-depth: 0
@@ -670,13 +670,13 @@ jobs:
name: Upload Release Assets
needs: [ build-check, build-rustfs, create-release ]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
permissions:
contents: write
actions: read
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Download all build artifacts
uses: actions/download-artifact@v5
@@ -751,7 +751,7 @@ jobs:
name: Update Latest Version
needs: [ build-check, upload-release-assets ]
if: startsWith(github.ref, 'refs/tags/')
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
steps:
- name: Update latest.json
env:
@@ -801,12 +801,12 @@ jobs:
name: Publish Release
needs: [ build-check, create-release, upload-release-assets ]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Update release notes and publish
env:

View File

@@ -4,7 +4,7 @@
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
@@ -62,23 +62,17 @@ on:
permissions:
contents: read
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
CARGO_BUILD_JOBS: 2
jobs:
skip-check:
name: Skip Duplicate Actions
permissions:
actions: write
contents: read
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
steps:
@@ -89,13 +83,15 @@ jobs:
concurrent_skipping: "same_content_newer"
cancel_others: true
paths_ignore: '["*.md", "docs/**", "deploy/**"]'
# Never skip release events and tag pushes
do_not_skip: '["workflow_dispatch", "schedule", "merge_group", "release", "push"]'
typos:
name: Typos
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v6
- uses: actions/checkout@v5
- uses: dtolnay/rust-toolchain@stable
- name: Typos check with custom config file
uses: crate-ci/typos@master
@@ -104,11 +100,13 @@ jobs:
name: Test and Lint
needs: skip-check
if: needs.skip-check.outputs.should_skip != 'true'
runs-on: ubicloud-standard-4
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: Delete huge unnecessary tools folder
run: rm -rf /opt/hostedtoolcache
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Setup Rust environment
uses: ./.github/actions/setup
@@ -118,9 +116,6 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Install cargo-nextest
uses: taiki-e/install-action@nextest
- name: Run tests
run: |
cargo nextest run --all --exclude e2e_test
@@ -136,16 +131,11 @@ jobs:
name: End-to-End Tests
needs: skip-check
if: needs.skip-check.outputs.should_skip != 'true'
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: Clean up previous test run
run: |
rm -rf /tmp/rustfs
rm -f /tmp/rustfs.log
uses: actions/checkout@v5
- name: Setup Rust environment
uses: ./.github/actions/setup
@@ -165,8 +155,7 @@ jobs:
- name: Build debug binary
run: |
touch rustfs/build.rs
# Limit concurrency to prevent OOM
cargo build -p rustfs --bins --jobs 2
cargo build -p rustfs --bins
- name: Run end-to-end tests
run: |

View File

@@ -72,7 +72,7 @@ jobs:
# Check if we should build Docker images
build-check:
name: Docker Build Check
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
outputs:
should_build: ${{ steps.check.outputs.should_build }}
should_push: ${{ steps.check.outputs.should_push }}
@@ -83,7 +83,7 @@ jobs:
create_latest: ${{ steps.check.outputs.create_latest }}
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
with:
fetch-depth: 0
# For workflow_run events, checkout the specific commit that triggered the workflow
@@ -162,11 +162,11 @@ jobs:
if [[ "$version" == *"alpha"* ]] || [[ "$version" == *"beta"* ]] || [[ "$version" == *"rc"* ]]; then
build_type="prerelease"
is_prerelease=true
# TODO: Temporary change - currently allows alpha versions to also create latest tags
# After the version is stable, you need to remove the following line and restore the original logic (latest is created only for stable versions)
# TODO: Temporary change - currently allows alpha versions to also create the latest tag
# Once the version is stable, remove the line below and restore the original logic (only stable versions create the latest tag)
if [[ "$version" == *"alpha"* ]]; then
create_latest=true
echo "🧪 Building Docker image for prerelease: $version (temporarily allowing creation of latest tag)"
echo "🧪 Building Docker image for prerelease: $version (临时允许创建 latest 标签)"
else
echo "🧪 Building Docker image for prerelease: $version"
fi
@@ -215,11 +215,11 @@ jobs:
v*alpha*|v*beta*|v*rc*|*alpha*|*beta*|*rc*)
build_type="prerelease"
is_prerelease=true
# TODO: Temporary change - currently allows alpha versions to also create latest tags
# After the version is stable, you need to remove the if block below and restore the original logic.
# TODO: Temporary change - currently allows alpha versions to also create the latest tag
# Once the version is stable, remove the if block below and restore the original logic
if [[ "$input_version" == *"alpha"* ]]; then
create_latest=true
echo "🧪 Building with prerelease version: $input_version (temporarily allowing creation of latest tag)"
echo "🧪 Building with prerelease version: $input_version (临时允许创建 latest 标签)"
else
echo "🧪 Building with prerelease version: $input_version"
fi
@@ -264,11 +264,11 @@ jobs:
name: Build Docker Images
needs: build-check
if: needs.build-check.outputs.should_build == 'true'
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Login to Docker Hub
uses: docker/login-action@v3
@@ -330,9 +330,9 @@ jobs:
# Add channel tags for prereleases and latest for stable
if [[ "$CREATE_LATEST" == "true" ]]; then
# TODO: Temporary change - the current alpha version will also create the latest tag
# After the version is stabilized, the logic here remains unchanged, but the upstream CREATE_LATEST setting needs to be restored.
# Stable release (and temporary alpha versions)
# TODO: Temporary change - the current alpha version will also create the latest tag
# Once the version is stable, the logic here stays unchanged, but the upstream CREATE_LATEST setting needs to be restored
# Stable release (and, temporarily, alpha versions)
TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:latest"
elif [[ "$BUILD_TYPE" == "prerelease" ]]; then
# Prerelease channel tags (alpha, beta, rc)
@@ -404,7 +404,7 @@ jobs:
name: Docker Build Summary
needs: [ build-check, build-docker ]
if: always() && needs.build-check.outputs.should_build == 'true'
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
steps:
- name: Docker build completion summary
run: |
@@ -429,10 +429,10 @@ jobs:
"prerelease")
echo "🧪 Prerelease Docker image has been built with ${VERSION} tags"
echo "⚠️ This is a prerelease image - use with caution"
# TODO: Temporary change - alpha versions currently create the latest tag
# After the version is stable, you need to restore the following prompt information
# TODO: Temporary change - alpha versions currently create the latest tag
# Once the version is stable, restore the message below
if [[ "$VERSION" == *"alpha"* ]] && [[ "$CREATE_LATEST" == "true" ]]; then
echo "🏷️ Latest tag has been created for alpha version (temporary measures)"
echo "🏷️ Latest tag has been created for alpha version (临时措施)"
else
echo "🚫 Latest tag NOT created for prerelease"
fi

View File

@@ -1,260 +0,0 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: e2e-mint
on:
push:
branches: [ main ]
paths:
- ".github/workflows/e2e-mint.yml"
- "Dockerfile.source"
- "rustfs/**"
- "crates/**"
workflow_dispatch:
inputs:
run-multi:
description: "Run multi-node Mint as well"
required: false
default: "false"
env:
ACCESS_KEY: rustfsadmin
SECRET_KEY: rustfsadmin
RUST_LOG: info
PLATFORM: linux/amd64
jobs:
mint-single:
runs-on: ubicloud-standard-2
timeout-minutes: 40
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Enable buildx
uses: docker/setup-buildx-action@v3
- name: Build RustFS image (source)
run: |
DOCKER_BUILDKIT=1 docker buildx build --load \
--platform ${PLATFORM} \
-t rustfs-ci \
-f Dockerfile.source .
- name: Create network
run: |
docker network inspect rustfs-net >/dev/null 2>&1 || docker network create rustfs-net
- name: Remove existing rustfs-single (if any)
run: docker rm -f rustfs-single >/dev/null 2>&1 || true
- name: Start single RustFS
run: |
docker run -d --name rustfs-single \
--network rustfs-net \
-e RUSTFS_ADDRESS=0.0.0.0:9000 \
-e RUSTFS_ACCESS_KEY=$ACCESS_KEY \
-e RUSTFS_SECRET_KEY=$SECRET_KEY \
-e RUSTFS_VOLUMES="/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3" \
-v /tmp/rustfs-single:/data \
rustfs-ci
- name: Wait for RustFS ready
run: |
for i in {1..30}; do
if docker exec rustfs-single curl -sf http://localhost:9000/health >/dev/null; then
exit 0
fi
sleep 2
done
echo "RustFS did not become ready" >&2
docker logs rustfs-single || true
exit 1
- name: Run Mint (single, S3-only)
run: |
mkdir -p artifacts/mint-single
docker run --rm --network rustfs-net \
--platform ${PLATFORM} \
-e SERVER_ENDPOINT=rustfs-single:9000 \
-e ACCESS_KEY=$ACCESS_KEY \
-e SECRET_KEY=$SECRET_KEY \
-e ENABLE_HTTPS=0 \
-e SERVER_REGION=us-east-1 \
-e RUN_ON_FAIL=1 \
-e MINT_MODE=core \
-v ${GITHUB_WORKSPACE}/artifacts/mint-single:/mint/log \
--entrypoint /mint/mint.sh \
minio/mint:edge \
awscli aws-sdk-go aws-sdk-java-v2 aws-sdk-php aws-sdk-ruby s3cmd s3select
- name: Collect RustFS logs
run: |
mkdir -p artifacts/rustfs-single
docker logs rustfs-single > artifacts/rustfs-single/rustfs.log || true
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: mint-single
path: artifacts/**
mint-multi:
if: github.event_name == 'workflow_dispatch' && github.event.inputs.run-multi == 'true'
needs: mint-single
runs-on: ubicloud-standard-2
timeout-minutes: 60
steps:
- name: Checkout
uses: actions/checkout@v6
- name: Enable buildx
uses: docker/setup-buildx-action@v3
- name: Build RustFS image (source)
run: |
DOCKER_BUILDKIT=1 docker buildx build --load \
--platform ${PLATFORM} \
-t rustfs-ci \
-f Dockerfile.source .
- name: Prepare cluster compose
run: |
cat > compose.yml <<'EOF'
version: '3.8'
services:
rustfs1:
image: rustfs-ci
hostname: rustfs1
networks: [rustfs-net]
environment:
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_ACCESS_KEY=${ACCESS_KEY}
- RUSTFS_SECRET_KEY=${SECRET_KEY}
- RUSTFS_VOLUMES=/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3
volumes:
- rustfs1-data:/data
rustfs2:
image: rustfs-ci
hostname: rustfs2
networks: [rustfs-net]
environment:
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_ACCESS_KEY=${ACCESS_KEY}
- RUSTFS_SECRET_KEY=${SECRET_KEY}
- RUSTFS_VOLUMES=/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3
volumes:
- rustfs2-data:/data
rustfs3:
image: rustfs-ci
hostname: rustfs3
networks: [rustfs-net]
environment:
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_ACCESS_KEY=${ACCESS_KEY}
- RUSTFS_SECRET_KEY=${SECRET_KEY}
- RUSTFS_VOLUMES=/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3
volumes:
- rustfs3-data:/data
rustfs4:
image: rustfs-ci
hostname: rustfs4
networks: [rustfs-net]
environment:
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_ACCESS_KEY=${ACCESS_KEY}
- RUSTFS_SECRET_KEY=${SECRET_KEY}
- RUSTFS_VOLUMES=/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3
volumes:
- rustfs4-data:/data
lb:
image: haproxy:2.9
hostname: lb
networks: [rustfs-net]
ports:
- "9000:9000"
volumes:
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
networks:
rustfs-net:
name: rustfs-net
volumes:
rustfs1-data:
rustfs2-data:
rustfs3-data:
rustfs4-data:
EOF
cat > haproxy.cfg <<'EOF'
defaults
mode http
timeout connect 5s
timeout client 30s
timeout server 30s
frontend fe_s3
bind *:9000
default_backend be_s3
backend be_s3
balance roundrobin
server s1 rustfs1:9000 check
server s2 rustfs2:9000 check
server s3 rustfs3:9000 check
server s4 rustfs4:9000 check
EOF
- name: Launch cluster
run: docker compose -f compose.yml up -d
- name: Wait for LB ready
run: |
for i in {1..60}; do
if docker run --rm --network rustfs-net curlimages/curl -sf http://lb:9000/health >/dev/null; then
exit 0
fi
sleep 2
done
echo "LB or backend not ready" >&2
docker compose -f compose.yml logs --tail=200 || true
exit 1
- name: Run Mint (multi, S3-only)
run: |
mkdir -p artifacts/mint-multi
docker run --rm --network rustfs-net \
--platform ${PLATFORM} \
-e SERVER_ENDPOINT=lb:9000 \
-e ACCESS_KEY=$ACCESS_KEY \
-e SECRET_KEY=$SECRET_KEY \
-e ENABLE_HTTPS=0 \
-e SERVER_REGION=us-east-1 \
-e RUN_ON_FAIL=1 \
-e MINT_MODE=core \
-v ${GITHUB_WORKSPACE}/artifacts/mint-multi:/mint/log \
--entrypoint /mint/mint.sh \
minio/mint:edge \
awscli aws-sdk-go aws-sdk-java-v2 aws-sdk-php aws-sdk-ruby s3cmd s3select
- name: Collect logs
run: |
mkdir -p artifacts/cluster
docker compose -f compose.yml logs --no-color > artifacts/cluster/cluster.log || true
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: mint-multi
path: artifacts/**

View File

@@ -1,422 +0,0 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: e2e-s3tests
on:
workflow_dispatch:
inputs:
test-mode:
description: "Test mode to run"
required: true
type: choice
default: "single"
options:
- single
- multi
xdist:
description: "Enable pytest-xdist (parallel). '0' to disable."
required: false
default: "0"
maxfail:
description: "Stop after N failures (debug friendly)"
required: false
default: "1"
markexpr:
description: "pytest -m expression (feature filters)"
required: false
default: "not lifecycle and not versioning and not s3website and not bucket_logging and not encryption"
env:
# main user
S3_ACCESS_KEY: rustfsadmin
S3_SECRET_KEY: rustfsadmin
# alt user (must be different from main for many s3-tests)
S3_ALT_ACCESS_KEY: rustfsalt
S3_ALT_SECRET_KEY: rustfsalt
S3_REGION: us-east-1
RUST_LOG: info
PLATFORM: linux/amd64
defaults:
run:
shell: bash
jobs:
s3tests-single:
if: github.event.inputs.test-mode == 'single'
runs-on: ubicloud-standard-2
timeout-minutes: 120
steps:
- uses: actions/checkout@v6
- name: Enable buildx
uses: docker/setup-buildx-action@v3
- name: Build RustFS image (source, cached)
run: |
DOCKER_BUILDKIT=1 docker buildx build --load \
--platform ${PLATFORM} \
--cache-from type=gha \
--cache-to type=gha,mode=max \
-t rustfs-ci \
-f Dockerfile.source .
- name: Create network
run: docker network inspect rustfs-net >/dev/null 2>&1 || docker network create rustfs-net
- name: Remove existing rustfs-single (if any)
run: docker rm -f rustfs-single >/dev/null 2>&1 || true
- name: Start single RustFS
run: |
docker run -d --name rustfs-single \
--network rustfs-net \
-p 9000:9000 \
-e RUSTFS_ADDRESS=0.0.0.0:9000 \
-e RUSTFS_ACCESS_KEY=$S3_ACCESS_KEY \
-e RUSTFS_SECRET_KEY=$S3_SECRET_KEY \
-e RUSTFS_VOLUMES="/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3" \
-v /tmp/rustfs-single:/data \
rustfs-ci
- name: Wait for RustFS ready
run: |
for i in {1..60}; do
if curl -sf http://127.0.0.1:9000/health >/dev/null 2>&1; then
echo "RustFS is ready"
exit 0
fi
if [ "$(docker inspect -f '{{.State.Running}}' rustfs-single 2>/dev/null)" != "true" ]; then
echo "RustFS container not running" >&2
docker logs rustfs-single || true
exit 1
fi
sleep 2
done
echo "Health check timed out" >&2
docker logs rustfs-single || true
exit 1
- name: Generate s3tests config
run: |
export S3_HOST=127.0.0.1
envsubst < .github/s3tests/s3tests.conf > s3tests.conf
- name: Provision s3-tests alt user (required by suite)
run: |
python3 -m pip install --user --upgrade pip awscurl
export PATH="$HOME/.local/bin:$PATH"
# Admin API requires AWS SigV4 signing. awscurl is used by the RustFS codebase as well.
awscurl \
--service s3 \
--region "${S3_REGION}" \
--access_key "${S3_ACCESS_KEY}" \
--secret_key "${S3_SECRET_KEY}" \
-X PUT \
-H 'Content-Type: application/json' \
-d '{"secretKey":"'"${S3_ALT_SECRET_KEY}"'","status":"enabled","policy":"readwrite"}' \
"http://127.0.0.1:9000/rustfs/admin/v3/add-user?accessKey=${S3_ALT_ACCESS_KEY}"
# Explicitly attach built-in policy via policy mapping.
# s3-tests relies on alt client being able to ListBuckets during setup cleanup.
awscurl \
--service s3 \
--region "${S3_REGION}" \
--access_key "${S3_ACCESS_KEY}" \
--secret_key "${S3_SECRET_KEY}" \
-X PUT \
"http://127.0.0.1:9000/rustfs/admin/v3/set-user-or-group-policy?policyName=readwrite&userOrGroup=${S3_ALT_ACCESS_KEY}&isGroup=false"
# Sanity check: alt user can list buckets (should not be AccessDenied).
awscurl \
--service s3 \
--region "${S3_REGION}" \
--access_key "${S3_ALT_ACCESS_KEY}" \
--secret_key "${S3_ALT_SECRET_KEY}" \
-X GET \
"http://127.0.0.1:9000/" >/dev/null
- name: Prepare s3-tests
run: |
python3 -m pip install --user --upgrade pip tox
export PATH="$HOME/.local/bin:$PATH"
git clone --depth 1 https://github.com/ceph/s3-tests.git s3-tests
- name: Run ceph s3-tests (debug friendly)
run: |
export PATH="$HOME/.local/bin:$PATH"
mkdir -p artifacts/s3tests-single
cd s3-tests
set -o pipefail
MAXFAIL="${{ github.event.inputs.maxfail }}"
if [ -z "$MAXFAIL" ]; then MAXFAIL="1"; fi
MARKEXPR="${{ github.event.inputs.markexpr }}"
if [ -z "$MARKEXPR" ]; then MARKEXPR="not lifecycle and not versioning and not s3website and not bucket_logging and not encryption"; fi
XDIST="${{ github.event.inputs.xdist }}"
if [ -z "$XDIST" ]; then XDIST="0"; fi
XDIST_ARGS=""
if [ "$XDIST" != "0" ]; then
# Add pytest-xdist to requirements.txt so tox installs it inside
# its virtualenv. Installing outside tox does NOT work.
echo "pytest-xdist" >> requirements.txt
XDIST_ARGS="-n $XDIST --dist=loadgroup"
fi
# Run tests from s3tests/functional (boto2+boto3 combined directory).
S3TEST_CONF=${GITHUB_WORKSPACE}/s3tests.conf \
tox -- \
-vv -ra --showlocals --tb=long \
--maxfail="$MAXFAIL" \
--junitxml=${GITHUB_WORKSPACE}/artifacts/s3tests-single/junit.xml \
$XDIST_ARGS \
s3tests/functional/test_s3.py \
-m "$MARKEXPR" \
2>&1 | tee ${GITHUB_WORKSPACE}/artifacts/s3tests-single/pytest.log
- name: Collect RustFS logs
if: always()
run: |
mkdir -p artifacts/rustfs-single
docker logs rustfs-single > artifacts/rustfs-single/rustfs.log 2>&1 || true
docker inspect rustfs-single > artifacts/rustfs-single/inspect.json || true
- name: Upload artifacts
if: always() && env.ACT != 'true'
uses: actions/upload-artifact@v4
with:
name: s3tests-single
path: artifacts/**
s3tests-multi:
if: github.event_name == 'workflow_dispatch' && github.event.inputs.test-mode == 'multi'
runs-on: ubicloud-standard-2
timeout-minutes: 150
steps:
- uses: actions/checkout@v6
- name: Enable buildx
uses: docker/setup-buildx-action@v3
- name: Build RustFS image (source, cached)
run: |
DOCKER_BUILDKIT=1 docker buildx build --load \
--platform ${PLATFORM} \
--cache-from type=gha \
--cache-to type=gha,mode=max \
-t rustfs-ci \
-f Dockerfile.source .
- name: Prepare cluster compose
run: |
cat > compose.yml <<'EOF'
services:
rustfs1:
image: rustfs-ci
hostname: rustfs1
networks: [rustfs-net]
environment:
RUSTFS_ADDRESS: "0.0.0.0:9000"
RUSTFS_ACCESS_KEY: ${S3_ACCESS_KEY}
RUSTFS_SECRET_KEY: ${S3_SECRET_KEY}
RUSTFS_VOLUMES: "/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3"
volumes:
- rustfs1-data:/data
rustfs2:
image: rustfs-ci
hostname: rustfs2
networks: [rustfs-net]
environment:
RUSTFS_ADDRESS: "0.0.0.0:9000"
RUSTFS_ACCESS_KEY: ${S3_ACCESS_KEY}
RUSTFS_SECRET_KEY: ${S3_SECRET_KEY}
RUSTFS_VOLUMES: "/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3"
volumes:
- rustfs2-data:/data
rustfs3:
image: rustfs-ci
hostname: rustfs3
networks: [rustfs-net]
environment:
RUSTFS_ADDRESS: "0.0.0.0:9000"
RUSTFS_ACCESS_KEY: ${S3_ACCESS_KEY}
RUSTFS_SECRET_KEY: ${S3_SECRET_KEY}
RUSTFS_VOLUMES: "/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3"
volumes:
- rustfs3-data:/data
rustfs4:
image: rustfs-ci
hostname: rustfs4
networks: [rustfs-net]
environment:
RUSTFS_ADDRESS: "0.0.0.0:9000"
RUSTFS_ACCESS_KEY: ${S3_ACCESS_KEY}
RUSTFS_SECRET_KEY: ${S3_SECRET_KEY}
RUSTFS_VOLUMES: "/data/rustfs0 /data/rustfs1 /data/rustfs2 /data/rustfs3"
volumes:
- rustfs4-data:/data
lb:
image: haproxy:2.9
hostname: lb
networks: [rustfs-net]
ports:
- "9000:9000"
volumes:
- ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
networks:
rustfs-net:
name: rustfs-net
volumes:
rustfs1-data:
rustfs2-data:
rustfs3-data:
rustfs4-data:
EOF
cat > haproxy.cfg <<'EOF'
defaults
mode http
timeout connect 5s
timeout client 30s
timeout server 30s
frontend fe_s3
bind *:9000
default_backend be_s3
backend be_s3
balance roundrobin
server s1 rustfs1:9000 check
server s2 rustfs2:9000 check
server s3 rustfs3:9000 check
server s4 rustfs4:9000 check
EOF
- name: Launch cluster
run: docker compose -f compose.yml up -d
- name: Wait for LB ready
run: |
for i in {1..90}; do
if curl -sf http://127.0.0.1:9000/health >/dev/null 2>&1; then
echo "Load balancer is ready"
exit 0
fi
sleep 2
done
echo "LB or backend not ready" >&2
docker compose -f compose.yml logs --tail=200 || true
exit 1
- name: Generate s3tests config
run: |
export S3_HOST=127.0.0.1
envsubst < .github/s3tests/s3tests.conf > s3tests.conf
- name: Provision s3-tests alt user (required by suite)
run: |
python3 -m pip install --user --upgrade pip awscurl
export PATH="$HOME/.local/bin:$PATH"
awscurl \
--service s3 \
--region "${S3_REGION}" \
--access_key "${S3_ACCESS_KEY}" \
--secret_key "${S3_SECRET_KEY}" \
-X PUT \
-H 'Content-Type: application/json' \
-d '{"secretKey":"'"${S3_ALT_SECRET_KEY}"'","status":"enabled","policy":"readwrite"}' \
"http://127.0.0.1:9000/rustfs/admin/v3/add-user?accessKey=${S3_ALT_ACCESS_KEY}"
awscurl \
--service s3 \
--region "${S3_REGION}" \
--access_key "${S3_ACCESS_KEY}" \
--secret_key "${S3_SECRET_KEY}" \
-X PUT \
"http://127.0.0.1:9000/rustfs/admin/v3/set-user-or-group-policy?policyName=readwrite&userOrGroup=${S3_ALT_ACCESS_KEY}&isGroup=false"
awscurl \
--service s3 \
--region "${S3_REGION}" \
--access_key "${S3_ALT_ACCESS_KEY}" \
--secret_key "${S3_ALT_SECRET_KEY}" \
-X GET \
"http://127.0.0.1:9000/" >/dev/null
- name: Prepare s3-tests
run: |
python3 -m pip install --user --upgrade pip tox
export PATH="$HOME/.local/bin:$PATH"
git clone --depth 1 https://github.com/ceph/s3-tests.git s3-tests
- name: Run ceph s3-tests (multi, debug friendly)
run: |
export PATH="$HOME/.local/bin:$PATH"
mkdir -p artifacts/s3tests-multi
cd s3-tests
set -o pipefail
MAXFAIL="${{ github.event.inputs.maxfail }}"
if [ -z "$MAXFAIL" ]; then MAXFAIL="1"; fi
MARKEXPR="${{ github.event.inputs.markexpr }}"
if [ -z "$MARKEXPR" ]; then MARKEXPR="not lifecycle and not versioning and not s3website and not bucket_logging and not encryption"; fi
XDIST="${{ github.event.inputs.xdist }}"
if [ -z "$XDIST" ]; then XDIST="0"; fi
XDIST_ARGS=""
if [ "$XDIST" != "0" ]; then
# Add pytest-xdist to requirements.txt so tox installs it inside
# its virtualenv. Installing outside tox does NOT work.
echo "pytest-xdist" >> requirements.txt
XDIST_ARGS="-n $XDIST --dist=loadgroup"
fi
# Run tests from s3tests/functional (boto2+boto3 combined directory).
S3TEST_CONF=${GITHUB_WORKSPACE}/s3tests.conf \
tox -- \
-vv -ra --showlocals --tb=long \
--maxfail="$MAXFAIL" \
--junitxml=${GITHUB_WORKSPACE}/artifacts/s3tests-multi/junit.xml \
$XDIST_ARGS \
s3tests/functional/test_s3.py \
-m "$MARKEXPR" \
2>&1 | tee ${GITHUB_WORKSPACE}/artifacts/s3tests-multi/pytest.log
- name: Collect logs
if: always()
run: |
mkdir -p artifacts/cluster
docker compose -f compose.yml logs --no-color > artifacts/cluster/cluster.log 2>&1 || true
- name: Upload artifacts
if: always() && env.ACT != 'true'
uses: actions/upload-artifact@v4
with:
name: s3tests-multi
path: artifacts/**

View File

@@ -1,33 +1,16 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: Publish helm chart to artifacthub
on:
workflow_run:
workflows: [ "Build and Release" ]
types: [ completed ]
permissions:
contents: read
workflows: ["Build and Release"]
types: [completed]
env:
new_version: ${{ github.event.workflow_run.head_branch }}
jobs:
build-helm-package:
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
# Only run on successful builds triggered by tag pushes (version format: x.y.z or x.y.z-suffix)
if: |
github.event.workflow_run.conclusion == 'success' &&
@@ -36,9 +19,9 @@ jobs:
steps:
- name: Checkout helm chart repo
uses: actions/checkout@v6
uses: actions/checkout@v2
- name: Replace chart app version
- name: Replace chart appversion
run: |
set -e
set -x
@@ -54,7 +37,7 @@ jobs:
cp helm/README.md helm/rustfs/
package_version=$(echo $new_version | awk -F '-' '{print $2}' | awk -F '.' '{print $NF}')
helm package ./helm/rustfs --destination helm/rustfs/ --version "0.0.$package_version"
- name: Upload helm package as artifact
uses: actions/upload-artifact@v4
with:
@@ -63,25 +46,25 @@ jobs:
retention-days: 1
publish-helm-package:
runs-on: ubicloud-standard-2
needs: [ build-helm-package ]
runs-on: ubuntu-latest
needs: [build-helm-package]
steps:
- name: Checkout helm package repo
uses: actions/checkout@v6
uses: actions/checkout@v2
with:
repository: rustfs/helm
repository: rustfs/helm
token: ${{ secrets.RUSTFS_HELM_PACKAGE }}
- name: Download helm package
uses: actions/download-artifact@v4
with:
name: helm-package
path: ./
- name: Set up helm
uses: azure/setup-helm@v4.3.0
- name: Generate index
run: helm repo index . --url https://charts.rustfs.com

View File

@@ -25,7 +25,7 @@ permissions:
jobs:
build:
runs-on: ubicloud-standard-4
runs-on: ubuntu-latest
steps:
- uses: usthe/issues-translate-action@v2.7
with:

View File

@@ -40,11 +40,11 @@ env:
jobs:
performance-profile:
name: Performance Profiling
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Setup Rust environment
uses: ./.github/actions/setup
@@ -115,11 +115,11 @@ jobs:
benchmark:
name: Benchmark Tests
runs-on: ubicloud-standard-2
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- name: Checkout repository
uses: actions/checkout@v6
uses: actions/checkout@v5
- name: Setup Rust environment
uses: ./.github/actions/setup

12
.gitignore vendored
View File

@@ -2,7 +2,6 @@
.DS_Store
.idea
.vscode
.direnv/
/test
/logs
/data
@@ -24,13 +23,4 @@ profile.json
*.go
*.pb
*.svg
deploy/logs/*.log.*
# s3-tests local artifacts (root directory only)
/s3-tests/
/s3-tests-local/
/s3tests.conf
/s3tests.conf.*
*.events
*.audit
*.snappy
deploy/logs/*.log.*

View File

@@ -1,32 +0,0 @@
# See https://pre-commit.com for more information
# See https://pre-commit.com/hooks.html for more hooks
repos:
- repo: local
hooks:
- id: cargo-fmt
name: cargo fmt
entry: cargo fmt --all --check
language: system
types: [rust]
pass_filenames: false
- id: cargo-clippy
name: cargo clippy
entry: cargo clippy --all-targets --all-features -- -D warnings
language: system
types: [rust]
pass_filenames: false
- id: cargo-check
name: cargo check
entry: cargo check --all-targets
language: system
types: [rust]
pass_filenames: false
- id: cargo-test
name: cargo test
entry: bash -c 'cargo test --workspace --exclude e2e_test && cargo test --all --doc'
language: system
types: [rust]
pass_filenames: false

71
.vscode/launch.json vendored
View File

@@ -1,31 +1,9 @@
{
// Use IntelliSense to learn about possible attributes.
// Hover to view descriptions of existing attributes.
// For more information, visit: https://go.microsoft.com/fwlink/?linkid=830387
"version": "0.2.0",
"configurations": [
{
"type": "lldb",
"request": "launch",
"name": "Debug(only) executable 'rustfs'",
"env": {
"RUST_LOG": "rustfs=info,ecstore=info,s3s=info,iam=info",
"RUSTFS_SKIP_BACKGROUND_TASK": "on"
//"RUSTFS_OBS_LOG_DIRECTORY": "./deploy/logs",
// "RUSTFS_POLICY_PLUGIN_URL":"http://localhost:8181/v1/data/rustfs/authz/allow",
// "RUSTFS_POLICY_PLUGIN_AUTH_TOKEN":"your-opa-token"
},
"program": "${workspaceFolder}/target/debug/rustfs",
"args": [
"--access-key",
"rustfsadmin",
"--secret-key",
"rustfsadmin",
"--address",
"0.0.0.0:9010",
"--server-domains",
"127.0.0.1:9010",
"./target/volume/test{1...4}"
],
"cwd": "${workspaceFolder}"
},
{
"type": "lldb",
"request": "launch",
@@ -89,8 +67,12 @@
"test",
"--no-run",
"--lib",
"--package=rustfs-ecstore"
]
"--package=ecstore"
],
"filter": {
"name": "ecstore",
"kind": "lib"
}
},
"args": [],
"cwd": "${workspaceFolder}"
@@ -99,17 +81,7 @@
"name": "Debug executable target/debug/rustfs",
"type": "lldb",
"request": "launch",
"cargo": {
"args": [
"run",
"--bin",
"rustfs",
"-j",
"1",
"--profile",
"dev"
]
},
"program": "${workspaceFolder}/target/debug/rustfs",
"args": [],
"cwd": "${workspaceFolder}",
//"stopAtEntry": false,
@@ -117,40 +89,19 @@
"env": {
"RUSTFS_ACCESS_KEY": "rustfsadmin",
"RUSTFS_SECRET_KEY": "rustfsadmin",
//"RUSTFS_VOLUMES": "./target/volume/test{1...4}",
"RUSTFS_VOLUMES": "./target/volume/test{1...4}",
"RUSTFS_ADDRESS": ":9000",
"RUSTFS_CONSOLE_ENABLE": "true",
// "RUSTFS_OBS_TRACE_ENDPOINT": "http://127.0.0.1:4318/v1/traces", // jeager otlp http endpoint
// "RUSTFS_OBS_METRIC_ENDPOINT": "http://127.0.0.1:4318/v1/metrics", // default otlp http endpoint
// "RUSTFS_OBS_LOG_ENDPOINT": "http://127.0.0.1:4318/v1/logs", // default otlp http endpoint
// "RUSTFS_COMPRESS_ENABLE": "true",
"RUSTFS_CONSOLE_ADDRESS": "127.0.0.1:9001",
"RUSTFS_OBS_LOG_DIRECTORY": "./target/logs",
"RUST_LOG":"rustfs=debug,ecstore=debug,s3s=debug,iam=debug",
},
"sourceLanguages": [
"rust"
],
},
{
"type": "lldb",
"request": "launch",
"name": "Debug test_lifecycle_transition_basic",
"cargo": {
"args": [
"test",
"-p",
"rustfs-scanner",
"--test",
"lifecycle_integration_test",
"serial_tests::test_lifecycle_transition_basic",
"-j",
"1"
]
},
"args": [],
"cwd": "${workspaceFolder}"
},
{
"name": "Debug executable target/debug/test",
"type": "lldb",

View File

@@ -2,7 +2,6 @@
## Communication Rules
- Respond to the user in Chinese; use English in all other contexts.
- Code and documentation must be written in English only. Chinese text is allowed solely as test data/fixtures when a case explicitly requires Chinese-language content for validation.
## Project Structure & Module Organization
The workspace root hosts shared dependencies in `Cargo.toml`. The service binary lives under `rustfs/src/main.rs`, while reusable crates sit in `crates/` (`crypto`, `iam`, `kms`, and `e2e_test`). Local fixtures for standalone flows reside in `test_standalone/`, deployment manifests are under `deploy/`, Docker assets sit at the root, and automation lives in `scripts/`. Skim each crate's README or module docs before contributing changes.

View File

@@ -2,8 +2,6 @@
## 📋 Code Quality Requirements
For instructions on setting up and running the local development environment, please see [Development Guide](docs/DEVELOPMENT.md).
### 🔧 Code Formatting Rules
**MANDATORY**: All code must be properly formatted before committing. This project enforces strict formatting standards to maintain code consistency and readability.
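A minimal local check sequence consistent with the Makefile targets and pre-commit hooks elsewhere in this changeset (plain cargo commands; `make pre-commit` wraps the same steps):

```bash
cargo fmt --all --check
cargo clippy --all-targets --all-features -- -D warnings
cargo check --all-targets
# Prefer cargo-nextest when available, otherwise fall back to plain cargo test.
if command -v cargo-nextest >/dev/null 2>&1; then
  cargo nextest run --all --exclude e2e_test
else
  cargo test --workspace --exclude e2e_test
fi
cargo test --all --doc
```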

658
Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -34,7 +34,6 @@ members = [
"crates/targets", # Target-specific configurations and utilities
"crates/s3select-api", # S3 Select API interface
"crates/s3select-query", # S3 Select query engine
"crates/scanner", # Scanner for data integrity checks and health monitoring
"crates/signer", # client signer
"crates/checksums", # client checksums
"crates/utils", # Utility functions and helpers
@@ -87,7 +86,6 @@ rustfs-protos = { path = "crates/protos", version = "0.0.5" }
rustfs-rio = { path = "crates/rio", version = "0.0.5" }
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.5" }
rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.5" }
rustfs-scanner = { path = "crates/scanner", version = "0.0.5" }
rustfs-signer = { path = "crates/signer", version = "0.0.5" }
rustfs-targets = { path = "crates/targets", version = "0.0.5" }
rustfs-utils = { path = "crates/utils", version = "0.0.5" }
@@ -99,20 +97,18 @@ async-channel = "2.5.0"
async-compression = { version = "0.4.19" }
async-recursion = "1.1.1"
async-trait = "0.1.89"
axum = "0.8.8"
axum-extra = "0.12.3"
axum = "0.8.7"
axum-extra = "0.12.2"
axum-server = { version = "0.8.0", features = ["tls-rustls-no-provider"], default-features = false }
futures = "0.3.31"
futures-core = "0.3.31"
futures-util = "0.3.31"
pollster = "0.4.0"
hyper = { version = "1.8.1", features = ["http2", "http1", "server"] }
hyper-rustls = { version = "0.27.7", default-features = false, features = ["native-tokio", "http1", "tls12", "logging", "http2", "ring", "webpki-roots"] }
hyper-util = { version = "0.1.19", features = ["tokio", "server-auto", "server-graceful"] }
http = "1.4.0"
http-body = "1.0.1"
http-body-util = "0.1.3"
reqwest = { version = "0.12.28", default-features = false, features = ["rustls-tls-webpki-roots", "charset", "http2", "system-proxy", "stream", "json", "blocking"] }
reqwest = { version = "0.12.25", default-features = false, features = ["rustls-tls-webpki-roots", "charset", "http2", "system-proxy", "stream", "json", "blocking"] }
socket2 = "0.6.1"
tokio = { version = "1.48.0", features = ["fs", "rt-multi-thread"] }
tokio-rustls = { version = "0.26.4", default-features = false, features = ["logging", "tls12", "ring"] }
@@ -129,31 +125,31 @@ tower-http = { version = "0.6.8", features = ["cors"] }
bytes = { version = "1.11.0", features = ["serde"] }
bytesize = "2.3.1"
byteorder = "1.5.0"
flatbuffers = "25.12.19"
flatbuffers = "25.9.23"
form_urlencoded = "1.2.2"
prost = "0.14.1"
quick-xml = "0.38.4"
rmcp = { version = "0.12.0" }
rmp = { version = "0.8.15" }
rmp-serde = { version = "1.3.1" }
rmcp = { version = "0.10.0" }
rmp = { version = "0.8.14" }
rmp-serde = { version = "1.3.0" }
serde = { version = "1.0.228", features = ["derive"] }
serde_json = { version = "1.0.147", features = ["raw_value"] }
serde_json = { version = "1.0.145", features = ["raw_value"] }
serde_urlencoded = "0.7.1"
schemars = "1.1.0"
# Cryptography and Security
aes-gcm = { version = "0.11.0-rc.2", features = ["rand_core"] }
argon2 = { version = "0.6.0-rc.5" }
argon2 = { version = "0.6.0-rc.3", features = ["std"] }
blake3 = { version = "1.8.2", features = ["rayon", "mmap"] }
chacha20poly1305 = { version = "0.11.0-rc.2" }
crc-fast = "1.6.0"
hmac = { version = "0.13.0-rc.3" }
jsonwebtoken = { version = "10.2.0", features = ["rust_crypto"] }
pbkdf2 = "0.13.0-rc.5"
pbkdf2 = "0.13.0-rc.3"
rsa = { version = "0.10.0-rc.10" }
rustls = { version = "0.23.35", features = ["ring", "logging", "std", "tls12"], default-features = false }
rustls-pemfile = "2.2.0"
rustls-pki-types = "1.13.2"
rustls-pki-types = "1.13.1"
sha1 = "0.11.0-rc.3"
sha2 = "0.11.0-rc.3"
subtle = "2.6"
@@ -166,20 +162,20 @@ time = { version = "0.3.44", features = ["std", "parsing", "formatting", "macros
# Utilities and Tools
anyhow = "1.0.100"
arc-swap = "1.8.0"
arc-swap = "1.7.1"
astral-tokio-tar = "0.5.6"
atoi = "2.0.0"
atomic_enum = "0.3.0"
aws-config = { version = "1.8.12" }
aws-credential-types = { version = "1.2.11" }
aws-sdk-s3 = { version = "1.119.0", default-features = false, features = ["sigv4a", "rustls", "rt-tokio"] }
aws-smithy-types = { version = "1.3.5" }
aws-config = { version = "1.8.11" }
aws-credential-types = { version = "1.2.10" }
aws-sdk-s3 = { version = "1.116.0", default-features = false, features = ["sigv4a", "rustls", "rt-tokio"] }
aws-smithy-types = { version = "1.3.4" }
base64 = "0.22.1"
base64-simd = "0.8.0"
brotli = "8.0.2"
cfg-if = "1.0.4"
clap = { version = "4.5.53", features = ["derive", "env"] }
const-str = { version = "0.7.1", features = ["std", "proc"] }
const-str = { version = "0.7.0", features = ["std", "proc"] }
convert_case = "0.10.0"
criterion = { version = "0.8", features = ["html_reports"] }
crossbeam-queue = "0.3.12"
@@ -190,8 +186,8 @@ faster-hex = "0.10.0"
flate2 = "1.1.5"
flexi_logger = { version = "0.31.7", features = ["trc", "dont_minimize_extra_stacks", "compress", "kv", "json"] }
glob = "0.3.3"
google-cloud-storage = "1.5.0"
google-cloud-auth = "1.3.0"
google-cloud-storage = "1.4.0"
google-cloud-auth = "1.2.0"
hashbrown = { version = "0.16.1", features = ["serde", "rayon"] }
heed = { version = "0.22.0" }
hex-simd = "0.8.0"
@@ -200,13 +196,13 @@ ipnetwork = { version = "0.21.1", features = ["serde"] }
lazy_static = "1.5.0"
libc = "0.2.178"
libsystemd = "0.7.2"
local-ip-address = "0.6.8"
local-ip-address = "0.6.6"
lz4 = "1.28.1"
matchit = "0.9.0"
md-5 = "0.11.0-rc.3"
md5 = "0.8.0"
mime_guess = "2.0.5"
moka = { version = "0.12.12", features = ["future"] }
moka = { version = "0.12.11", features = ["future"] }
netif = "0.1.6"
nix = { version = "0.30.1", features = ["fs"] }
nu-ansi-term = "0.50.3"
@@ -225,9 +221,9 @@ regex = { version = "1.12.2" }
rumqttc = { version = "0.25.1" }
rust-embed = { version = "8.9.0" }
rustc-hash = { version = "2.1.1" }
s3s = { version = "0.13.0-alpha", features = ["minio"], git = "https://github.com/s3s-project/s3s.git", branch = "main" }
s3s = { version = "0.12.0-rc.4", features = ["minio"] }
serial_test = "3.2.0"
shadow-rs = { version = "1.5.0", default-features = false }
shadow-rs = { version = "1.4.0", default-features = false }
siphasher = "1.0.1"
smallvec = { version = "1.15.1", features = ["serde"] }
smartstring = "1.0.1"
@@ -238,10 +234,10 @@ strum = { version = "0.27.2", features = ["derive"] }
sysctl = "0.7.1"
sysinfo = "0.37.2"
temp-env = "0.3.6"
tempfile = "3.24.0"
tempfile = "3.23.0"
test-case = "3.3.1"
thiserror = "2.0.17"
tracing = { version = "0.1.44" }
tracing = { version = "0.1.43" }
tracing-appender = "0.2.4"
tracing-error = "0.2.1"
tracing-opentelemetry = "0.32.0"
@@ -255,7 +251,7 @@ walkdir = "2.5.0"
wildmatch = { version = "2.6.1", features = ["serde"] }
winapi = { version = "0.3.9" }
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
zip = "7.0.0"
zip = "6.0.0"
zstd = "0.13.3"
# Observability and Metrics
@@ -281,7 +277,7 @@ pprof = { version = "0.15.0", features = ["flamegraph", "protobuf-codec"] }
[workspace.metadata.cargo-shear]
ignored = ["rustfs", "rustfs-mcp"]
ignored = ["rustfs", "rustfs-mcp", "tokio-test"]
[profile.release]
opt-level = 3

View File

@@ -81,11 +81,12 @@ ENV RUSTFS_ADDRESS=":9000" \
RUSTFS_CORS_ALLOWED_ORIGINS="*" \
RUSTFS_CONSOLE_CORS_ALLOWED_ORIGINS="*" \
RUSTFS_VOLUMES="/data" \
RUST_LOG="warn"
RUST_LOG="warn" \
RUSTFS_OBS_LOG_DIRECTORY="/logs"
EXPOSE 9000 9001
VOLUME ["/data"]
VOLUME ["/data", "/logs"]
USER rustfs

View File

@@ -39,9 +39,7 @@ RUN set -eux; \
libssl-dev \
lld \
protobuf-compiler \
flatbuffers-compiler \
gcc-aarch64-linux-gnu \
gcc-x86-64-linux-gnu; \
flatbuffers-compiler; \
rm -rf /var/lib/apt/lists/*
# Optional: cross toolchain for aarch64 (only when targeting linux/arm64)
@@ -53,18 +51,18 @@ RUN set -eux; \
rm -rf /var/lib/apt/lists/*; \
fi
# Add Rust targets for both arches (to support cross-builds on multi-arch runners)
# Add Rust targets based on TARGETPLATFORM
RUN set -eux; \
rustup target add x86_64-unknown-linux-gnu aarch64-unknown-linux-gnu; \
rustup component add rust-std-x86_64-unknown-linux-gnu rust-std-aarch64-unknown-linux-gnu
case "${TARGETPLATFORM:-linux/amd64}" in \
linux/amd64) rustup target add x86_64-unknown-linux-gnu ;; \
linux/arm64) rustup target add aarch64-unknown-linux-gnu ;; \
*) echo "Unsupported TARGETPLATFORM=${TARGETPLATFORM}" >&2; exit 1 ;; \
esac
# Cross-compilation environment (used only when targeting aarch64)
ENV CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
ENV CC_aarch64_unknown_linux_gnu=aarch64-linux-gnu-gcc
ENV CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++
ENV CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER=x86_64-linux-gnu-gcc
ENV CC_x86_64_unknown_linux_gnu=x86_64-linux-gnu-gcc
ENV CXX_x86_64_unknown_linux_gnu=x86_64-linux-gnu-g++
WORKDIR /usr/src/rustfs
@@ -74,6 +72,7 @@ COPY Cargo.toml Cargo.lock ./
# 2) workspace member manifests (adjust if workspace layout changes)
COPY rustfs/Cargo.toml rustfs/Cargo.toml
COPY crates/*/Cargo.toml crates/
COPY cli/rustfs-gui/Cargo.toml cli/rustfs-gui/Cargo.toml
# Pre-fetch dependencies for better caching
RUN --mount=type=cache,target=/usr/local/cargo/registry \
@@ -118,49 +117,6 @@ RUN --mount=type=cache,target=/usr/local/cargo/registry \
;; \
esac
# -----------------------------
# Development stage (keeps toolchain)
# -----------------------------
FROM builder AS dev
ARG BUILD_DATE
ARG VCS_REF
LABEL name="RustFS (dev-source)" \
maintainer="RustFS Team" \
build-date="${BUILD_DATE}" \
vcs-ref="${VCS_REF}" \
description="RustFS - local development with Rust toolchain."
# Install runtime dependencies that might be missing in partial builder
# (builder already has build-essential, lld, etc.)
WORKDIR /app
ENV CARGO_INCREMENTAL=1
# Ensure we have the same default env vars available
ENV RUSTFS_ADDRESS=":9000" \
RUSTFS_ACCESS_KEY="rustfsadmin" \
RUSTFS_SECRET_KEY="rustfsadmin" \
RUSTFS_CONSOLE_ENABLE="true" \
RUSTFS_VOLUMES="/data" \
RUST_LOG="warn" \
RUSTFS_OBS_LOG_DIRECTORY="/logs" \
RUSTFS_USERNAME="rustfs" \
RUSTFS_GROUPNAME="rustfs" \
RUSTFS_UID="1000" \
RUSTFS_GID="1000"
# Note: We don't COPY source here because we expect it to be mounted at /app
# We rely on cargo run to build and run
EXPOSE 9000 9001
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
CMD ["cargo", "run", "--bin", "rustfs", "--"]
# -----------------------------
# Runtime stage (Ubuntu minimal)
# -----------------------------
@@ -210,13 +166,14 @@ ENV RUSTFS_ADDRESS=":9000" \
RUSTFS_CONSOLE_ENABLE="true" \
RUSTFS_VOLUMES="/data" \
RUST_LOG="warn" \
RUSTFS_OBS_LOG_DIRECTORY="/logs" \
RUSTFS_USERNAME="rustfs" \
RUSTFS_GROUPNAME="rustfs" \
RUSTFS_UID="1000" \
RUSTFS_GID="1000"
EXPOSE 9000
VOLUME ["/data"]
VOLUME ["/data", "/logs"]
# Keep root here; entrypoint will drop privileges using chroot --userspec
ENTRYPOINT ["/entrypoint.sh"]

View File

@@ -9,53 +9,30 @@ CONTAINER_NAME ?= rustfs-dev
DOCKERFILE_PRODUCTION = Dockerfile
DOCKERFILE_SOURCE = Dockerfile.source
# Fatal check
# Checks all required dependencies and exits with error if not found
# (e.g., cargo, rustfmt)
check-%:
@command -v $* >/dev/null 2>&1 || { \
echo >&2 "❌ '$*' is not installed."; \
exit 1; \
}
# Warning-only check
# Checks for optional dependencies and issues a warning if not found
# (e.g., cargo-nextest for enhanced testing)
warn-%:
@command -v $* >/dev/null 2>&1 || { \
echo >&2 "⚠️ '$*' is not installed."; \
}
# For checking dependencies use check-<dep-name> or warn-<dep-name>
.PHONY: core-deps fmt-deps test-deps
core-deps: check-cargo
fmt-deps: check-rustfmt
test-deps: warn-cargo-nextest
# Code quality and formatting targets
.PHONY: fmt
fmt: core-deps fmt-deps
fmt:
@echo "🔧 Formatting code..."
cargo fmt --all
.PHONY: fmt-check
fmt-check: core-deps fmt-deps
fmt-check:
@echo "📝 Checking code formatting..."
cargo fmt --all --check
.PHONY: clippy
clippy: core-deps
clippy:
@echo "🔍 Running clippy checks..."
cargo clippy --fix --allow-dirty
cargo clippy --all-targets --all-features -- -D warnings
.PHONY: check
check: core-deps
check:
@echo "🔨 Running compilation check..."
cargo check --all-targets
.PHONY: test
test: core-deps test-deps
test:
@echo "🧪 Running tests..."
@if command -v cargo-nextest >/dev/null 2>&1; then \
cargo nextest run --all --exclude e2e_test; \
@@ -65,16 +42,16 @@ test: core-deps test-deps
fi
cargo test --all --doc
.PHONY: pre-commit
pre-commit: fmt clippy check test
@echo "✅ All pre-commit checks passed!"
.PHONY: setup-hooks
setup-hooks:
@echo "🔧 Setting up git hooks..."
chmod +x .git/hooks/pre-commit
@echo "✅ Git hooks setup complete!"
.PHONY: pre-commit
pre-commit: fmt clippy check test
@echo "✅ All pre-commit checks passed!"
.PHONY: e2e-server
e2e-server:
sh $(shell pwd)/scripts/run.sh
@@ -209,6 +186,8 @@ docker-dev-push:
--push \
.
# Local production builds using direct buildx (alternative to docker-buildx.sh)
.PHONY: docker-buildx-production-local
docker-buildx-production-local:
@@ -268,6 +247,8 @@ dev-env-stop:
.PHONY: dev-env-restart
dev-env-restart: dev-env-stop dev-env-start
# ========================================================================================
# Build Utilities
# ========================================================================================

View File

@@ -103,7 +103,7 @@ The RustFS container runs as a non-root user `rustfs` (UID `10001`). If you run
docker run -d -p 9000:9000 -p 9001:9001 -v $(pwd)/data:/data -v $(pwd)/logs:/logs rustfs/rustfs:latest
# Using specific version
docker run -d -p 9000:9000 -p 9001:9001 -v $(pwd)/data:/data -v $(pwd)/logs:/logs rustfs/rustfs:1.0.0-alpha.76
docker run -d -p 9000:9000 -p 9001:9001 -v $(pwd)/data:/data -v $(pwd)/logs:/logs rustfs/rustfs:1.0.0.alpha.68
```
You can also use Docker Compose. Using the `docker-compose.yml` file in the root directory:
@@ -153,28 +153,11 @@ make help-docker # Show all Docker-related commands
Follow the instructions in the [Helm Chart README](https://charts.rustfs.com/) to install RustFS on a Kubernetes cluster.
### 5\. Nix Flake (Option 5)
If you have [Nix with flakes enabled](https://nixos.wiki/wiki/Flakes#Enable_flakes):
```bash
# Run directly without installing
nix run github:rustfs/rustfs
# Build the binary
nix build github:rustfs/rustfs
./result/bin/rustfs --help
# Or from a local checkout
nix build
nix run
```
-----
### Accessing RustFS
5. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console.
5. **Access the Console**: Open your web browser and navigate to `http://localhost:9000` to access the RustFS console.
* Default credentials: `rustfsadmin` / `rustfsadmin`
6. **Create a Bucket**: Use the console to create a new bucket for your objects.
7. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs/clients to interact with your RustFS instance.
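
(Illustration, not part of the diff.) As a minimal sketch of the last step, the snippet below talks to the endpoint and default credentials shown above through the S3 API. It assumes the `aws-sdk-s3` and `tokio` crates; the bucket name and region are placeholders, not values taken from the README.

```rust
use aws_sdk_s3::{
    config::{BehaviorVersion, Credentials, Region},
    primitives::ByteStream,
    Client, Config,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Static credentials matching the default rustfsadmin / rustfsadmin account.
    let creds = Credentials::new("rustfsadmin", "rustfsadmin", None, None, "static");
    let config = Config::builder()
        .behavior_version(BehaviorVersion::latest())
        .region(Region::new("us-east-1")) // placeholder; required by the SDK builder
        .endpoint_url("http://localhost:9000")
        .credentials_provider(creds)
        .force_path_style(true) // path-style addressing for a local endpoint
        .build();
    let client = Client::from_conf(config);

    // Create a bucket and upload a small object through the S3 API.
    client.create_bucket().bucket("demo-bucket").send().await?;
    client
        .put_object()
        .bucket("demo-bucket")
        .key("hello.txt")
        .body(ByteStream::from_static(b"hello from rustfs"))
        .send()
        .await?;
    Ok(())
}
```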

View File

@@ -2,7 +2,8 @@
## Supported Versions
Security updates are provided for the latest released version of this project.
Use this section to tell people about which versions of your project are
currently being supported with security updates.
| Version | Supported |
| ------- | ------------------ |
@@ -10,10 +11,8 @@ Security updates are provided for the latest released version of this project.
## Reporting a Vulnerability
Please report security vulnerabilities **privately** via GitHub Security Advisories:
Use this section to tell people how to report a vulnerability.
https://github.com/rustfs/rustfs/security/advisories/new
Do **not** open a public issue for security-sensitive bugs.
You can expect an initial response within a reasonable timeframe. Further updates will be provided as the report is triaged.
Tell them where to go, how often they can expect to get an update on a
reported vulnerability, what to expect if the vulnerability is accepted or
declined, etc.

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
# RustFS Binary Build Script
# This script compiles RustFS binaries for different platforms and architectures

View File

@@ -183,7 +183,7 @@ impl HealChannelProcessor {
HealType::Object {
bucket: request.bucket.clone(),
object: prefix.clone(),
version_id: request.object_version_id.clone(),
version_id: None,
}
} else {
HealType::Bucket {
@@ -366,7 +366,6 @@ mod tests {
id: "test-id".to_string(),
bucket: "test-bucket".to_string(),
object_prefix: None,
object_version_id: None,
disk: None,
priority: HealChannelPriority::Normal,
scan_mode: None,
@@ -395,7 +394,6 @@ mod tests {
id: "test-id".to_string(),
bucket: "test-bucket".to_string(),
object_prefix: Some("test-object".to_string()),
object_version_id: None,
disk: None,
priority: HealChannelPriority::High,
scan_mode: Some(HealScanMode::Deep),
@@ -427,7 +425,6 @@ mod tests {
id: "test-id".to_string(),
bucket: "test-bucket".to_string(),
object_prefix: None,
object_version_id: None,
disk: Some("pool_0_set_1".to_string()),
priority: HealChannelPriority::Critical,
scan_mode: None,
@@ -456,7 +453,6 @@ mod tests {
id: "test-id".to_string(),
bucket: "test-bucket".to_string(),
object_prefix: None,
object_version_id: None,
disk: Some("invalid-disk-id".to_string()),
priority: HealChannelPriority::Normal,
scan_mode: None,
@@ -492,7 +488,6 @@ mod tests {
id: "test-id".to_string(),
bucket: "test-bucket".to_string(),
object_prefix: None,
object_version_id: None,
disk: None,
priority: channel_priority,
scan_mode: None,
@@ -521,7 +516,6 @@ mod tests {
id: "test-id".to_string(),
bucket: "test-bucket".to_string(),
object_prefix: None,
object_version_id: None,
disk: None,
priority: HealChannelPriority::Normal,
scan_mode: None,
@@ -551,7 +545,6 @@ mod tests {
id: "test-id".to_string(),
bucket: "test-bucket".to_string(),
object_prefix: Some("".to_string()), // Empty prefix should be treated as bucket heal
object_version_id: None,
disk: None,
priority: HealChannelPriority::Normal,
scan_mode: None,

View File

@@ -31,7 +31,7 @@ use tokio::{
time::interval,
};
use tokio_util::sync::CancellationToken;
use tracing::{error, info, warn};
use tracing::{debug, error, info, warn};
/// Priority queue wrapper for heal requests
/// Uses BinaryHeap for priority-based ordering while maintaining FIFO for same-priority items
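
(Illustration, not part of the diff.) The doc comment above describes priority ordering with FIFO tie-breaking. A standalone sketch of that pattern uses a `BinaryHeap` plus a monotonically increasing sequence number; the type and field names here are illustrative, not RustFS's actual queue.

```rust
use std::cmp::{Ordering, Reverse};
use std::collections::BinaryHeap;

// Higher `priority` pops first; among equal priorities, the earlier (smaller)
// sequence number compares greater via Reverse, giving FIFO order.
#[derive(PartialEq, Eq)]
struct QueuedItem {
    priority: u8,
    seq: Reverse<u64>,
}

impl Ord for QueuedItem {
    fn cmp(&self, other: &Self) -> Ordering {
        self.priority
            .cmp(&other.priority)
            .then_with(|| self.seq.cmp(&other.seq))
    }
}

impl PartialOrd for QueuedItem {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

fn main() {
    let mut heap = BinaryHeap::new();
    for (i, prio) in [(0u64, 1u8), (1, 2), (2, 2)] {
        heap.push(QueuedItem { priority: prio, seq: Reverse(i) });
    }
    // Priority 2 wins, and of the two priority-2 items the earlier one (seq 1) pops first.
    assert_eq!(heap.pop().map(|item| item.seq.0), Some(1));
}
```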
@@ -418,7 +418,12 @@ impl HealManager {
/// Get statistics
pub async fn get_statistics(&self) -> HealStatistics {
self.statistics.read().await.clone()
let stats = self.statistics.read().await.clone();
debug!(
"HealManager stats snapshot: total_tasks={}, successful_tasks={}, failed_tasks={}, running_tasks={}",
stats.total_tasks, stats.successful_tasks, stats.failed_tasks, stats.running_tasks
);
stats
}
/// Get active task count
@@ -468,17 +473,14 @@ impl HealManager {
let active_heals = self.active_heals.clone();
let cancel_token = self.cancel_token.clone();
let storage = self.storage.clone();
let mut duration = {
let config = config.read().await;
config.heal_interval
};
if duration < Duration::from_secs(1) {
duration = Duration::from_secs(1);
}
info!("start_auto_disk_scanner: Starting auto disk scanner with interval: {:?}", duration);
info!(
"start_auto_disk_scanner: Starting auto disk scanner with interval: {:?}",
config.read().await.heal_interval
);
tokio::spawn(async move {
let mut interval = interval(duration);
let mut interval = interval(config.read().await.heal_interval);
loop {
tokio::select! {

File diff suppressed because it is too large

View File

@@ -12,7 +12,10 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::Result;
use crate::{
Result,
scanner::metrics::{BucketMetrics, MetricsCollector},
};
use rustfs_common::data_usage::SizeSummary;
use rustfs_common::metrics::IlmAction;
use rustfs_ecstore::bucket::{
@@ -27,15 +30,26 @@ use rustfs_ecstore::bucket::{
versioning::VersioningApi,
versioning_sys::BucketVersioningSys,
};
use rustfs_ecstore::store_api::{ObjectInfo, ObjectToDelete};
use rustfs_filemeta::FileInfo;
use s3s::dto::{BucketLifecycleConfiguration as LifecycleConfig, VersioningConfiguration};
use std::sync::{
Arc,
atomic::{AtomicU64, Ordering},
use rustfs_ecstore::bucket::{
replication::{GLOBAL_REPLICATION_POOL, ReplicationConfig, get_heal_replicate_object_info},
utils::is_meta_bucketname,
};
use time::OffsetDateTime;
use tracing::info;
use rustfs_ecstore::store_api::{ObjectInfo, ObjectToDelete};
use rustfs_filemeta::{FileInfo, ReplicationStatusType, replication_statuses_map};
use rustfs_utils::http::headers::{AMZ_BUCKET_REPLICATION_STATUS, HeaderExt, VERSION_PURGE_STATUS_KEY};
use s3s::dto::DefaultRetention;
use s3s::dto::{BucketLifecycleConfiguration as LifecycleConfig, VersioningConfiguration};
use std::{
collections::HashMap,
sync::{
Arc,
atomic::{AtomicU64, Ordering},
},
time::Duration as StdDuration,
};
use time::{Duration as TimeDuration, OffsetDateTime};
use tokio::sync::Mutex;
use tracing::{debug, info, warn};
static SCANNER_EXCESS_OBJECT_VERSIONS: AtomicU64 = AtomicU64::new(100);
static SCANNER_EXCESS_OBJECT_VERSIONS_TOTAL_SIZE: AtomicU64 = AtomicU64::new(1024 * 1024 * 1024 * 1024); // 1 TB
@@ -44,21 +58,94 @@ static SCANNER_EXCESS_OBJECT_VERSIONS_TOTAL_SIZE: AtomicU64 = AtomicU64::new(102
pub struct ScannerItem {
pub bucket: String,
pub object_name: String,
pub replication: Option<ReplicationConfig>,
pub lifecycle: Option<Arc<LifecycleConfig>>,
pub versioning: Option<Arc<VersioningConfiguration>>,
pub object_lock_config: Option<DefaultRetention>,
pub replication_pending_grace: StdDuration,
pub replication_metrics: Option<ReplicationMetricsHandle>,
}
#[derive(Clone)]
pub struct ReplicationMetricsHandle {
inner: Arc<ReplicationMetricsInner>,
}
struct ReplicationMetricsInner {
metrics: Arc<MetricsCollector>,
bucket_metrics: Arc<Mutex<HashMap<String, BucketMetrics>>>,
}
impl ReplicationMetricsHandle {
pub fn new(metrics: Arc<MetricsCollector>, bucket_metrics: Arc<Mutex<HashMap<String, BucketMetrics>>>) -> Self {
Self {
inner: Arc::new(ReplicationMetricsInner { metrics, bucket_metrics }),
}
}
pub async fn record_status(&self, bucket: &str, status: ReplicationStatusType, lagging: bool) {
match status {
ReplicationStatusType::Pending => self.inner.metrics.increment_replication_pending_objects(1),
ReplicationStatusType::Failed => self.inner.metrics.increment_replication_failed_objects(1),
_ => {}
}
if lagging {
self.inner.metrics.increment_replication_lagging_objects(1);
}
let mut guard = self.inner.bucket_metrics.lock().await;
let entry = guard.entry(bucket.to_string()).or_insert_with(|| BucketMetrics {
bucket: bucket.to_string(),
..Default::default()
});
match status {
ReplicationStatusType::Pending => {
entry.replication_pending = entry.replication_pending.saturating_add(1);
}
ReplicationStatusType::Failed => {
entry.replication_failed = entry.replication_failed.saturating_add(1);
}
_ => {}
}
if lagging {
entry.replication_lagging = entry.replication_lagging.saturating_add(1);
}
}
pub async fn record_task_submission(&self, bucket: &str) {
self.inner.metrics.increment_replication_tasks_queued(1);
let mut guard = self.inner.bucket_metrics.lock().await;
let entry = guard.entry(bucket.to_string()).or_insert_with(|| BucketMetrics {
bucket: bucket.to_string(),
..Default::default()
});
entry.replication_tasks_queued = entry.replication_tasks_queued.saturating_add(1);
}
}
impl ScannerItem {
const INTERNAL_REPLICATION_STATUS_KEY: &'static str = "x-rustfs-internal-replication-status";
pub fn new(
bucket: String,
replication: Option<ReplicationConfig>,
lifecycle: Option<Arc<LifecycleConfig>>,
versioning: Option<Arc<VersioningConfiguration>>,
object_lock_config: Option<DefaultRetention>,
replication_pending_grace: StdDuration,
replication_metrics: Option<ReplicationMetricsHandle>,
) -> Self {
Self {
bucket,
object_name: "".to_string(),
replication,
lifecycle,
versioning,
object_lock_config,
replication_pending_grace,
replication_metrics,
}
}
@@ -164,6 +251,23 @@ impl ScannerItem {
}
pub async fn apply_actions(&mut self, oi: &ObjectInfo, _size_s: &mut SizeSummary) -> (bool, i64) {
let object_locked = self.is_object_lock_protected(oi);
if let Err(err) = self.heal_replication(oi).await {
warn!(
"heal_replication failed for {}/{} (version {:?}): {}",
oi.bucket, oi.name, oi.version_id, err
);
}
if object_locked {
info!(
"apply_actions: Skipping lifecycle for {}/{} because object lock retention or legal hold is active",
oi.bucket, oi.name
);
return (false, oi.size);
}
let (action, _size) = self.apply_lifecycle(oi).await;
info!(
@@ -174,16 +278,6 @@ impl ScannerItem {
oi.user_defined.clone()
);
// Create a mutable clone if you need to modify fields
/*let mut oi = oi.clone();
oi.replication_status = ReplicationStatusType::from(
oi.user_defined
.get("x-amz-bucket-replication-status")
.unwrap_or(&"PENDING".to_string()),
);
info!("apply status is: {:?}", oi.replication_status);
self.heal_replication(&oi, _size_s).await;*/
if action.delete_all() {
return (true, 0);
}
@@ -200,7 +294,7 @@ impl ScannerItem {
info!("apply_lifecycle: Lifecycle config exists for object: {}", oi.name);
let (olcfg, rcfg) = if self.bucket != ".minio.sys" {
let (olcfg, rcfg) = if !is_meta_bucketname(&self.bucket) {
(
get_object_lock_config(&self.bucket).await.ok(),
None, // FIXME: replication config
@@ -266,4 +360,202 @@ impl ScannerItem {
(lc_evt.action, new_size)
}
fn is_object_lock_protected(&self, oi: &ObjectInfo) -> bool {
enforce_retention_for_deletion(oi)
}
async fn heal_replication(&self, oi: &ObjectInfo) -> Result<()> {
warn!("heal_replication: healing replication for {}/{}", oi.bucket, oi.name);
warn!("heal_replication: ObjectInfo oi: {:?}", oi);
let enriched = Self::hydrate_replication_metadata(oi);
let pending_lagging = self.is_pending_lagging(&enriched);
if let Some(handle) = &self.replication_metrics {
handle
.record_status(&self.bucket, enriched.replication_status.clone(), pending_lagging)
.await;
}
debug!(
"heal_replication: evaluating {}/{} with status {:?} and internal {:?}",
enriched.bucket, enriched.name, enriched.replication_status, enriched.replication_status_internal
);
// if !self.needs_replication_heal(&enriched, pending_lagging) {
// return Ok(());
// }
// let replication_cfg = match get_replication_config(&self.bucket).await {
// Ok((cfg, _)) => Some(cfg),
// Err(err) => {
// debug!("heal_replication: failed to fetch replication config for bucket {}: {}", self.bucket, err);
// None
// }
// };
// if replication_cfg.is_none() {
// return Ok(());
// }
// let bucket_targets = match get_bucket_targets_config(&self.bucket).await {
// Ok(targets) => Some(targets),
// Err(err) => {
// debug!("heal_replication: no bucket targets for bucket {}: {}", self.bucket, err);
// None
// }
// };
// let replication_cfg = ReplicationConfig::new(replication_cfg, bucket_targets);
let replication_cfg = self.replication.clone().unwrap_or_default();
if replication_cfg.config.is_none() && replication_cfg.remotes.is_none() {
debug!("heal_replication: no replication config for {}/{}", enriched.bucket, enriched.name);
return Ok(());
}
let replicate_info = get_heal_replicate_object_info(&enriched, &replication_cfg).await;
let should_replicate = replicate_info.dsc.replicate_any()
|| matches!(
enriched.replication_status,
ReplicationStatusType::Failed | ReplicationStatusType::Pending
);
if !should_replicate {
debug!("heal_replication: no actionable targets for {}/{}", enriched.bucket, enriched.name);
return Ok(());
}
if let Some(pool) = GLOBAL_REPLICATION_POOL.get() {
pool.queue_replica_task(replicate_info).await;
if let Some(handle) = &self.replication_metrics {
handle.record_task_submission(&self.bucket).await;
}
warn!("heal_replication: queued replication heal task for {}/{}", enriched.bucket, enriched.name);
} else {
warn!(
"heal_replication: GLOBAL_REPLICATION_POOL not initialized, skipping heal for {}/{}",
enriched.bucket, enriched.name
);
}
Ok(())
}
#[allow(dead_code)]
fn needs_replication_heal(&self, oi: &ObjectInfo, pending_lagging: bool) -> bool {
if matches!(oi.replication_status, ReplicationStatusType::Failed) {
return true;
}
if pending_lagging && matches!(oi.replication_status, ReplicationStatusType::Pending) {
return true;
}
if let Some(raw) = oi.replication_status_internal.as_ref() {
let statuses = replication_statuses_map(raw);
if statuses
.values()
.any(|status| matches!(status, ReplicationStatusType::Failed))
{
return true;
}
if pending_lagging
&& statuses
.values()
.any(|status| matches!(status, ReplicationStatusType::Pending))
{
return true;
}
}
false
}
fn hydrate_replication_metadata(oi: &ObjectInfo) -> ObjectInfo {
let mut enriched = oi.clone();
if enriched.replication_status.is_empty() {
if let Some(status) = enriched.user_defined.lookup(AMZ_BUCKET_REPLICATION_STATUS) {
enriched.replication_status = ReplicationStatusType::from(status);
}
}
if enriched.replication_status_internal.is_none() {
if let Some(raw) = enriched.user_defined.lookup(Self::INTERNAL_REPLICATION_STATUS_KEY) {
if !raw.is_empty() {
enriched.replication_status_internal = Some(raw.to_string());
}
}
}
if enriched.version_purge_status_internal.is_none() {
if let Some(raw) = enriched.user_defined.lookup(VERSION_PURGE_STATUS_KEY) {
if !raw.is_empty() {
enriched.version_purge_status_internal = Some(raw.to_string());
}
}
}
enriched
}
fn is_pending_lagging(&self, oi: &ObjectInfo) -> bool {
if !matches!(oi.replication_status, ReplicationStatusType::Pending) {
return false;
}
let Some(mod_time) = oi.mod_time else {
return false;
};
let grace = TimeDuration::try_from(self.replication_pending_grace).unwrap_or_else(|_| {
warn!(
"replication_pending_grace is invalid, using default value: 0 seconds, grace: {:?}",
self.replication_pending_grace
);
TimeDuration::seconds(0)
});
if grace.is_zero() {
return true;
}
let elapsed = OffsetDateTime::now_utc() - mod_time;
elapsed >= grace
}
}
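
(Illustration, not part of the diff.) A standalone sketch of the grace-period check performed by `is_pending_lagging` above, keeping the same zero-grace behavior; the function and variable names are placeholders.

```rust
use std::time::Duration as StdDuration;
use time::{Duration as TimeDuration, OffsetDateTime};

// An object still Pending counts as lagging once its modification time is
// older than the grace window; a zero grace window treats every pending
// object as lagging, mirroring the check above.
fn is_lagging(mod_time: OffsetDateTime, grace: StdDuration) -> bool {
    let grace = TimeDuration::try_from(grace).unwrap_or(TimeDuration::ZERO);
    if grace.is_zero() {
        return true;
    }
    OffsetDateTime::now_utc() - mod_time >= grace
}

fn main() {
    let old = OffsetDateTime::now_utc() - TimeDuration::minutes(10);
    assert!(is_lagging(old, StdDuration::from_secs(300)));
}
```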
#[cfg(test)]
mod tests {
use super::*;
#[tokio::test]
async fn replication_metrics_handle_tracks_counts() {
let metrics = Arc::new(MetricsCollector::new());
let bucket_metrics = Arc::new(Mutex::new(HashMap::new()));
let handle = ReplicationMetricsHandle::new(metrics.clone(), bucket_metrics.clone());
handle
.record_status("test-bucket", ReplicationStatusType::Pending, true)
.await;
handle
.record_status("test-bucket", ReplicationStatusType::Failed, false)
.await;
handle.record_task_submission("test-bucket").await;
let snapshot = metrics.get_metrics();
assert_eq!(snapshot.replication_pending_objects, 1);
assert_eq!(snapshot.replication_failed_objects, 1);
assert_eq!(snapshot.replication_lagging_objects, 1);
assert_eq!(snapshot.replication_tasks_queued, 1);
let guard = bucket_metrics.lock().await;
let bucket_entry = guard.get("test-bucket").expect("bucket metrics exists");
assert_eq!(bucket_entry.replication_pending, 1);
assert_eq!(bucket_entry.replication_failed, 1);
assert_eq!(bucket_entry.replication_lagging, 1);
assert_eq!(bucket_entry.replication_tasks_queued, 1);
}
}

View File

@@ -62,6 +62,7 @@ struct DiskScanResult {
pub struct LocalObjectRecord {
pub usage: LocalObjectUsage,
pub object_info: Option<rustfs_ecstore::store_api::ObjectInfo>,
pub file_info: Option<FileInfo>,
}
#[derive(Debug, Default)]
@@ -84,9 +85,6 @@ pub async fn scan_and_persist_local_usage(store: Arc<ECStore>) -> Result<LocalSc
guard.clone()
};
// Use the first local online disk in the set to avoid missing stats when disk 0 is down
let mut picked = false;
for (disk_index, disk_opt) in disks.into_iter().enumerate() {
let Some(disk) = disk_opt else {
continue;
@@ -96,17 +94,11 @@ pub async fn scan_and_persist_local_usage(store: Arc<ECStore>) -> Result<LocalSc
continue;
}
if picked {
// Count objects once by scanning only disk index zero from each set.
if disk_index != 0 {
continue;
}
// Skip offline disks; keep looking for an online candidate
if !disk.is_online().await {
continue;
}
picked = true;
let disk_id = match disk.get_disk_id().await.map_err(Error::from)? {
Some(id) => id.to_string(),
None => {
@@ -232,9 +224,11 @@ fn scan_disk_blocking(root: PathBuf, meta: LocalUsageSnapshotMeta, mut state: In
record.usage.last_modified_ns = mtime_ns;
state.objects.insert(rel_path.clone(), record.usage.clone());
emitted.insert(rel_path.clone());
warn!("compute_object_usage: record: {:?}", record.clone());
objects_by_bucket.entry(record.usage.bucket.clone()).or_default().push(record);
}
Ok(None) => {
warn!("compute_object_usage: None, rel_path: {:?}", rel_path);
state.objects.remove(&rel_path);
}
Err(err) => {
@@ -249,24 +243,27 @@ fn scan_disk_blocking(root: PathBuf, meta: LocalUsageSnapshotMeta, mut state: In
warn!("Failed to read xl.meta {:?}: {}", xl_path, err);
}
}
} else {
warn!("should_parse: false, rel_path: {:?}", rel_path);
}
}
state.objects.retain(|key, _| visited.contains(key));
state.last_scan_ns = Some(now_ns);
for (key, usage) in &state.objects {
if emitted.contains(key) {
continue;
}
objects_by_bucket
.entry(usage.bucket.clone())
.or_default()
.push(LocalObjectRecord {
usage: usage.clone(),
object_info: None,
});
}
// for (key, usage) in &state.objects {
// if emitted.contains(key) {
// continue;
// }
// objects_by_bucket
// .entry(usage.bucket.clone())
// .or_default()
// .push(LocalObjectRecord {
// usage: usage.clone(),
// object_info: None,
// file_info: None,
// });
// }
let snapshot = build_snapshot(meta, &state.objects, now);
status.snapshot_exists = true;
@@ -328,6 +325,7 @@ fn compute_object_usage(bucket: &str, object: &str, file_meta: &FileMeta) -> Res
let versioned = fi.version_id.is_some();
ObjectInfo::from_file_info(fi, bucket, object, versioned)
});
let file_info = latest_file_info.clone();
Ok(Some(LocalObjectRecord {
usage: LocalObjectUsage {
@@ -340,6 +338,7 @@ fn compute_object_usage(bucket: &str, object: &str, file_meta: &FileMeta) -> Res
has_live_object,
},
object_info,
file_info,
}))
}

View File

@@ -45,6 +45,14 @@ pub struct ScannerMetrics {
pub healthy_objects: u64,
/// Total corrupted objects found
pub corrupted_objects: u64,
/// Replication heal tasks queued
pub replication_tasks_queued: u64,
/// Objects observed with pending replication
pub replication_pending_objects: u64,
/// Objects observed with failed replication
pub replication_failed_objects: u64,
/// Objects with replication pending longer than grace period
pub replication_lagging_objects: u64,
/// Last scan activity time
pub last_activity: Option<SystemTime>,
/// Current scan cycle
@@ -86,6 +94,14 @@ pub struct BucketMetrics {
pub heal_tasks_completed: u64,
/// Heal tasks failed for this bucket
pub heal_tasks_failed: u64,
/// Objects observed with pending replication status
pub replication_pending: u64,
/// Objects observed with failed replication status
pub replication_failed: u64,
/// Objects exceeding replication grace period
pub replication_lagging: u64,
/// Replication heal tasks queued for this bucket
pub replication_tasks_queued: u64,
}
/// Disk-specific metrics
@@ -127,6 +143,10 @@ pub struct MetricsCollector {
total_cycles: AtomicU64,
healthy_objects: AtomicU64,
corrupted_objects: AtomicU64,
replication_tasks_queued: AtomicU64,
replication_pending_objects: AtomicU64,
replication_failed_objects: AtomicU64,
replication_lagging_objects: AtomicU64,
}
impl MetricsCollector {
@@ -146,6 +166,10 @@ impl MetricsCollector {
total_cycles: AtomicU64::new(0),
healthy_objects: AtomicU64::new(0),
corrupted_objects: AtomicU64::new(0),
replication_tasks_queued: AtomicU64::new(0),
replication_pending_objects: AtomicU64::new(0),
replication_failed_objects: AtomicU64::new(0),
replication_lagging_objects: AtomicU64::new(0),
}
}
@@ -194,6 +218,26 @@ impl MetricsCollector {
self.heal_tasks_failed.fetch_add(count, Ordering::Relaxed);
}
/// Increment replication tasks queued
pub fn increment_replication_tasks_queued(&self, count: u64) {
self.replication_tasks_queued.fetch_add(count, Ordering::Relaxed);
}
/// Increment replication pending objects
pub fn increment_replication_pending_objects(&self, count: u64) {
self.replication_pending_objects.fetch_add(count, Ordering::Relaxed);
}
/// Increment replication failed objects
pub fn increment_replication_failed_objects(&self, count: u64) {
self.replication_failed_objects.fetch_add(count, Ordering::Relaxed);
}
/// Increment replication lagging objects
pub fn increment_replication_lagging_objects(&self, count: u64) {
self.replication_lagging_objects.fetch_add(count, Ordering::Relaxed);
}
/// Set current cycle
pub fn set_current_cycle(&self, cycle: u64) {
self.current_cycle.store(cycle, Ordering::Relaxed);
@@ -228,6 +272,10 @@ impl MetricsCollector {
heal_tasks_failed: self.heal_tasks_failed.load(Ordering::Relaxed),
healthy_objects: self.healthy_objects.load(Ordering::Relaxed),
corrupted_objects: self.corrupted_objects.load(Ordering::Relaxed),
replication_tasks_queued: self.replication_tasks_queued.load(Ordering::Relaxed),
replication_pending_objects: self.replication_pending_objects.load(Ordering::Relaxed),
replication_failed_objects: self.replication_failed_objects.load(Ordering::Relaxed),
replication_lagging_objects: self.replication_lagging_objects.load(Ordering::Relaxed),
last_activity: Some(SystemTime::now()),
current_cycle: self.current_cycle.load(Ordering::Relaxed),
total_cycles: self.total_cycles.load(Ordering::Relaxed),
@@ -255,6 +303,10 @@ impl MetricsCollector {
self.total_cycles.store(0, Ordering::Relaxed);
self.healthy_objects.store(0, Ordering::Relaxed);
self.corrupted_objects.store(0, Ordering::Relaxed);
self.replication_tasks_queued.store(0, Ordering::Relaxed);
self.replication_pending_objects.store(0, Ordering::Relaxed);
self.replication_failed_objects.store(0, Ordering::Relaxed);
self.replication_lagging_objects.store(0, Ordering::Relaxed);
info!("Scanner metrics reset");
}
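
(Illustration, not part of the diff.) A compact sketch of the relaxed-ordering atomic counter pattern the new replication metrics follow; this is not RustFS code.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Each counter is independent and only read for reporting, so Relaxed ordering
// is sufficient; a snapshot implies no cross-counter consistency.
#[derive(Default)]
struct Counter(AtomicU64);

impl Counter {
    fn add(&self, n: u64) {
        self.0.fetch_add(n, Ordering::Relaxed);
    }
    fn snapshot(&self) -> u64 {
        self.0.load(Ordering::Relaxed)
    }
}

fn main() {
    let queued = Counter::default();
    queued.add(3);
    assert_eq!(queued.snapshot(), 3);
}
```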

View File

@@ -19,6 +19,7 @@ use crate::scanner::{
};
use rustfs_common::data_usage::DataUsageInfo;
use rustfs_ecstore::StorageAPI;
use rustfs_ecstore::bucket::utils::is_meta_bucketname;
use rustfs_ecstore::disk::{DiskAPI, DiskStore};
use serde::{Deserialize, Serialize};
use std::{
@@ -879,7 +880,7 @@ impl NodeScanner {
let bucket_name = &bucket_info.name;
// skip system internal buckets
if bucket_name == ".minio.sys" {
if is_meta_bucketname(bucket_name) {
continue;
}

View File

@@ -347,8 +347,7 @@ impl DecentralizedStatsAggregator {
// update cache
*self.cached_stats.write().await = Some(aggregated.clone());
// Use the time when aggregation completes as cache timestamp to avoid premature expiry during long runs
*self.cache_timestamp.write().await = SystemTime::now();
*self.cache_timestamp.write().await = aggregation_timestamp;
Ok(aggregated)
}
@@ -360,8 +359,7 @@ impl DecentralizedStatsAggregator {
// update cache
*self.cached_stats.write().await = Some(aggregated.clone());
// Cache timestamp should reflect completion time rather than aggregation start
*self.cache_timestamp.write().await = SystemTime::now();
*self.cache_timestamp.write().await = now;
Ok(aggregated)
}
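
(Illustration, not part of the diff.) To make the cache-timestamp comments above concrete, here is a neutral, standalone sketch of a TTL freshness check; whether the stored timestamp is taken at aggregation start or at completion decides how long a freshly computed result survives after a slow aggregation run. Names are placeholders.

```rust
use std::time::{Duration, SystemTime};

// A cache entry is fresh while its recorded timestamp is younger than the TTL.
// If the timestamp is recorded when aggregation *starts*, a run that takes
// longer than the TTL yields a result that is already "expired" when it lands.
fn cache_is_fresh(cache_timestamp: SystemTime, ttl: Duration) -> bool {
    SystemTime::now()
        .duration_since(cache_timestamp)
        .map(|age| age < ttl)
        .unwrap_or(false)
}

fn main() {
    let stamped_now = SystemTime::now();
    assert!(cache_is_fresh(stamped_now, Duration::from_secs(30)));
}
```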

View File

@@ -1,112 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![cfg(test)]
use rustfs_ahm::scanner::data_scanner::Scanner;
use rustfs_common::data_usage::DataUsageInfo;
use rustfs_ecstore::GLOBAL_Endpoints;
use rustfs_ecstore::bucket::metadata_sys::{BucketMetadataSys, GLOBAL_BucketMetadataSys};
use rustfs_ecstore::endpoints::EndpointServerPools;
use rustfs_ecstore::store::ECStore;
use rustfs_ecstore::store_api::{ObjectIO, PutObjReader, StorageAPI};
use std::sync::{Arc, Once};
use tempfile::TempDir;
use tokio::sync::RwLock;
use tokio_util::sync::CancellationToken;
use tracing::Level;
/// Build a minimal single-node ECStore over a temp directory and populate objects.
async fn create_store_with_objects(count: usize) -> (TempDir, std::sync::Arc<ECStore>) {
let temp_dir = TempDir::new().expect("temp dir");
let root = temp_dir.path().to_string_lossy().to_string();
// Create endpoints from the temp dir
let (endpoint_pools, _setup) = EndpointServerPools::from_volumes("127.0.0.1:0", vec![root])
.await
.expect("endpoint pools");
// Seed globals required by metadata sys if not already set
if GLOBAL_Endpoints.get().is_none() {
let _ = GLOBAL_Endpoints.set(endpoint_pools.clone());
}
let store = ECStore::new("127.0.0.1:0".parse().unwrap(), endpoint_pools, CancellationToken::new())
.await
.expect("create store");
if rustfs_ecstore::global::new_object_layer_fn().is_none() {
rustfs_ecstore::global::set_object_layer(store.clone()).await;
}
// Initialize metadata system before bucket operations
if GLOBAL_BucketMetadataSys.get().is_none() {
let mut sys = BucketMetadataSys::new(store.clone());
sys.init(Vec::new()).await;
let _ = GLOBAL_BucketMetadataSys.set(Arc::new(RwLock::new(sys)));
}
store
.make_bucket("fallback-bucket", &rustfs_ecstore::store_api::MakeBucketOptions::default())
.await
.expect("make bucket");
for i in 0..count {
let key = format!("obj-{i:04}");
let data = format!("payload-{i}");
let mut reader = PutObjReader::from_vec(data.into_bytes());
store
.put_object("fallback-bucket", &key, &mut reader, &rustfs_ecstore::store_api::ObjectOptions::default())
.await
.expect("put object");
}
(temp_dir, store)
}
static INIT: Once = Once::new();
fn init_tracing(filter_level: Level) {
INIT.call_once(|| {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.with_max_level(filter_level)
.with_timer(tracing_subscriber::fmt::time::UtcTime::rfc_3339())
.with_thread_names(true)
.try_init();
});
}
#[tokio::test]
async fn fallback_builds_full_counts_over_100_objects() {
init_tracing(Level::ERROR);
let (_tmp, store) = create_store_with_objects(1000).await;
let scanner = Scanner::new(None, None);
// Directly call the fallback builder to ensure pagination works.
let usage: DataUsageInfo = scanner.build_data_usage_from_ecstore(&store).await.expect("fallback usage");
let bucket = usage.buckets_usage.get("fallback-bucket").expect("bucket usage present");
assert!(
usage.objects_total_count >= 1000,
"total objects should be >=1000, got {}",
usage.objects_total_count
);
assert!(
bucket.objects_count >= 1000,
"bucket objects should be >=1000, got {}",
bucket.objects_count
);
}

View File

@@ -12,39 +12,56 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use rustfs_ahm::heal::{
manager::{HealConfig, HealManager},
storage::{ECStoreHealStorage, HealStorageAPI},
task::{HealOptions, HealPriority, HealRequest, HealTaskStatus, HealType},
use async_trait::async_trait;
use rustfs_ahm::{
heal::{
manager::{HealConfig, HealManager},
storage::{ECStoreHealStorage, HealStorageAPI},
task::{HealOptions, HealPriority, HealRequest, HealTaskStatus, HealType},
},
scanner::{ScanMode, Scanner},
};
use rustfs_common::heal_channel::{HealOpts, HealScanMode};
use rustfs_ecstore::bucket::metadata_sys::{self, set_bucket_metadata};
use rustfs_ecstore::bucket::replication::{
DeletedObjectReplicationInfo, DynReplicationPool, GLOBAL_REPLICATION_POOL, ReplicationPoolTrait, ReplicationPriority,
};
use rustfs_ecstore::bucket::target::{BucketTarget, BucketTargetType, BucketTargets};
use rustfs_ecstore::bucket::utils::serialize;
use rustfs_ecstore::error::Error as EcstoreError;
use rustfs_ecstore::{
disk::endpoint::Endpoint,
endpoints::{EndpointServerPools, Endpoints, PoolEndpoints},
store::ECStore,
store_api::{ObjectIO, ObjectOptions, PutObjReader, StorageAPI},
};
use rustfs_filemeta::{ReplicateObjectInfo, ReplicationStatusType};
use rustfs_utils::http::headers::{AMZ_BUCKET_REPLICATION_STATUS, RESERVED_METADATA_PREFIX_LOWER};
use s3s::dto::{
BucketVersioningStatus, Destination, ExistingObjectReplication, ExistingObjectReplicationStatus, ReplicationConfiguration,
ReplicationRule, ReplicationRuleStatus, VersioningConfiguration,
};
use serial_test::serial;
use std::{
os::unix::fs::PermissionsExt,
path::PathBuf,
sync::{Arc, Once, OnceLock},
time::Duration,
};
use time::OffsetDateTime;
use tokio::fs;
use tokio::sync::Mutex;
use tokio_util::sync::CancellationToken;
use tracing::info;
use walkdir::WalkDir;
static GLOBAL_ENV: OnceLock<(Vec<PathBuf>, Arc<ECStore>, Arc<ECStoreHealStorage>)> = OnceLock::new();
static INIT: Once = Once::new();
const TEST_REPLICATION_TARGET_ARN: &str = "arn:aws:s3:::rustfs-replication-heal-target";
pub fn init_tracing() {
fn init_tracing() {
INIT.call_once(|| {
let _ = tracing_subscriber::fmt()
.with_env_filter(tracing_subscriber::EnvFilter::from_default_env())
.with_timer(tracing_subscriber::fmt::time::UtcTime::rfc_3339())
.with_thread_names(true)
.try_init();
let _ = tracing_subscriber::fmt::try_init();
});
}
@@ -149,6 +166,225 @@ async fn upload_test_object(ecstore: &Arc<ECStore>, bucket: &str, object: &str,
info!("Uploaded test object: {}/{} ({} bytes)", bucket, object, object_info.size);
}
fn delete_first_part_file(disk_paths: &[PathBuf], bucket: &str, object: &str) -> PathBuf {
for disk_path in disk_paths {
let obj_dir = disk_path.join(bucket).join(object);
if !obj_dir.exists() {
continue;
}
if let Some(part_path) = WalkDir::new(&obj_dir)
.min_depth(2)
.max_depth(2)
.into_iter()
.filter_map(Result::ok)
.find(|entry| {
entry.file_type().is_file()
&& entry
.file_name()
.to_str()
.map(|name| name.starts_with("part."))
.unwrap_or(false)
})
.map(|entry| entry.into_path())
{
std::fs::remove_file(&part_path).expect("Failed to delete part file");
return part_path;
}
}
panic!("Failed to locate part file for {}/{}", bucket, object);
}
fn delete_xl_meta_file(disk_paths: &[PathBuf], bucket: &str, object: &str) -> PathBuf {
for disk_path in disk_paths {
let xl_meta_path = disk_path.join(bucket).join(object).join("xl.meta");
if xl_meta_path.exists() {
std::fs::remove_file(&xl_meta_path).expect("Failed to delete xl.meta file");
return xl_meta_path;
}
}
panic!("Failed to locate xl.meta for {}/{}", bucket, object);
}
struct FormatPathGuard {
original: PathBuf,
backup: PathBuf,
}
impl FormatPathGuard {
fn new(original: PathBuf) -> std::io::Result<Self> {
let backup = original.with_extension("bak");
if backup.exists() {
std::fs::remove_file(&backup)?;
}
std::fs::rename(&original, &backup)?;
Ok(Self { original, backup })
}
}
impl Drop for FormatPathGuard {
fn drop(&mut self) {
if self.backup.exists() {
let _ = std::fs::rename(&self.backup, &self.original);
}
}
}
struct PermissionGuard {
path: PathBuf,
original_mode: u32,
}
impl PermissionGuard {
fn new(path: PathBuf, new_mode: u32) -> std::io::Result<Self> {
let metadata = std::fs::metadata(&path)?;
let original_mode = metadata.permissions().mode();
std::fs::set_permissions(&path, std::fs::Permissions::from_mode(new_mode))?;
Ok(Self { path, original_mode })
}
}
impl Drop for PermissionGuard {
fn drop(&mut self) {
if self.path.exists() {
let _ = std::fs::set_permissions(&self.path, std::fs::Permissions::from_mode(self.original_mode));
}
}
}
#[derive(Debug, Default)]
struct RecordingReplicationPool {
replica_tasks: Mutex<Vec<ReplicateObjectInfo>>,
delete_tasks: Mutex<Vec<DeletedObjectReplicationInfo>>,
}
impl RecordingReplicationPool {
async fn take_replica_tasks(&self) -> Vec<ReplicateObjectInfo> {
let mut guard = self.replica_tasks.lock().await;
guard.drain(..).collect()
}
async fn clear(&self) {
self.replica_tasks.lock().await.clear();
self.delete_tasks.lock().await.clear();
}
}
#[async_trait]
impl ReplicationPoolTrait for RecordingReplicationPool {
async fn queue_replica_task(&self, ri: ReplicateObjectInfo) {
self.replica_tasks.lock().await.push(ri);
}
async fn queue_replica_delete_task(&self, ri: DeletedObjectReplicationInfo) {
self.delete_tasks.lock().await.push(ri);
}
async fn resize(&self, _priority: ReplicationPriority, _max_workers: usize, _max_l_workers: usize) {}
async fn init_resync(
self: Arc<Self>,
_cancellation_token: CancellationToken,
_buckets: Vec<String>,
) -> Result<(), EcstoreError> {
Ok(())
}
}
async fn ensure_test_replication_pool() -> Arc<RecordingReplicationPool> {
static TEST_POOL: OnceLock<Arc<RecordingReplicationPool>> = OnceLock::new();
if let Some(pool) = TEST_POOL.get() {
pool.clear().await;
return pool.clone();
}
let pool = Arc::new(RecordingReplicationPool::default());
let dyn_pool: Arc<DynReplicationPool> = pool.clone();
let global_pool = GLOBAL_REPLICATION_POOL
.get_or_init(|| {
let pool_clone = dyn_pool.clone();
async move { pool_clone }
})
.await
.clone();
assert!(
Arc::ptr_eq(&dyn_pool, &global_pool),
"GLOBAL_REPLICATION_POOL initialized before test replication pool"
);
let _ = TEST_POOL.set(pool.clone());
pool.clear().await;
pool
}
async fn configure_bucket_replication(bucket: &str, target_arn: &str) {
let meta = metadata_sys::get(bucket)
.await
.expect("bucket metadata should exist for replication configuration");
let mut metadata = (*meta).clone();
let replication_rule = ReplicationRule {
delete_marker_replication: None,
delete_replication: None,
destination: Destination {
access_control_translation: None,
account: None,
bucket: target_arn.to_string(),
encryption_configuration: None,
metrics: None,
replication_time: None,
storage_class: None,
},
existing_object_replication: Some(ExistingObjectReplication {
status: ExistingObjectReplicationStatus::from_static(ExistingObjectReplicationStatus::ENABLED),
}),
filter: None,
id: Some("heal-replication-rule".to_string()),
prefix: Some(String::new()),
priority: Some(1),
source_selection_criteria: None,
status: ReplicationRuleStatus::from_static(ReplicationRuleStatus::ENABLED),
};
let replication_cfg = ReplicationConfiguration {
role: target_arn.to_string(),
rules: vec![replication_rule],
};
let bucket_targets = BucketTargets {
targets: vec![BucketTarget {
source_bucket: bucket.to_string(),
endpoint: "replication.invalid".to_string(),
target_bucket: "replication-target".to_string(),
arn: target_arn.to_string(),
target_type: BucketTargetType::ReplicationService,
..Default::default()
}],
};
metadata.replication_config = Some(replication_cfg.clone());
metadata.replication_config_xml = serialize(&replication_cfg).expect("serialize replication config");
metadata.replication_config_updated_at = OffsetDateTime::now_utc();
metadata.bucket_target_config = Some(bucket_targets.clone());
metadata.bucket_targets_config_json = serde_json::to_vec(&bucket_targets).expect("serialize bucket targets");
metadata.bucket_targets_config_updated_at = OffsetDateTime::now_utc();
let versioning_cfg = VersioningConfiguration {
status: Some(BucketVersioningStatus::from_static(BucketVersioningStatus::ENABLED)),
..Default::default()
};
metadata.versioning_config = Some(versioning_cfg.clone());
metadata.versioning_config_xml = serialize(&versioning_cfg).expect("serialize versioning config");
metadata.versioning_config_updated_at = OffsetDateTime::now_utc();
set_bucket_metadata(bucket.to_string(), metadata)
.await
.expect("failed to update bucket metadata for replication");
}
mod serial_tests {
use super::*;
@@ -360,7 +596,7 @@ mod serial_tests {
// Create heal manager with faster interval
let cfg = HealConfig {
heal_interval: Duration::from_secs(1),
heal_interval: Duration::from_secs(2),
..Default::default()
};
let heal_manager = HealManager::new(heal_storage.clone(), Some(cfg));
@@ -434,4 +670,380 @@ mod serial_tests {
info!("Direct heal storage API test passed");
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
#[serial]
async fn test_scanner_submits_heal_task_when_part_missing() {
let (disk_paths, ecstore, heal_storage) = setup_test_env().await;
let bucket_name = format!("scanner-heal-bucket-{}", uuid::Uuid::new_v4().simple());
let object_name = "scanner-heal-object.txt";
create_test_bucket(&ecstore, &bucket_name).await;
upload_test_object(&ecstore, &bucket_name, object_name, b"Scanner auto-heal data").await;
let heal_cfg = HealConfig {
enable_auto_heal: true,
heal_interval: Duration::from_millis(20),
max_concurrent_heals: 4,
..Default::default()
};
let heal_manager = Arc::new(HealManager::new(heal_storage.clone(), Some(heal_cfg)));
heal_manager.start().await.unwrap();
let scanner = Scanner::new(None, Some(heal_manager.clone()));
scanner.initialize_with_ecstore().await;
scanner.set_config_enable_healing(true).await;
scanner.set_config_scan_mode(ScanMode::Deep).await;
scanner
.scan_cycle()
.await
.expect("Initial scan should succeed before simulating failures");
let baseline_stats = heal_manager.get_statistics().await;
let deleted_part_path = delete_first_part_file(&disk_paths, &bucket_name, object_name);
assert!(!deleted_part_path.exists(), "Deleted part file should not exist before healing");
scanner
.scan_cycle()
.await
.expect("Scan after part deletion should finish and enqueue heal task");
tokio::time::sleep(Duration::from_millis(500)).await;
let updated_stats = heal_manager.get_statistics().await;
assert!(
updated_stats.total_tasks > baseline_stats.total_tasks,
"Scanner should submit heal tasks when data parts go missing"
);
// Allow heal manager to restore the missing part
tokio::time::sleep(Duration::from_secs(2)).await;
assert!(
deleted_part_path.exists(),
"Missing part should be restored after heal: {:?}",
deleted_part_path
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
#[serial]
async fn test_scanner_submits_metadata_heal_when_xl_meta_missing() {
let (disk_paths, ecstore, heal_storage) = setup_test_env().await;
let bucket_name = format!("scanner-meta-bucket-{}", uuid::Uuid::new_v4().simple());
let object_name = "scanner-meta-object.txt";
create_test_bucket(&ecstore, &bucket_name).await;
upload_test_object(&ecstore, &bucket_name, object_name, b"Scanner metadata heal data").await;
let heal_cfg = HealConfig {
enable_auto_heal: true,
heal_interval: Duration::from_millis(20),
max_concurrent_heals: 4,
..Default::default()
};
let heal_manager = Arc::new(HealManager::new(heal_storage.clone(), Some(heal_cfg)));
heal_manager.start().await.unwrap();
let scanner = Scanner::new(None, Some(heal_manager.clone()));
scanner.initialize_with_ecstore().await;
scanner.set_config_enable_healing(true).await;
scanner.set_config_scan_mode(ScanMode::Deep).await;
scanner
.scan_cycle()
.await
.expect("Initial scan should succeed before metadata deletion");
let baseline_stats = heal_manager.get_statistics().await;
let deleted_meta_path = delete_xl_meta_file(&disk_paths, &bucket_name, object_name);
assert!(!deleted_meta_path.exists(), "Deleted xl.meta should not exist before healing");
scanner
.scan_cycle()
.await
.expect("Scan after metadata deletion should finish and enqueue heal task");
tokio::time::sleep(Duration::from_millis(800)).await;
let updated_stats = heal_manager.get_statistics().await;
assert!(
updated_stats.total_tasks > baseline_stats.total_tasks,
"Scanner should submit metadata heal tasks when xl.meta is missing"
);
tokio::time::sleep(Duration::from_secs(2)).await;
assert!(
deleted_meta_path.exists(),
"xl.meta should be restored after heal: {:?}",
deleted_meta_path
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
#[serial]
async fn test_scanner_triggers_replication_heal_when_status_failed() {
let (_disk_paths, ecstore, heal_storage) = setup_test_env().await;
let bucket_name = format!("scanner-replication-bucket-{}", uuid::Uuid::new_v4().simple());
let object_name = "scanner-replication-heal-object";
create_test_bucket(&ecstore, &bucket_name).await;
configure_bucket_replication(&bucket_name, TEST_REPLICATION_TARGET_ARN).await;
let replication_pool = ensure_test_replication_pool().await;
replication_pool.clear().await;
let mut opts = ObjectOptions::default();
opts.user_defined.insert(
AMZ_BUCKET_REPLICATION_STATUS.to_string(),
ReplicationStatusType::Failed.as_str().to_string(),
);
let replication_status_key = format!("{}replication-status", RESERVED_METADATA_PREFIX_LOWER);
opts.user_defined.insert(
replication_status_key.clone(),
format!("{}={};", TEST_REPLICATION_TARGET_ARN, ReplicationStatusType::Failed.as_str()),
);
let mut reader = PutObjReader::from_vec(b"replication heal data".to_vec());
ecstore
.put_object(&bucket_name, object_name, &mut reader, &opts)
.await
.expect("Failed to upload replication test object");
let object_info = ecstore
.get_object_info(&bucket_name, object_name, &ObjectOptions::default())
.await
.expect("Failed to read object info for replication test");
assert_eq!(
object_info
.user_defined
.get(AMZ_BUCKET_REPLICATION_STATUS)
.map(|s| s.as_str()),
Some(ReplicationStatusType::Failed.as_str()),
"Uploaded object should contain replication status metadata"
);
assert!(
object_info
.user_defined
.get(&replication_status_key)
.map(|s| s.contains(ReplicationStatusType::Failed.as_str()))
.unwrap_or(false),
"Uploaded object should preserve internal replication status metadata"
);
let heal_cfg = HealConfig {
enable_auto_heal: true,
heal_interval: Duration::from_millis(20),
max_concurrent_heals: 4,
..Default::default()
};
let heal_manager = Arc::new(HealManager::new(heal_storage.clone(), Some(heal_cfg)));
heal_manager.start().await.unwrap();
let scanner = Scanner::new(None, Some(heal_manager.clone()));
scanner.initialize_with_ecstore().await;
scanner.set_config_enable_healing(true).await;
scanner.set_config_scan_mode(ScanMode::Deep).await;
scanner
.scan_cycle()
.await
.expect("Scan cycle should succeed and evaluate replication state");
let replica_tasks = replication_pool.take_replica_tasks().await;
assert!(
replica_tasks
.iter()
.any(|info| info.bucket == bucket_name && info.name == object_name),
"Scanner should enqueue replication heal task when replication status is FAILED (recorded tasks: {:?})",
replica_tasks
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
#[serial]
async fn test_scanner_submits_erasure_set_heal_when_disk_offline() {
let (disk_paths, _ecstore, heal_storage) = setup_test_env().await;
let format_path = disk_paths[0].join(".rustfs.sys").join("format.json");
assert!(format_path.exists(), "format.json should exist before simulating offline disk");
let _format_guard = FormatPathGuard::new(format_path.clone()).expect("failed to move format.json");
let heal_cfg = HealConfig {
enable_auto_heal: true,
heal_interval: Duration::from_millis(20),
max_concurrent_heals: 2,
..Default::default()
};
let heal_manager = Arc::new(HealManager::new(heal_storage.clone(), Some(heal_cfg)));
heal_manager.start().await.unwrap();
let scanner = Scanner::new(None, Some(heal_manager.clone()));
scanner.initialize_with_ecstore().await;
scanner.set_config_enable_healing(true).await;
scanner.set_config_scan_mode(ScanMode::Normal).await;
let baseline_stats = heal_manager.get_statistics().await;
scanner
.scan_cycle()
.await
.expect("Scan cycle should complete even when a disk is offline");
tokio::time::sleep(Duration::from_millis(200)).await;
let updated_stats = heal_manager.get_statistics().await;
assert!(
updated_stats.total_tasks > baseline_stats.total_tasks,
"Scanner should enqueue erasure set heal when disk is offline (before {}, after {})",
baseline_stats.total_tasks,
updated_stats.total_tasks
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
#[serial]
async fn test_scanner_submits_erasure_set_heal_when_listing_volumes_fails() {
let (disk_paths, ecstore, heal_storage) = setup_test_env().await;
let bucket_name = format!("scanner-list-volumes-{}", uuid::Uuid::new_v4().simple());
let object_name = "scanner-list-volumes-object";
create_test_bucket(&ecstore, &bucket_name).await;
upload_test_object(&ecstore, &bucket_name, object_name, b"disk list volumes failure").await;
let heal_cfg = HealConfig {
enable_auto_heal: true,
heal_interval: Duration::from_millis(20),
max_concurrent_heals: 2,
..Default::default()
};
let heal_manager = Arc::new(HealManager::new(heal_storage.clone(), Some(heal_cfg)));
heal_manager.start().await.unwrap();
let scanner = Scanner::new(None, Some(heal_manager.clone()));
scanner.initialize_with_ecstore().await;
scanner.set_config_enable_healing(true).await;
scanner.set_config_scan_mode(ScanMode::Deep).await;
scanner
.scan_cycle()
.await
.expect("Initial scan should succeed before simulating disk permission issues");
let baseline_stats = heal_manager.get_statistics().await;
let disk_root = disk_paths[0].clone();
assert!(disk_root.exists(), "Disk root should exist so we can simulate permission failures");
{
let _root_perm_guard =
PermissionGuard::new(disk_root.clone(), 0o000).expect("Failed to change disk root permissions");
let scan_result = scanner.scan_cycle().await;
assert!(
scan_result.is_ok(),
"Scan cycle should continue even if disk volumes cannot be listed: {:?}",
scan_result
);
tokio::time::sleep(Duration::from_millis(200)).await;
let updated_stats = heal_manager.get_statistics().await;
assert!(
updated_stats.total_tasks > baseline_stats.total_tasks,
"Scanner should enqueue erasure set heal when listing volumes fails (before {}, after {})",
baseline_stats.total_tasks,
updated_stats.total_tasks
);
}
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
#[serial]
async fn test_scanner_submits_erasure_set_heal_when_disk_access_fails() {
let (disk_paths, ecstore, heal_storage) = setup_test_env().await;
let bucket_name = format!("scanner-access-error-{}", uuid::Uuid::new_v4().simple());
let object_name = "scanner-access-error-object.txt";
create_test_bucket(&ecstore, &bucket_name).await;
upload_test_object(&ecstore, &bucket_name, object_name, b"disk access failure").await;
let bucket_path = disk_paths[0].join(&bucket_name);
assert!(bucket_path.exists(), "Bucket path should exist on disk for access test");
let _perm_guard = PermissionGuard::new(bucket_path.clone(), 0o000).expect("Failed to change permissions");
let heal_cfg = HealConfig {
enable_auto_heal: true,
heal_interval: Duration::from_millis(20),
max_concurrent_heals: 2,
..Default::default()
};
let heal_manager = Arc::new(HealManager::new(heal_storage.clone(), Some(heal_cfg)));
heal_manager.start().await.unwrap();
let scanner = Scanner::new(None, Some(heal_manager.clone()));
scanner.initialize_with_ecstore().await;
scanner.set_config_enable_healing(true).await;
scanner.set_config_scan_mode(ScanMode::Deep).await;
let baseline_stats = heal_manager.get_statistics().await;
let scan_result = scanner.scan_cycle().await;
assert!(
scan_result.is_ok(),
"Scan cycle should complete even if a disk volume has access errors: {:?}",
scan_result
);
tokio::time::sleep(Duration::from_millis(200)).await;
let updated_stats = heal_manager.get_statistics().await;
assert!(
updated_stats.total_tasks > baseline_stats.total_tasks,
"Scanner should enqueue erasure set heal when disk access fails (before {}, after {})",
baseline_stats.total_tasks,
updated_stats.total_tasks
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
#[serial]
async fn test_scanner_detects_missing_bucket_directory_and_queues_bucket_heal() {
let (disk_paths, ecstore, heal_storage) = setup_test_env().await;
let bucket_name = format!("scanner-missing-bucket-{}", uuid::Uuid::new_v4().simple());
create_test_bucket(&ecstore, &bucket_name).await;
upload_test_object(&ecstore, &bucket_name, "seed-object", b"bucket heal data").await;
let scanner_heal_cfg = HealConfig {
enable_auto_heal: true,
heal_interval: Duration::from_millis(20),
max_concurrent_heals: 4,
..Default::default()
};
let scanner_heal_manager = Arc::new(HealManager::new(heal_storage.clone(), Some(scanner_heal_cfg)));
scanner_heal_manager.start().await.unwrap();
let scanner = Scanner::new(None, Some(scanner_heal_manager.clone()));
scanner.initialize_with_ecstore().await;
scanner.set_config_enable_healing(true).await;
scanner.set_config_scan_mode(ScanMode::Normal).await;
scanner
.scan_cycle()
.await
.expect("Initial scan should succeed before deleting bucket directory");
let baseline_stats = scanner_heal_manager.get_statistics().await;
let missing_dir = disk_paths[0].join(&bucket_name);
assert!(missing_dir.exists());
std::fs::remove_dir_all(&missing_dir).expect("Failed to remove bucket directory for heal simulation");
assert!(!missing_dir.exists(), "Bucket directory should be removed on disk to trigger heal");
scanner
.run_volume_consistency_check()
.await
.expect("Volume consistency check should run after bucket removal");
tokio::time::sleep(Duration::from_millis(800)).await;
let updated_stats = scanner_heal_manager.get_statistics().await;
assert!(
updated_stats.total_tasks > baseline_stats.total_tasks,
"Scanner should submit bucket heal tasks when a bucket directory is missing"
);
tokio::time::sleep(Duration::from_secs(1)).await;
assert!(missing_dir.exists(), "Bucket directory should be restored after heal");
}
}

View File

@@ -495,26 +495,6 @@ mod serial_tests {
object_name,
} = &elm.1;
println!("cache row:{ver_no} {ver_id} {mod_time} {type_:?} {object_name}");
//eval_inner(&oi.to_lifecycle_opts(), OffsetDateTime::now_utc()).await;
eval_inner(
&lifecycle::ObjectOpts {
name: oi.name.clone(),
user_tags: oi.user_tags.clone(),
version_id: oi.version_id.map(|v| v.to_string()).unwrap_or_default(),
mod_time: oi.mod_time,
size: oi.size as usize,
is_latest: oi.is_latest,
num_versions: oi.num_versions,
delete_marker: oi.delete_marker,
successor_mod_time: oi.successor_mod_time,
restore_ongoing: oi.restore_ongoing,
restore_expires: oi.restore_expires,
transition_status: oi.transitioned_object.status.clone(),
..Default::default()
},
OffsetDateTime::now_utc(),
)
.await;
}
println!("row:{row:?}");
}
@@ -526,261 +506,3 @@ mod serial_tests {
println!("Lifecycle cache test completed");
}
}
async fn eval_inner(&self, obj: &ObjectOpts, now: OffsetDateTime) -> Event {
let mut events = Vec::<Event>::new();
info!(
"eval_inner: object={}, mod_time={:?}, now={:?}, is_latest={}, delete_marker={}",
obj.name, obj.mod_time, now, obj.is_latest, obj.delete_marker
);
if obj.mod_time.expect("err").unix_timestamp() == 0 {
info!("eval_inner: mod_time is 0, returning default event");
return Event::default();
}
if let Some(restore_expires) = obj.restore_expires {
if !restore_expires.unix_timestamp() == 0 && now.unix_timestamp() > restore_expires.unix_timestamp() {
let mut action = IlmAction::DeleteRestoredAction;
if !obj.is_latest {
action = IlmAction::DeleteRestoredVersionAction;
}
events.push(Event {
action,
due: Some(now),
rule_id: "".into(),
noncurrent_days: 0,
newer_noncurrent_versions: 0,
storage_class: "".into(),
});
}
}
if let Some(ref lc_rules) = self.filter_rules(obj).await {
for rule in lc_rules.iter() {
if obj.expired_object_deletemarker() {
if let Some(expiration) = rule.expiration.as_ref() {
if let Some(expired_object_delete_marker) = expiration.expired_object_delete_marker {
events.push(Event {
action: IlmAction::DeleteVersionAction,
rule_id: rule.id.clone().expect("err!"),
due: Some(now),
noncurrent_days: 0,
newer_noncurrent_versions: 0,
storage_class: "".into(),
});
break;
}
if let Some(days) = expiration.days {
let expected_expiry = expected_expiry_time(obj.mod_time.unwrap(), days /*, date*/);
if now.unix_timestamp() >= expected_expiry.unix_timestamp() {
events.push(Event {
action: IlmAction::DeleteVersionAction,
rule_id: rule.id.clone().expect("err!"),
due: Some(expected_expiry),
noncurrent_days: 0,
newer_noncurrent_versions: 0,
storage_class: "".into(),
});
break;
}
}
}
}
if obj.is_latest {
if let Some(ref expiration) = rule.expiration {
if let Some(expired_object_delete_marker) = expiration.expired_object_delete_marker {
if obj.delete_marker && expired_object_delete_marker {
let due = expiration.next_due(obj);
if let Some(due) = due {
if now.unix_timestamp() >= due.unix_timestamp() {
events.push(Event {
action: IlmAction::DelMarkerDeleteAllVersionsAction,
rule_id: rule.id.clone().expect("err!"),
due: Some(due),
noncurrent_days: 0,
newer_noncurrent_versions: 0,
storage_class: "".into(),
});
}
}
continue;
}
}
}
}
if !obj.is_latest {
if let Some(ref noncurrent_version_expiration) = rule.noncurrent_version_expiration {
if let Some(newer_noncurrent_versions) = noncurrent_version_expiration.newer_noncurrent_versions {
if newer_noncurrent_versions > 0 {
continue;
}
}
}
}
if !obj.is_latest {
if let Some(ref noncurrent_version_expiration) = rule.noncurrent_version_expiration {
if let Some(noncurrent_days) = noncurrent_version_expiration.noncurrent_days {
if noncurrent_days != 0 {
if let Some(successor_mod_time) = obj.successor_mod_time {
let expected_expiry = expected_expiry_time(successor_mod_time, noncurrent_days);
if now.unix_timestamp() >= expected_expiry.unix_timestamp() {
events.push(Event {
action: IlmAction::DeleteVersionAction,
rule_id: rule.id.clone().expect("err!"),
due: Some(expected_expiry),
noncurrent_days: 0,
newer_noncurrent_versions: 0,
storage_class: "".into(),
});
}
}
}
}
}
}
if !obj.is_latest {
if let Some(transition) = rule
    .noncurrent_version_transitions
    .as_ref()
    .and_then(|transitions| transitions.first())
{
    if let Some(ref storage_class) = transition.storage_class {
        if !storage_class.as_str().is_empty() && !obj.delete_marker && obj.transition_status != TRANSITION_COMPLETE {
            if let Some(due0) = transition.next_due(obj) {
                if now.unix_timestamp() == 0 || now.unix_timestamp() > due0.unix_timestamp() {
                    events.push(Event {
                        action: IlmAction::TransitionVersionAction,
                        rule_id: rule.id.clone().expect("err!"),
                        due: Some(due0),
                        storage_class: storage_class.as_str().to_string(),
                        ..Default::default()
                    });
                }
            }
        }
    }
}
}
info!(
"eval_inner: checking expiration condition - is_latest={}, delete_marker={}, version_id={:?}, condition_met={}",
obj.is_latest,
obj.delete_marker,
obj.version_id,
(obj.is_latest || obj.version_id.is_empty()) && !obj.delete_marker
);
// Allow expiration for latest objects OR non-versioned objects (empty version_id)
if (obj.is_latest || obj.version_id.is_empty()) && !obj.delete_marker {
info!("eval_inner: entering expiration check");
if let Some(ref expiration) = rule.expiration {
if let Some(ref date) = expiration.date {
let date0 = OffsetDateTime::from(date.clone());
if date0.unix_timestamp() != 0 && (now.unix_timestamp() >= date0.unix_timestamp()) {
info!("eval_inner: expiration by date - date0={:?}", date0);
events.push(Event {
action: IlmAction::DeleteAction,
rule_id: rule.id.clone().expect("err!"),
due: Some(date0),
noncurrent_days: 0,
newer_noncurrent_versions: 0,
storage_class: "".into(),
});
}
} else if let Some(days) = expiration.days {
let expected_expiry: OffsetDateTime = expected_expiry_time(obj.mod_time.unwrap(), days);
info!(
"eval_inner: expiration check - days={}, obj_time={:?}, expiry_time={:?}, now={:?}, should_expire={}",
days,
obj.mod_time.expect("err!"),
expected_expiry,
now,
now.unix_timestamp() > expected_expiry.unix_timestamp()
);
if now.unix_timestamp() >= expected_expiry.unix_timestamp() {
info!("eval_inner: object should expire, adding DeleteAction");
let mut event = Event {
action: IlmAction::DeleteAction,
rule_id: rule.id.clone().expect("err!"),
due: Some(expected_expiry),
noncurrent_days: 0,
newer_noncurrent_versions: 0,
storage_class: "".into(),
};
/*if rule.expiration.expect("err!").delete_all.val {
event.action = IlmAction::DeleteAllVersionsAction
}*/
events.push(event);
}
} else {
info!("eval_inner: expiration.days is None");
}
} else {
info!("eval_inner: rule.expiration is None");
}
if obj.transition_status != TRANSITION_COMPLETE {
if let Some(transition) = rule.transitions.as_ref().and_then(|transitions| transitions.first()) {
    if let Some(due0) = transition.next_due(obj) {
        if now.unix_timestamp() == 0 || now.unix_timestamp() > due0.unix_timestamp() {
            events.push(Event {
                action: IlmAction::TransitionAction,
                rule_id: rule.id.clone().expect("err!"),
                due: Some(due0),
                storage_class: transition.storage_class.clone().expect("err!").as_str().to_string(),
                noncurrent_days: 0,
                newer_noncurrent_versions: 0,
            });
        }
    }
}
}
}
}
}
if !events.is_empty() {
events.sort_by(|a, b| {
if now.unix_timestamp() > a.due.expect("err!").unix_timestamp()
&& now.unix_timestamp() > b.due.expect("err").unix_timestamp()
|| a.due.expect("err").unix_timestamp() == b.due.expect("err").unix_timestamp()
{
match a.action {
IlmAction::DeleteAllVersionsAction
| IlmAction::DelMarkerDeleteAllVersionsAction
| IlmAction::DeleteAction
| IlmAction::DeleteVersionAction => {
return Ordering::Less;
}
_ => (),
}
match b.action {
IlmAction::DeleteAllVersionsAction
| IlmAction::DelMarkerDeleteAllVersionsAction
| IlmAction::DeleteAction
| IlmAction::DeleteVersionAction => {
return Ordering::Greater;
}
_ => (),
}
return Ordering::Less;
}
if a.due.expect("err").unix_timestamp() < b.due.expect("err").unix_timestamp() {
return Ordering::Less;
}
return Ordering::Greater;
});
return events[0].clone();
}
Event::default()
}
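The core of the days-based expiration above is a timestamp comparison: an object becomes due once `now` has passed its modification time pushed forward by the rule's `days`. A minimal, self-contained sketch of that comparison (assuming a plain `mod_time + days` expiry; the real `expected_expiry_time` helper may round differently, e.g. to a day boundary):

```rust
use time::{Duration, OffsetDateTime};

// Sketch only: approximates expected_expiry_time as mod_time + days.
fn should_expire(mod_time: OffsetDateTime, days: i32, now: OffsetDateTime) -> bool {
    let expected_expiry = mod_time + Duration::days(days as i64);
    now.unix_timestamp() >= expected_expiry.unix_timestamp()
}

fn main() {
    let now = OffsetDateTime::now_utc();
    // Written 10 days ago under a 7-day rule: due for a DeleteAction.
    assert!(should_expire(now - Duration::days(10), 7, now));
    // Written yesterday under a 7-day rule: kept.
    assert!(!should_expire(now - Duration::days(1), 7, now));
    println!("expiry comparison sketch passed");
}
```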

View File

@@ -12,10 +12,18 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use async_trait::async_trait;
use rustfs_ahm::scanner::{Scanner, data_scanner::ScannerConfig};
use rustfs_ecstore::{
bucket::metadata::BUCKET_LIFECYCLE_CONFIG,
bucket::metadata_sys,
bucket::{
metadata::BUCKET_LIFECYCLE_CONFIG,
metadata_sys,
replication::{
DeletedObjectReplicationInfo, DynReplicationPool, GLOBAL_REPLICATION_POOL, ReplicationPoolTrait, ReplicationPriority,
},
target::{BucketTarget, BucketTargetType, BucketTargets},
utils::serialize,
},
disk::endpoint::Endpoint,
endpoints::{EndpointServerPools, Endpoints, PoolEndpoints},
global::GLOBAL_TierConfigMgr,
@@ -23,18 +31,27 @@ use rustfs_ecstore::{
store_api::{MakeBucketOptions, ObjectIO, ObjectOptions, PutObjReader, StorageAPI},
tier::tier_config::{TierConfig, TierMinIO, TierType},
};
use rustfs_filemeta::{ReplicateObjectInfo, ReplicationStatusType};
use rustfs_utils::http::headers::{AMZ_BUCKET_REPLICATION_STATUS, RESERVED_METADATA_PREFIX_LOWER};
use s3s::dto::{
BucketVersioningStatus, Destination, ExistingObjectReplication, ExistingObjectReplicationStatus, ReplicationConfiguration,
ReplicationRule, ReplicationRuleStatus, VersioningConfiguration,
};
use serial_test::serial;
use std::{
path::PathBuf,
sync::{Arc, Once, OnceLock},
time::Duration,
};
use time::{Duration as TimeDuration, OffsetDateTime};
use tokio::fs;
use tokio::sync::Mutex;
use tokio_util::sync::CancellationToken;
use tracing::info;
static GLOBAL_ENV: OnceLock<(Vec<PathBuf>, Arc<ECStore>)> = OnceLock::new();
static INIT: Once = Once::new();
const TEST_REPLICATION_TARGET_ARN: &str = "arn:aws:s3:::rustfs-lifecycle-replication-test";
fn init_tracing() {
INIT.call_once(|| {
@@ -159,6 +176,167 @@ async fn upload_test_object(ecstore: &Arc<ECStore>, bucket: &str, object: &str,
info!("Uploaded test object: {}/{} ({} bytes)", bucket, object, object_info.size);
}
#[derive(Debug, Default)]
struct RecordingReplicationPool {
replica_tasks: Mutex<Vec<ReplicateObjectInfo>>,
delete_tasks: Mutex<Vec<DeletedObjectReplicationInfo>>,
}
impl RecordingReplicationPool {
async fn take_replica_tasks(&self) -> Vec<ReplicateObjectInfo> {
let mut guard = self.replica_tasks.lock().await;
guard.drain(..).collect()
}
}
#[async_trait]
impl ReplicationPoolTrait for RecordingReplicationPool {
async fn queue_replica_task(&self, ri: ReplicateObjectInfo) {
self.replica_tasks.lock().await.push(ri);
}
async fn queue_replica_delete_task(&self, ri: DeletedObjectReplicationInfo) {
self.delete_tasks.lock().await.push(ri);
}
async fn resize(&self, _priority: ReplicationPriority, _max_workers: usize, _max_l_workers: usize) {}
async fn init_resync(
self: Arc<Self>,
_cancellation_token: CancellationToken,
_buckets: Vec<String>,
) -> Result<(), rustfs_ecstore::error::Error> {
Ok(())
}
}
async fn ensure_test_replication_pool() -> Arc<RecordingReplicationPool> {
static POOL: OnceLock<Arc<RecordingReplicationPool>> = OnceLock::new();
if let Some(existing) = POOL.get() {
existing.replica_tasks.lock().await.clear();
existing.delete_tasks.lock().await.clear();
return existing.clone();
}
let pool = Arc::new(RecordingReplicationPool::default());
let dyn_pool: Arc<DynReplicationPool> = pool.clone();
GLOBAL_REPLICATION_POOL
.get_or_init(|| {
let pool_clone = dyn_pool.clone();
async move { pool_clone }
})
.await;
let _ = POOL.set(pool.clone());
pool
}
async fn configure_bucket_replication(bucket: &str) {
let meta = metadata_sys::get(bucket)
.await
.expect("bucket metadata should exist for replication configuration");
let mut metadata = (*meta).clone();
let replication_rule = ReplicationRule {
delete_marker_replication: None,
delete_replication: None,
destination: Destination {
access_control_translation: None,
account: None,
bucket: TEST_REPLICATION_TARGET_ARN.to_string(),
encryption_configuration: None,
metrics: None,
replication_time: None,
storage_class: None,
},
existing_object_replication: Some(ExistingObjectReplication {
status: ExistingObjectReplicationStatus::from_static(ExistingObjectReplicationStatus::ENABLED),
}),
filter: None,
id: Some("lifecycle-replication-rule".to_string()),
prefix: Some(String::new()),
priority: Some(1),
source_selection_criteria: None,
status: ReplicationRuleStatus::from_static(ReplicationRuleStatus::ENABLED),
};
let replication_cfg = ReplicationConfiguration {
role: TEST_REPLICATION_TARGET_ARN.to_string(),
rules: vec![replication_rule],
};
let bucket_targets = BucketTargets {
targets: vec![BucketTarget {
source_bucket: bucket.to_string(),
endpoint: "replication.invalid".to_string(),
target_bucket: "replication-target".to_string(),
arn: TEST_REPLICATION_TARGET_ARN.to_string(),
target_type: BucketTargetType::ReplicationService,
..Default::default()
}],
};
metadata.replication_config = Some(replication_cfg.clone());
metadata.replication_config_xml = serialize(&replication_cfg).expect("serialize replication config");
metadata.bucket_target_config = Some(bucket_targets.clone());
metadata.bucket_targets_config_json = serde_json::to_vec(&bucket_targets).expect("serialize bucket targets");
let versioning_cfg = VersioningConfiguration {
status: Some(BucketVersioningStatus::from_static(BucketVersioningStatus::ENABLED)),
..Default::default()
};
metadata.versioning_config = Some(versioning_cfg.clone());
metadata.versioning_config_xml = serialize(&versioning_cfg).expect("serialize versioning config");
metadata_sys::set_bucket_metadata(bucket.to_string(), metadata)
.await
.expect("failed to persist bucket metadata with replication config");
}
async fn upload_object_with_replication_status(
ecstore: &Arc<ECStore>,
bucket: &str,
object: &str,
status: ReplicationStatusType,
) {
let mut reader = PutObjReader::from_vec(b"replication-state".to_vec());
let mut opts = ObjectOptions::default();
opts.user_defined
.insert(AMZ_BUCKET_REPLICATION_STATUS.to_string(), status.as_str().to_string());
let internal_key = format!("{}replication-status", RESERVED_METADATA_PREFIX_LOWER);
opts.user_defined
.insert(internal_key, format!("{}={};", TEST_REPLICATION_TARGET_ARN, status.as_str()));
(**ecstore)
.put_object(bucket, object, &mut reader, &opts)
.await
.expect("failed to upload replication test object");
}
async fn upload_object_with_retention(ecstore: &Arc<ECStore>, bucket: &str, object: &str, data: &[u8], retain_for: Duration) {
use s3s::header::{X_AMZ_OBJECT_LOCK_MODE, X_AMZ_OBJECT_LOCK_RETAIN_UNTIL_DATE};
use time::format_description::well_known::Rfc3339;
let mut reader = PutObjReader::from_vec(data.to_vec());
let mut opts = ObjectOptions::default();
let retain_duration = TimeDuration::try_from(retain_for).unwrap_or_else(|_| TimeDuration::seconds(0));
let retain_until = OffsetDateTime::now_utc() + retain_duration;
let retain_until_str = retain_until.format(&Rfc3339).expect("format retain date");
let lock_mode_key = X_AMZ_OBJECT_LOCK_MODE.as_str().to_string();
let lock_mode_lower = lock_mode_key.to_lowercase();
opts.user_defined.insert(lock_mode_lower, "GOVERNANCE".to_string());
opts.user_defined.insert(lock_mode_key, "GOVERNANCE".to_string());
let retain_key = X_AMZ_OBJECT_LOCK_RETAIN_UNTIL_DATE.as_str().to_string();
let retain_key_lower = retain_key.to_lowercase();
opts.user_defined.insert(retain_key_lower, retain_until_str.clone());
opts.user_defined.insert(retain_key, retain_until_str);
(**ecstore)
.put_object(bucket, object, &mut reader, &opts)
.await
.expect("Failed to upload retained object");
}
/// Test helper: Set bucket lifecycle configuration
async fn set_bucket_lifecycle(bucket_name: &str) -> Result<(), Box<dyn std::error::Error>> {
// Create a simple lifecycle configuration XML with 0 days expiry for immediate testing
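The helper's body is elided in this diff; a lifecycle document of the kind the comment describes might look roughly like the following (the rule ID, prefix, and exact XML shape are illustrative assumptions, not the helper's actual output):

```rust
// Hypothetical example of a 0-day expiration rule scoped to the "test/" prefix used by
// these tests; not the literal XML built by set_bucket_lifecycle.
fn sample_lifecycle_xml() -> &'static str {
    r#"<LifecycleConfiguration>
  <Rule>
    <ID>expire-test-prefix</ID>
    <Filter><Prefix>test/</Prefix></Filter>
    <Status>Enabled</Status>
    <Expiration><Days>0</Days></Expiration>
  </Rule>
</LifecycleConfiguration>"#
}
```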
@@ -263,16 +441,6 @@ async fn create_test_tier(server: u32) {
region: "".to_string(),
..Default::default()
})
} else if server == 2 {
Some(TierMinIO {
access_key: "minioadmin".to_string(),
secret_key: "minioadmin".to_string(),
bucket: "mblock2".to_string(),
endpoint: "http://m1ddns.pvtool.com:9020".to_string(),
prefix: format!("mypre{}/", uuid::Uuid::new_v4()),
region: "".to_string(),
..Default::default()
})
} else {
Some(TierMinIO {
access_key: "minioadmin".to_string(),
@@ -612,7 +780,7 @@ mod serial_tests {
async fn test_lifecycle_transition_basic() {
let (_disk_paths, ecstore) = setup_test_env().await;
create_test_tier(2).await;
create_test_tier(1).await;
// Create test bucket and object
let suffix = uuid::Uuid::new_v4().simple().to_string();
@@ -620,15 +788,8 @@ mod serial_tests {
let object_name = "test/object.txt"; // Match the lifecycle rule prefix "test/"
let test_data = b"Hello, this is test data for lifecycle expiry!";
create_test_lock_bucket(&ecstore, bucket_name.as_str()).await;
upload_test_object(
&ecstore,
bucket_name.as_str(),
object_name,
b"Hello, this is test data for lifecycle expiry 1111-11111111-1111 !",
)
.await;
//create_test_bucket(&ecstore, bucket_name.as_str()).await;
//create_test_lock_bucket(&ecstore, bucket_name.as_str()).await;
create_test_bucket(&ecstore, bucket_name.as_str()).await;
upload_test_object(&ecstore, bucket_name.as_str(), object_name, test_data).await;
// Verify object exists initially
@@ -711,4 +872,127 @@ mod serial_tests {
println!("Lifecycle transition basic test completed");
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
#[serial]
async fn test_lifecycle_respects_object_lock_retention() {
let (_disk_paths, ecstore) = setup_test_env().await;
let suffix = uuid::Uuid::new_v4().simple().to_string();
let bucket_name = format!("test-lc-lock-retention-{}", &suffix[..8]);
let object_name = "test/locked-object.txt";
let test_data = b"retained payload";
create_test_lock_bucket(&ecstore, bucket_name.as_str()).await;
upload_object_with_retention(&ecstore, bucket_name.as_str(), object_name, test_data, Duration::from_secs(3600)).await;
assert!(
object_exists(&ecstore, bucket_name.as_str(), object_name).await,
"Object should exist before lifecycle processing"
);
set_bucket_lifecycle(bucket_name.as_str())
.await
.expect("Failed to set lifecycle configuration");
let scanner_config = ScannerConfig {
scan_interval: Duration::from_millis(100),
deep_scan_interval: Duration::from_millis(500),
max_concurrent_scans: 1,
..Default::default()
};
let scanner = Scanner::new(Some(scanner_config), None);
scanner.start().await.expect("Failed to start scanner");
for _ in 0..3 {
scanner.scan_cycle().await.expect("scan cycle should succeed");
tokio::time::sleep(Duration::from_millis(200)).await;
}
assert!(
object_exists(&ecstore, bucket_name.as_str(), object_name).await,
"Object with active retention should not be deleted by lifecycle"
);
}
#[tokio::test(flavor = "multi_thread", worker_threads = 4)]
#[serial]
async fn test_lifecycle_triggers_replication_heal_for_lagging_and_failed_objects() {
let (_disk_paths, ecstore) = setup_test_env().await;
let suffix = uuid::Uuid::new_v4().simple().to_string();
let bucket_name = format!("lc-replication-{}", &suffix[..8]);
create_test_bucket(&ecstore, bucket_name.as_str()).await;
configure_bucket_replication(bucket_name.as_str()).await;
let replication_pool = ensure_test_replication_pool().await;
upload_object_with_replication_status(
&ecstore,
bucket_name.as_str(),
"test/lagging-pending",
ReplicationStatusType::Pending,
)
.await;
upload_object_with_replication_status(
&ecstore,
bucket_name.as_str(),
"test/failed-object",
ReplicationStatusType::Failed,
)
.await;
let scanner_config = ScannerConfig {
scan_interval: Duration::from_millis(100),
deep_scan_interval: Duration::from_millis(500),
max_concurrent_scans: 2,
replication_pending_grace: Duration::from_secs(0),
..Default::default()
};
let scanner = Scanner::new(Some(scanner_config), None);
scanner.scan_cycle().await.expect("scan cycle should complete");
tokio::time::sleep(Duration::from_millis(200)).await;
let replica_tasks = replication_pool.take_replica_tasks().await;
assert!(
replica_tasks.iter().any(|t| t.name == "test/lagging-pending"),
"Pending object should be enqueued for replication heal: {:?}",
replica_tasks
);
assert!(
replica_tasks.iter().any(|t| t.name == "test/failed-object"),
"Failed object should be enqueued for replication heal: {:?}",
replica_tasks
);
let metrics = scanner.get_metrics().await;
assert_eq!(
metrics.replication_tasks_queued,
replica_tasks.len() as u64,
"Replication tasks queued metric should match recorded tasks"
);
assert!(
metrics.replication_pending_objects >= 1,
"Pending replication metric should be incremented"
);
assert!(metrics.replication_failed_objects >= 1, "Failed replication metric should be incremented");
assert!(
metrics.replication_lagging_objects >= 1,
"Lagging replication metric should track pending object beyond grace"
);
let bucket_metrics = metrics
.bucket_metrics
.get(&bucket_name)
.expect("bucket metrics should contain replication counters");
assert!(
bucket_metrics.replication_pending >= 1 && bucket_metrics.replication_failed >= 1,
"Bucket-level replication metrics should reflect observed statuses"
);
assert_eq!(
bucket_metrics.replication_tasks_queued,
replica_tasks.len() as u64,
"Bucket-level queued counter should match enqueued tasks"
);
}
}

View File

@@ -29,7 +29,6 @@ categories = ["web-programming", "development-tools", "asynchronous", "api-bindi
rustfs-targets = { workspace = true }
rustfs-config = { workspace = true, features = ["audit", "constants"] }
rustfs-ecstore = { workspace = true }
async-trait = { workspace = true }
chrono = { workspace = true }
const-str = { workspace = true }
futures = { workspace = true }

View File

@@ -1,224 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::AuditEntry;
use async_trait::async_trait;
use hashbrown::HashSet;
use rumqttc::QoS;
use rustfs_config::audit::{AUDIT_MQTT_KEYS, AUDIT_WEBHOOK_KEYS, ENV_AUDIT_MQTT_KEYS, ENV_AUDIT_WEBHOOK_KEYS};
use rustfs_config::{
AUDIT_DEFAULT_DIR, DEFAULT_LIMIT, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR,
MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, WEBHOOK_CLIENT_CERT,
WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_QUEUE_DIR, WEBHOOK_QUEUE_LIMIT,
};
use rustfs_ecstore::config::KVS;
use rustfs_targets::{
Target,
error::TargetError,
target::{mqtt::MQTTArgs, webhook::WebhookArgs},
};
use std::time::Duration;
use tracing::{debug, warn};
use url::Url;
/// Trait for creating targets from configuration
#[async_trait]
pub trait TargetFactory: Send + Sync {
/// Creates a target from configuration
async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError>;
/// Validates target configuration
fn validate_config(&self, id: &str, config: &KVS) -> Result<(), TargetError>;
/// Returns a set of valid configuration field names for this target type.
/// This is used to filter environment variables.
fn get_valid_fields(&self) -> HashSet<String>;
/// Returns a set of valid configuration env field names for this target type.
/// This is used to filter environment variables.
fn get_valid_env_fields(&self) -> HashSet<String>;
}
/// Factory for creating Webhook targets
pub struct WebhookTargetFactory;
#[async_trait]
impl TargetFactory for WebhookTargetFactory {
async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError> {
// All config values are now read directly from the merged `config` KVS.
let endpoint = config
.lookup(WEBHOOK_ENDPOINT)
.ok_or_else(|| TargetError::Configuration("Missing webhook endpoint".to_string()))?;
let parsed_endpoint = endpoint.trim();
let endpoint_url = Url::parse(parsed_endpoint)
.map_err(|e| TargetError::Configuration(format!("Invalid endpoint URL: {e} (value: '{parsed_endpoint}')")))?;
let args = WebhookArgs {
enable: true, // If we are here, it's already enabled.
endpoint: endpoint_url,
auth_token: config.lookup(WEBHOOK_AUTH_TOKEN).unwrap_or_default(),
queue_dir: config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(AUDIT_DEFAULT_DIR.to_string()),
queue_limit: config
.lookup(WEBHOOK_QUEUE_LIMIT)
.and_then(|v| v.parse::<u64>().ok())
.unwrap_or(DEFAULT_LIMIT),
client_cert: config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default(),
client_key: config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default(),
target_type: rustfs_targets::target::TargetType::AuditLog,
};
let target = rustfs_targets::target::webhook::WebhookTarget::new(id, args)?;
Ok(Box::new(target))
}
fn validate_config(&self, _id: &str, config: &KVS) -> Result<(), TargetError> {
// Validation also uses the merged `config` KVS directly.
let endpoint = config
.lookup(WEBHOOK_ENDPOINT)
.ok_or_else(|| TargetError::Configuration("Missing webhook endpoint".to_string()))?;
debug!("endpoint: {}", endpoint);
let parsed_endpoint = endpoint.trim();
Url::parse(parsed_endpoint)
.map_err(|e| TargetError::Configuration(format!("Invalid endpoint URL: {e} (value: '{parsed_endpoint}')")))?;
let client_cert = config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default();
let client_key = config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default();
if client_cert.is_empty() != client_key.is_empty() {
return Err(TargetError::Configuration(
"Both client_cert and client_key must be specified together".to_string(),
));
}
let queue_dir = config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or(AUDIT_DEFAULT_DIR.to_string());
if !queue_dir.is_empty() && !std::path::Path::new(&queue_dir).is_absolute() {
return Err(TargetError::Configuration("Webhook queue directory must be an absolute path".to_string()));
}
Ok(())
}
fn get_valid_fields(&self) -> HashSet<String> {
AUDIT_WEBHOOK_KEYS.iter().map(|s| s.to_string()).collect()
}
fn get_valid_env_fields(&self) -> HashSet<String> {
ENV_AUDIT_WEBHOOK_KEYS.iter().map(|s| s.to_string()).collect()
}
}
/// Factory for creating MQTT targets
pub struct MQTTTargetFactory;
#[async_trait]
impl TargetFactory for MQTTTargetFactory {
async fn create_target(&self, id: String, config: &KVS) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError> {
let broker = config
.lookup(MQTT_BROKER)
.ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?;
let broker_url = Url::parse(&broker)
.map_err(|e| TargetError::Configuration(format!("Invalid broker URL: {e} (value: '{broker}')")))?;
let topic = config
.lookup(MQTT_TOPIC)
.ok_or_else(|| TargetError::Configuration("Missing MQTT topic".to_string()))?;
let args = MQTTArgs {
enable: true, // Assumed enabled.
broker: broker_url,
topic,
qos: config
.lookup(MQTT_QOS)
.and_then(|v| v.parse::<u8>().ok())
.map(|q| match q {
0 => QoS::AtMostOnce,
1 => QoS::AtLeastOnce,
2 => QoS::ExactlyOnce,
_ => QoS::AtLeastOnce,
})
.unwrap_or(QoS::AtLeastOnce),
username: config.lookup(MQTT_USERNAME).unwrap_or_default(),
password: config.lookup(MQTT_PASSWORD).unwrap_or_default(),
max_reconnect_interval: config
.lookup(MQTT_RECONNECT_INTERVAL)
.and_then(|v| v.parse::<u64>().ok())
.map(Duration::from_secs)
.unwrap_or_else(|| Duration::from_secs(5)),
keep_alive: config
.lookup(MQTT_KEEP_ALIVE_INTERVAL)
.and_then(|v| v.parse::<u64>().ok())
.map(Duration::from_secs)
.unwrap_or_else(|| Duration::from_secs(30)),
queue_dir: config.lookup(MQTT_QUEUE_DIR).unwrap_or(AUDIT_DEFAULT_DIR.to_string()),
queue_limit: config
.lookup(MQTT_QUEUE_LIMIT)
.and_then(|v| v.parse::<u64>().ok())
.unwrap_or(DEFAULT_LIMIT),
target_type: rustfs_targets::target::TargetType::AuditLog,
};
let target = rustfs_targets::target::mqtt::MQTTTarget::new(id, args)?;
Ok(Box::new(target))
}
fn validate_config(&self, _id: &str, config: &KVS) -> Result<(), TargetError> {
let broker = config
.lookup(MQTT_BROKER)
.ok_or_else(|| TargetError::Configuration("Missing MQTT broker".to_string()))?;
let url = Url::parse(&broker)
.map_err(|e| TargetError::Configuration(format!("Invalid broker URL: {e} (value: '{broker}')")))?;
match url.scheme() {
"tcp" | "ssl" | "ws" | "wss" | "mqtt" | "mqtts" => {}
_ => {
return Err(TargetError::Configuration("Unsupported broker URL scheme".to_string()));
}
}
if config.lookup(MQTT_TOPIC).is_none() {
return Err(TargetError::Configuration("Missing MQTT topic".to_string()));
}
if let Some(qos_str) = config.lookup(MQTT_QOS) {
let qos = qos_str
.parse::<u8>()
.map_err(|_| TargetError::Configuration("Invalid QoS value".to_string()))?;
if qos > 2 {
return Err(TargetError::Configuration("QoS must be 0, 1, or 2".to_string()));
}
}
let queue_dir = config.lookup(MQTT_QUEUE_DIR).unwrap_or_default();
if !queue_dir.is_empty() {
if !std::path::Path::new(&queue_dir).is_absolute() {
return Err(TargetError::Configuration("MQTT queue directory must be an absolute path".to_string()));
}
if let Some(qos_str) = config.lookup(MQTT_QOS) {
if qos_str == "0" {
warn!("Using queue_dir with QoS 0 may result in event loss");
}
}
}
Ok(())
}
fn get_valid_fields(&self) -> HashSet<String> {
AUDIT_MQTT_KEYS.iter().map(|s| s.to_string()).collect()
}
fn get_valid_env_fields(&self) -> HashSet<String> {
ENV_AUDIT_MQTT_KEYS.iter().map(|s| s.to_string()).collect()
}
}

View File

@@ -20,7 +20,6 @@
pub mod entity;
pub mod error;
pub mod factory;
pub mod global;
pub mod observability;
pub mod registry;

View File

@@ -12,26 +12,29 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::{
AuditEntry, AuditError, AuditResult,
factory::{MQTTTargetFactory, TargetFactory, WebhookTargetFactory},
};
use futures::StreamExt;
use futures::stream::FuturesUnordered;
use crate::{AuditEntry, AuditError, AuditResult};
use futures::{StreamExt, stream::FuturesUnordered};
use hashbrown::{HashMap, HashSet};
use rustfs_config::{DEFAULT_DELIMITER, ENABLE_KEY, ENV_PREFIX, EnableState, audit::AUDIT_ROUTE_PREFIX};
use rustfs_config::{
DEFAULT_DELIMITER, ENABLE_KEY, ENV_PREFIX, MQTT_BROKER, MQTT_KEEP_ALIVE_INTERVAL, MQTT_PASSWORD, MQTT_QOS, MQTT_QUEUE_DIR,
MQTT_QUEUE_LIMIT, MQTT_RECONNECT_INTERVAL, MQTT_TOPIC, MQTT_USERNAME, WEBHOOK_AUTH_TOKEN, WEBHOOK_BATCH_SIZE,
WEBHOOK_CLIENT_CERT, WEBHOOK_CLIENT_KEY, WEBHOOK_ENDPOINT, WEBHOOK_HTTP_TIMEOUT, WEBHOOK_MAX_RETRY, WEBHOOK_QUEUE_DIR,
WEBHOOK_QUEUE_LIMIT, WEBHOOK_RETRY_INTERVAL, audit::AUDIT_ROUTE_PREFIX,
};
use rustfs_ecstore::config::{Config, KVS};
use rustfs_targets::{Target, TargetError, target::ChannelTargetType};
use std::str::FromStr;
use rustfs_targets::{
Target, TargetError,
target::{ChannelTargetType, TargetType, mqtt::MQTTArgs, webhook::WebhookArgs},
};
use std::sync::Arc;
use std::time::Duration;
use tracing::{debug, error, info, warn};
use url::Url;
/// Registry for managing audit targets
pub struct AuditRegistry {
/// Storage for created targets
targets: HashMap<String, Box<dyn Target<AuditEntry> + Send + Sync>>,
/// Factories for creating targets
factories: HashMap<String, Box<dyn TargetFactory>>,
}
impl Default for AuditRegistry {
@@ -43,207 +46,162 @@ impl Default for AuditRegistry {
impl AuditRegistry {
/// Creates a new AuditRegistry
pub fn new() -> Self {
let mut registry = AuditRegistry {
factories: HashMap::new(),
targets: HashMap::new(),
};
// Register built-in factories
registry.register(ChannelTargetType::Webhook.as_str(), Box::new(WebhookTargetFactory));
registry.register(ChannelTargetType::Mqtt.as_str(), Box::new(MQTTTargetFactory));
registry
Self { targets: HashMap::new() }
}
/// Registers a new factory for a target type
///
/// # Arguments
/// * `target_type` - The type of the target (e.g., "webhook", "mqtt").
/// * `factory` - The factory instance to create targets of this type.
pub fn register(&mut self, target_type: &str, factory: Box<dyn TargetFactory>) {
self.factories.insert(target_type.to_string(), factory);
}
/// Creates a target of the specified type with the given ID and configuration
///
/// # Arguments
/// * `target_type` - The type of the target (e.g., "webhook", "mqtt").
/// * `id` - The identifier for the target instance.
/// * `config` - The configuration key-value store for the target.
///
/// # Returns
/// * `Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError>` - The created target or an error.
pub async fn create_target(
&self,
target_type: &str,
id: String,
config: &KVS,
) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError> {
let factory = self
.factories
.get(target_type)
.ok_or_else(|| TargetError::Configuration(format!("Unknown target type: {target_type}")))?;
// Validate configuration before creating target
factory.validate_config(&id, config)?;
// Create target
factory.create_target(id, config).await
}
/// Creates all targets from a configuration
/// Create all notification targets from system configuration and environment variables.
/// Creates all audit targets from system configuration and environment variables.
/// This method processes the creation of each target concurrently as follows:
/// 1. Iterate through all registered target types (e.g. webhooks, mqtt).
/// 2. For each type, resolve its configuration in the configuration file and environment variables.
/// 1. Iterate through supported target types (webhook, mqtt).
/// 2. For each type, resolve its configuration from file and environment variables.
/// 3. Identify all target instance IDs that need to be created.
/// 4. Combine the default configuration, file configuration, and environment variable configuration for each instance.
/// 5. If the instance is enabled, create an asynchronous task for it to instantiate.
/// 6. Concurrency executes all creation tasks and collects results.
pub async fn create_audit_targets_from_config(
&self,
/// 4. Merge configurations with precedence: ENV > file instance > file default (see the sketch after this list).
/// 5. Create async tasks for enabled instances.
/// 6. Execute tasks concurrently and collect successful targets.
/// 7. Persist successful configurations back to system storage.
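/// A hypothetical illustration of the merge for a webhook instance `primary` (the values are
/// invented for this sketch; the env-variable name follows the `RUSTFS_AUDIT_WEBHOOK_` prefix
/// convention used below):
///
/// ```text
/// file default  (`_`)      : endpoint = http://default.example:3020/hook
/// file instance (`primary`): endpoint = http://file.example:3020/hook
/// environment              : RUSTFS_AUDIT_WEBHOOK_ENDPOINT_PRIMARY=http://env.example:3020/hook
/// merged result (`primary`): endpoint = http://env.example:3020/hook   (ENV wins)
/// ```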
pub async fn create_targets_from_config(
&mut self,
config: &Config,
) -> AuditResult<Vec<Box<dyn Target<AuditEntry> + Send + Sync>>> {
// Collect only environment variables with the relevant prefix to reduce memory usage
let all_env: Vec<(String, String)> = std::env::vars().filter(|(key, _)| key.starts_with(ENV_PREFIX)).collect();
// A collection of asynchronous tasks for concurrently executing target creation
let mut tasks = FuturesUnordered::new();
// let final_config = config.clone(); // Clone a configuration for aggregating the final result
// let final_config = config.clone();
// Record the defaults for each segment so that the segment can eventually be rebuilt
let mut section_defaults: HashMap<String, KVS> = HashMap::new();
// 1. Traverse all registered plants and process them by target type
for (target_type, factory) in &self.factories {
tracing::Span::current().record("target_type", target_type.as_str());
info!("Start working on target types...");
// Supported target types for audit
let target_types = vec![ChannelTargetType::Webhook.as_str(), ChannelTargetType::Mqtt.as_str()];
// 1. Traverse all target types and process them
for target_type in target_types {
let span = tracing::Span::current();
span.record("target_type", target_type);
info!(target_type = %target_type, "Starting audit target type processing");
// 2. Prepare the configuration source
// 2.1. Get the configuration segment in the file, e.g. 'audit_webhook'
let section_name = format!("{AUDIT_ROUTE_PREFIX}{target_type}").to_lowercase();
let file_configs = config.0.get(&section_name).cloned().unwrap_or_default();
// 2.2. Get the default configuration for that type
let default_cfg = file_configs.get(DEFAULT_DELIMITER).cloned().unwrap_or_default();
debug!(?default_cfg, "Get the default configuration");
debug!(?default_cfg, "Retrieved default configuration");
// Save defaults for eventual write back
section_defaults.insert(section_name.clone(), default_cfg.clone());
// *** Optimization point 1: Get all legitimate fields of the current target type ***
let valid_fields = factory.get_valid_fields();
debug!(?valid_fields, "Get the legitimate configuration fields");
// Get valid fields for the target type
let valid_fields = match target_type {
"webhook" => get_webhook_valid_fields(),
"mqtt" => get_mqtt_valid_fields(),
_ => {
warn!(target_type = %target_type, "Unknown target type, skipping");
continue;
}
};
debug!(?valid_fields, "Retrieved valid configuration fields");
// 3. Resolve instance IDs and configuration overrides from environment variables
let mut instance_ids_from_env = HashSet::new();
// 3.1. Instance discovery: Based on the '..._ENABLE_INSTANCEID' format
let enable_prefix =
format!("{ENV_PREFIX}{AUDIT_ROUTE_PREFIX}{target_type}{DEFAULT_DELIMITER}{ENABLE_KEY}{DEFAULT_DELIMITER}")
.to_uppercase();
for (key, value) in &all_env {
if EnableState::from_str(value).ok().map(|s| s.is_enabled()).unwrap_or(false) {
if let Some(id) = key.strip_prefix(&enable_prefix) {
if !id.is_empty() {
instance_ids_from_env.insert(id.to_lowercase());
}
}
}
}
// 3.2. Parse all relevant environment variable configurations
// 3.2.1. Build environment variable prefixes such as 'RUSTFS_AUDIT_WEBHOOK_'
let env_prefix = format!("{ENV_PREFIX}{AUDIT_ROUTE_PREFIX}{target_type}{DEFAULT_DELIMITER}").to_uppercase();
// 3.2.2. 'env_overrides' is used to store configurations parsed from environment variables in the format: {instance id -> {field -> value}}
let mut env_overrides: HashMap<String, HashMap<String, String>> = HashMap::new();
for (key, value) in &all_env {
if let Some(rest) = key.strip_prefix(&env_prefix) {
// Use rsplitn to split from the right side to properly extract the INSTANCE_ID at the end
// Format: <FIELD_NAME>_<INSTANCE_ID> or <FIELD_NAME>
let mut parts = rest.rsplitn(2, DEFAULT_DELIMITER);
// The first part from the right is INSTANCE_ID
let instance_id_part = parts.next().unwrap_or(DEFAULT_DELIMITER);
// The remaining part is FIELD_NAME
let field_name_part = parts.next();
for (env_key, env_value) in &all_env {
let audit_prefix = format!("{ENV_PREFIX}{AUDIT_ROUTE_PREFIX}{target_type}").to_uppercase();
if !env_key.starts_with(&audit_prefix) {
continue;
}
let (field_name, instance_id) = match field_name_part {
// Case 1: The format is <FIELD_NAME>_<INSTANCE_ID>
// e.g., rest = "ENDPOINT_PRIMARY" -> field_name="ENDPOINT", instance_id="PRIMARY"
Some(field) => (field.to_lowercase(), instance_id_part.to_lowercase()),
// Case 2: The format is <FIELD_NAME> (without INSTANCE_ID)
// e.g., rest = "ENABLE" -> field_name="ENABLE", instance_id="" (Universal configuration `_ DEFAULT_DELIMITER`)
None => (instance_id_part.to_lowercase(), DEFAULT_DELIMITER.to_string()),
};
let suffix = &env_key[audit_prefix.len()..];
if suffix.is_empty() {
continue;
}
// *** Optimization point 2: Verify whether the parsed field_name is legal ***
if !field_name.is_empty() && valid_fields.contains(&field_name) {
debug!(
instance_id = %if instance_id.is_empty() { DEFAULT_DELIMITER } else { &instance_id },
%field_name,
%value,
"Parsing to environment variables"
);
env_overrides
.entry(instance_id)
.or_default()
.insert(field_name, value.clone());
// Parse field and instance from suffix (FIELD_INSTANCE or FIELD)
// Only treat an underscore as a field/instance separator when it is not the leading delimiter.
let (field_name, instance_id) = if let Some(last_underscore) = suffix.rfind('_').filter(|&idx| idx > 0) {
let potential_field = &suffix[1..last_underscore]; // Skip leading _
let potential_instance = &suffix[last_underscore + 1..];
// Check if the part before the last underscore is a valid field
if valid_fields.contains(&potential_field.to_lowercase()) {
(potential_field.to_lowercase(), potential_instance.to_lowercase())
} else {
// Ignore illegal field names
warn!(
field_name = %field_name,
"Ignore environment variable fields, not found in the list of valid fields for target type {}",
target_type
);
// Treat the entire suffix as field name with default instance
(suffix[1..].to_lowercase(), DEFAULT_DELIMITER.to_string())
}
} else {
// No instance suffix after the leading delimiter; treat the remainder as a field with the default instance
(suffix[1..].to_lowercase(), DEFAULT_DELIMITER.to_string())
};
if valid_fields.contains(&field_name) {
if instance_id != DEFAULT_DELIMITER {
instance_ids_from_env.insert(instance_id.clone());
}
env_overrides
.entry(instance_id)
.or_default()
.insert(field_name, env_value.clone());
} else {
debug!(
env_key = %env_key,
field_name = %field_name,
"Ignoring environment variable field not found in valid fields for target type {}",
target_type
);
}
}
debug!(?env_overrides, "Complete the environment variable analysis");
debug!(?env_overrides, "Completed environment variable analysis");
// 4. Determine all instance IDs that need to be processed
let mut all_instance_ids: HashSet<String> =
file_configs.keys().filter(|k| *k != DEFAULT_DELIMITER).cloned().collect();
all_instance_ids.extend(instance_ids_from_env);
debug!(?all_instance_ids, "Determine all instance IDs");
debug!(?all_instance_ids, "Determined all instance IDs");
// 5. Merge configurations and create tasks for each instance
for id in all_instance_ids {
// 5.1. Merge configuration, priority: Environment variables > File instance configuration > File default configuration
// 5.1. Merge configuration, priority: Environment variables > File instance > File default
let mut merged_config = default_cfg.clone();
// Instance-specific configuration in application files
// Apply file instance configuration if available
if let Some(file_instance_cfg) = file_configs.get(&id) {
merged_config.extend(file_instance_cfg.clone());
}
// Application instance-specific environment variable configuration
// Apply environment variable overrides
if let Some(env_instance_cfg) = env_overrides.get(&id) {
// Convert HashMap<String, String> to KVS
let mut kvs_from_env = KVS::new();
for (k, v) in env_instance_cfg {
kvs_from_env.insert(k.clone(), v.clone());
}
merged_config.extend(kvs_from_env);
}
debug!(instance_id = %id, ?merged_config, "Complete configuration merge");
debug!(instance_id = %id, ?merged_config, "Completed configuration merge");
// 5.2. Check if the instance is enabled
let enabled = merged_config
.lookup(ENABLE_KEY)
.map(|v| {
EnableState::from_str(v.as_str())
.ok()
.map(|s| s.is_enabled())
.unwrap_or(false)
})
.map(|v| parse_enable_value(&v))
.unwrap_or(false);
if enabled {
info!(instance_id = %id, "Target is enabled, ready to create a task");
// 5.3. Create asynchronous tasks for enabled instances
let target_type_clone = target_type.clone();
let tid = id.clone();
let merged_config_arc = Arc::new(merged_config);
tasks.push(async move {
let result = factory.create_target(tid.clone(), &merged_config_arc).await;
(target_type_clone, tid, result, Arc::clone(&merged_config_arc))
info!(instance_id = %id, "Creating audit target");
// Create task for concurrent execution
let target_type_clone = target_type.to_string();
let id_clone = id.clone();
let merged_config_arc = Arc::new(merged_config.clone());
let task = tokio::spawn(async move {
let result = create_audit_target(&target_type_clone, &id_clone, &merged_config_arc).await;
(target_type_clone, id_clone, result, merged_config_arc)
});
tasks.push(task);
// Update final config with successful instance
// final_config.0.entry(section_name.clone()).or_default().insert(id, merged_config);
} else {
info!(instance_id = %id, "Skip the disabled target and will be removed from the final configuration");
info!(instance_id = %id, "Skipping disabled audit target, will be removed from final configuration");
// Remove disabled target from final configuration
// final_config.0.entry(section_name.clone()).or_default().remove(&id);
}
@@ -253,28 +211,30 @@ impl AuditRegistry {
// 6. Concurrently execute all creation tasks and collect results
let mut successful_targets = Vec::new();
let mut successful_configs = Vec::new();
while let Some((target_type, id, result, final_config)) = tasks.next().await {
match result {
Ok(target) => {
info!(target_type = %target_type, instance_id = %id, "Create a target successfully");
successful_targets.push(target);
successful_configs.push((target_type, id, final_config));
}
while let Some(task_result) = tasks.next().await {
match task_result {
Ok((target_type, id, result, kvs_arc)) => match result {
Ok(target) => {
info!(target_type = %target_type, instance_id = %id, "Created audit target successfully");
successful_targets.push(target);
successful_configs.push((target_type, id, kvs_arc));
}
Err(e) => {
error!(target_type = %target_type, instance_id = %id, error = %e, "Failed to create audit target");
}
},
Err(e) => {
error!(target_type = %target_type, instance_id = %id, error = %e, "Failed to create a target");
error!(error = %e, "Task execution failed");
}
}
}
// 7. Aggregate new configuration and write back to system configuration
// Rebuild each section from its default entry plus the successfully created instances, then overwrite on write-back so deleted or disabled instances cannot be "resurrected".
if !successful_configs.is_empty() || !section_defaults.is_empty() {
info!(
"Prepare to update {} successfully created target configurations to the system configuration...",
successful_configs.len()
);
info!("Prepare to rebuild and save target configurations to the system configuration...");
// Aggregate successful instances into segments
let mut successes_by_section: HashMap<String, HashMap<String, KVS>> = HashMap::new();
for (target_type, id, kvs) in successful_configs {
let section_name = format!("{AUDIT_ROUTE_PREFIX}{target_type}").to_lowercase();
successes_by_section
@@ -284,99 +244,76 @@ impl AuditRegistry {
}
let mut new_config = config.clone();
// Collect every section that has a default entry or at least one successfully created instance.
let mut sections: HashSet<String> = HashSet::new();
sections.extend(section_defaults.keys().cloned());
sections.extend(successes_by_section.keys().cloned());
for section in sections {
for section_name in sections {
let mut section_map: std::collections::HashMap<String, KVS> = std::collections::HashMap::new();
// Add default item
if let Some(default_kvs) = section_defaults.get(&section) {
if !default_kvs.is_empty() {
section_map.insert(DEFAULT_DELIMITER.to_string(), default_kvs.clone());
// The default entry (if present) is written back to `_`
if let Some(default_cfg) = section_defaults.get(&section_name) {
if !default_cfg.is_empty() {
section_map.insert(DEFAULT_DELIMITER.to_string(), default_cfg.clone());
}
}
// Add successful instance item
if let Some(instances) = successes_by_section.get(&section) {
// Successful instance write back
if let Some(instances) = successes_by_section.get(&section_name) {
for (id, kvs) in instances {
section_map.insert(id.clone(), kvs.clone());
}
}
// Empty breaks are removed and non-empty breaks are replaced entirely.
// Empty segments are removed and non-empty segments are replaced as a whole.
if section_map.is_empty() {
new_config.0.remove(&section);
new_config.0.remove(&section_name);
} else {
new_config.0.insert(section, section_map);
new_config.0.insert(section_name, section_map);
}
}
let Some(store) = rustfs_ecstore::global::new_object_layer_fn() else {
// 7. Save the new configuration to the system
let Some(store) = rustfs_ecstore::new_object_layer_fn() else {
return Err(AuditError::StorageNotAvailable(
"Failed to save target configuration: server storage not initialized".to_string(),
));
};
match rustfs_ecstore::config::com::save_server_config(store, &new_config).await {
Ok(_) => {
info!("The new configuration was saved to the system successfully.")
}
Ok(_) => info!("New audit configuration saved to system successfully"),
Err(e) => {
error!("Failed to save the new configuration: {}", e);
error!(error = %e, "Failed to save new audit configuration");
return Err(AuditError::SaveConfig(Box::new(e)));
}
}
}
info!(count = successful_targets.len(), "All target processing completed");
Ok(successful_targets)
}
/// Adds a target to the registry
///
/// # Arguments
/// * `id` - The identifier for the target.
/// * `target` - The target instance to be added.
pub fn add_target(&mut self, id: String, target: Box<dyn Target<AuditEntry> + Send + Sync>) {
self.targets.insert(id, target);
}
/// Removes a target from the registry
///
/// # Arguments
/// * `id` - The identifier for the target to be removed.
///
/// # Returns
/// * `Option<Box<dyn Target<AuditEntry> + Send + Sync>>` - The removed target if it existed.
pub fn remove_target(&mut self, id: &str) -> Option<Box<dyn Target<AuditEntry> + Send + Sync>> {
self.targets.remove(id)
}
/// Gets a target from the registry
///
/// # Arguments
/// * `id` - The identifier for the target to be retrieved.
///
/// # Returns
/// * `Option<&(dyn Target<AuditEntry> + Send + Sync)>` - The target if it exists.
pub fn get_target(&self, id: &str) -> Option<&(dyn Target<AuditEntry> + Send + Sync)> {
self.targets.get(id).map(|t| t.as_ref())
}
/// Lists all target IDs
///
/// # Returns
/// * `Vec<String>` - A vector of all target IDs in the registry.
pub fn list_targets(&self) -> Vec<String> {
self.targets.keys().cloned().collect()
}
/// Closes all targets and clears the registry
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure.
pub async fn close_all(&mut self) -> AuditResult<()> {
let mut errors = Vec::new();
@@ -394,3 +331,152 @@ impl AuditRegistry {
Ok(())
}
}
/// Creates an audit target based on type and configuration
async fn create_audit_target(
target_type: &str,
id: &str,
config: &KVS,
) -> Result<Box<dyn Target<AuditEntry> + Send + Sync>, TargetError> {
match target_type {
val if val == ChannelTargetType::Webhook.as_str() => {
let args = parse_webhook_args(id, config)?;
let target = rustfs_targets::target::webhook::WebhookTarget::new(id.to_string(), args)?;
Ok(Box::new(target))
}
val if val == ChannelTargetType::Mqtt.as_str() => {
let args = parse_mqtt_args(id, config)?;
let target = rustfs_targets::target::mqtt::MQTTTarget::new(id.to_string(), args)?;
Ok(Box::new(target))
}
_ => Err(TargetError::Configuration(format!("Unknown target type: {target_type}"))),
}
}
/// Gets valid field names for webhook configuration
fn get_webhook_valid_fields() -> HashSet<String> {
vec![
ENABLE_KEY.to_string(),
WEBHOOK_ENDPOINT.to_string(),
WEBHOOK_AUTH_TOKEN.to_string(),
WEBHOOK_CLIENT_CERT.to_string(),
WEBHOOK_CLIENT_KEY.to_string(),
WEBHOOK_BATCH_SIZE.to_string(),
WEBHOOK_QUEUE_LIMIT.to_string(),
WEBHOOK_QUEUE_DIR.to_string(),
WEBHOOK_MAX_RETRY.to_string(),
WEBHOOK_RETRY_INTERVAL.to_string(),
WEBHOOK_HTTP_TIMEOUT.to_string(),
]
.into_iter()
.collect()
}
/// Gets valid field names for MQTT configuration
fn get_mqtt_valid_fields() -> HashSet<String> {
vec![
ENABLE_KEY.to_string(),
MQTT_BROKER.to_string(),
MQTT_TOPIC.to_string(),
MQTT_USERNAME.to_string(),
MQTT_PASSWORD.to_string(),
MQTT_QOS.to_string(),
MQTT_KEEP_ALIVE_INTERVAL.to_string(),
MQTT_RECONNECT_INTERVAL.to_string(),
MQTT_QUEUE_DIR.to_string(),
MQTT_QUEUE_LIMIT.to_string(),
]
.into_iter()
.collect()
}
/// Parses webhook arguments from KVS configuration
fn parse_webhook_args(_id: &str, config: &KVS) -> Result<WebhookArgs, TargetError> {
let endpoint = config
.lookup(WEBHOOK_ENDPOINT)
.filter(|s| !s.is_empty())
.ok_or_else(|| TargetError::Configuration("webhook endpoint is required".to_string()))?;
let endpoint_url =
Url::parse(&endpoint).map_err(|e| TargetError::Configuration(format!("invalid webhook endpoint URL: {e}")))?;
let args = WebhookArgs {
enable: true, // Already validated as enabled
endpoint: endpoint_url,
auth_token: config.lookup(WEBHOOK_AUTH_TOKEN).unwrap_or_default(),
queue_dir: config.lookup(WEBHOOK_QUEUE_DIR).unwrap_or_default(),
queue_limit: config
.lookup(WEBHOOK_QUEUE_LIMIT)
.and_then(|s| s.parse().ok())
.unwrap_or(100000),
client_cert: config.lookup(WEBHOOK_CLIENT_CERT).unwrap_or_default(),
client_key: config.lookup(WEBHOOK_CLIENT_KEY).unwrap_or_default(),
target_type: TargetType::AuditLog,
};
args.validate()?;
Ok(args)
}
/// Parses MQTT arguments from KVS configuration
fn parse_mqtt_args(_id: &str, config: &KVS) -> Result<MQTTArgs, TargetError> {
let broker = config
.lookup(MQTT_BROKER)
.filter(|s| !s.is_empty())
.ok_or_else(|| TargetError::Configuration("MQTT broker is required".to_string()))?;
let broker_url = Url::parse(&broker).map_err(|e| TargetError::Configuration(format!("invalid MQTT broker URL: {e}")))?;
let topic = config
.lookup(MQTT_TOPIC)
.filter(|s| !s.is_empty())
.ok_or_else(|| TargetError::Configuration("MQTT topic is required".to_string()))?;
let qos = config
.lookup(MQTT_QOS)
.and_then(|s| s.parse::<u8>().ok())
.and_then(|q| match q {
0 => Some(rumqttc::QoS::AtMostOnce),
1 => Some(rumqttc::QoS::AtLeastOnce),
2 => Some(rumqttc::QoS::ExactlyOnce),
_ => None,
})
.unwrap_or(rumqttc::QoS::AtLeastOnce);
let args = MQTTArgs {
enable: true, // Already validated as enabled
broker: broker_url,
topic,
qos,
username: config.lookup(MQTT_USERNAME).unwrap_or_default(),
password: config.lookup(MQTT_PASSWORD).unwrap_or_default(),
max_reconnect_interval: parse_duration(&config.lookup(MQTT_RECONNECT_INTERVAL).unwrap_or_else(|| "5s".to_string()))
.unwrap_or(Duration::from_secs(5)),
keep_alive: parse_duration(&config.lookup(MQTT_KEEP_ALIVE_INTERVAL).unwrap_or_else(|| "60s".to_string()))
.unwrap_or(Duration::from_secs(60)),
queue_dir: config.lookup(MQTT_QUEUE_DIR).unwrap_or_default(),
queue_limit: config.lookup(MQTT_QUEUE_LIMIT).and_then(|s| s.parse().ok()).unwrap_or(100000),
target_type: TargetType::AuditLog,
};
args.validate()?;
Ok(args)
}
/// Parses enable value from string
fn parse_enable_value(value: &str) -> bool {
matches!(value.to_lowercase().as_str(), "1" | "on" | "true" | "yes")
}
/// Parses duration from string (e.g., "3s", "5m")
fn parse_duration(s: &str) -> Option<Duration> {
// Check the "ms" suffix before 's'; otherwise values like "250ms" would be
// caught by the seconds branch and fail to parse.
if let Some(stripped) = s.strip_suffix("ms") {
    stripped.parse::<u64>().ok().map(Duration::from_millis)
} else if let Some(stripped) = s.strip_suffix('s') {
    stripped.parse::<u64>().ok().map(Duration::from_secs)
} else if let Some(stripped) = s.strip_suffix('m') {
    stripped.parse::<u64>().ok().map(|m| Duration::from_secs(m * 60))
} else {
    s.parse::<u64>().ok().map(Duration::from_secs)
}
}
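A quick sanity check of the two small parsers above, written as a sketch (it assumes `parse_enable_value` and `parse_duration` exactly as defined in this file, with the `ms` suffix handled before `s`):

```rust
use std::time::Duration;

fn main() {
    assert!(parse_enable_value("on"));
    assert!(parse_enable_value("TRUE"));
    assert!(!parse_enable_value("off"));

    assert_eq!(parse_duration("3s"), Some(Duration::from_secs(3)));
    assert_eq!(parse_duration("5m"), Some(Duration::from_secs(300)));
    assert_eq!(parse_duration("250ms"), Some(Duration::from_millis(250)));
    assert_eq!(parse_duration("42"), Some(Duration::from_secs(42)));
    assert_eq!(parse_duration("oops"), None);
}
```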

View File

@@ -58,12 +58,6 @@ impl AuditSystem {
}
/// Starts the audit system with the given configuration
///
/// # Arguments
/// * `config` - The configuration to use for starting the audit system
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn start(&self, config: Config) -> AuditResult<()> {
let state = self.state.write().await;
@@ -93,7 +87,7 @@ impl AuditSystem {
// Create targets from configuration
let mut registry = self.registry.lock().await;
match registry.create_audit_targets_from_config(&config).await {
match registry.create_targets_from_config(&config).await {
Ok(targets) => {
if targets.is_empty() {
info!("No enabled audit targets found, keeping audit system stopped");
@@ -149,9 +143,6 @@ impl AuditSystem {
}
/// Pauses the audit system
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn pause(&self) -> AuditResult<()> {
let mut state = self.state.write().await;
@@ -170,9 +161,6 @@ impl AuditSystem {
}
/// Resumes the audit system
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn resume(&self) -> AuditResult<()> {
let mut state = self.state.write().await;
@@ -191,9 +179,6 @@ impl AuditSystem {
}
/// Stops the audit system and closes all targets
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn close(&self) -> AuditResult<()> {
let mut state = self.state.write().await;
@@ -238,20 +223,11 @@ impl AuditSystem {
}
/// Checks if the audit system is running
///
/// # Returns
/// * `bool` - True if running, false otherwise
pub async fn is_running(&self) -> bool {
matches!(*self.state.read().await, AuditSystemState::Running)
}
/// Dispatches an audit log entry to all active targets
///
/// # Arguments
/// * `entry` - The audit log entry to dispatch
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn dispatch(&self, entry: Arc<AuditEntry>) -> AuditResult<()> {
let start_time = std::time::Instant::now();
@@ -343,13 +319,6 @@ impl AuditSystem {
Ok(())
}
/// Dispatches a batch of audit log entries to all active targets
///
/// # Arguments
/// * `entries` - A vector of audit log entries to dispatch
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn dispatch_batch(&self, entries: Vec<Arc<AuditEntry>>) -> AuditResult<()> {
let start_time = std::time::Instant::now();
@@ -417,13 +386,7 @@ impl AuditSystem {
Ok(())
}
/// Starts the audit stream processing for a target with batching and retry logic
/// # Arguments
/// * `store` - The store from which to read audit entries
/// * `target` - The target to which audit entries will be sent
///
/// This function spawns a background task that continuously reads audit entries from the provided store
/// and attempts to send them to the specified target. It implements retry logic with exponential backoff
// Background audit-stream task built on send_from_store, with retry and exponential backoff handling
fn start_audit_stream_with_batching(
&self,
store: Box<dyn Store<EntityTarget<AuditEntry>, Error = StoreError, Key = Key> + Send>,
@@ -499,12 +462,6 @@ impl AuditSystem {
}
/// Enables a specific target
///
/// # Arguments
/// * `target_id` - The ID of the target to enable
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn enable_target(&self, target_id: &str) -> AuditResult<()> {
// This would require storing enabled/disabled state per target
// For now, just check if target exists
@@ -518,12 +475,6 @@ impl AuditSystem {
}
/// Disables a specific target
///
/// # Arguments
/// * `target_id` - The ID of the target to disable
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn disable_target(&self, target_id: &str) -> AuditResult<()> {
// This would require storing enabled/disabled state per target
// For now, just check if target exists
@@ -537,12 +488,6 @@ impl AuditSystem {
}
/// Removes a target from the system
///
/// # Arguments
/// * `target_id` - The ID of the target to remove
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn remove_target(&self, target_id: &str) -> AuditResult<()> {
let mut registry = self.registry.lock().await;
if let Some(target) = registry.remove_target(target_id) {
@@ -557,13 +502,6 @@ impl AuditSystem {
}
/// Updates or inserts a target
///
/// # Arguments
/// * `target_id` - The ID of the target to upsert
/// * `target` - The target instance to insert or update
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn upsert_target(&self, target_id: String, target: Box<dyn Target<AuditEntry> + Send + Sync>) -> AuditResult<()> {
let mut registry = self.registry.lock().await;
@@ -585,33 +523,18 @@ impl AuditSystem {
}
/// Lists all targets
///
/// # Returns
/// * `Vec<String>` - List of target IDs
pub async fn list_targets(&self) -> Vec<String> {
let registry = self.registry.lock().await;
registry.list_targets()
}
/// Gets information about a specific target
///
/// # Arguments
/// * `target_id` - The ID of the target to retrieve
///
/// # Returns
/// * `Option<String>` - Target ID if found
pub async fn get_target(&self, target_id: &str) -> Option<String> {
let registry = self.registry.lock().await;
registry.get_target(target_id).map(|target| target.id().to_string())
}
/// Reloads configuration and updates targets
///
/// # Arguments
/// * `new_config` - The new configuration to load
///
/// # Returns
/// * `AuditResult<()>` - Result indicating success or failure
pub async fn reload_config(&self, new_config: Config) -> AuditResult<()> {
info!("Reloading audit system configuration");
@@ -631,7 +554,7 @@ impl AuditSystem {
}
// Create new targets from updated configuration
match registry.create_audit_targets_from_config(&new_config).await {
match registry.create_targets_from_config(&new_config).await {
Ok(targets) => {
info!(target_count = targets.len(), "Reloaded audit targets successfully");
@@ -671,22 +594,16 @@ impl AuditSystem {
}
/// Gets current audit system metrics
///
/// # Returns
/// * `AuditMetricsReport` - Current metrics report
pub async fn get_metrics(&self) -> observability::AuditMetricsReport {
observability::get_metrics_report().await
}
/// Validates system performance against requirements
///
/// # Returns
/// * `PerformanceValidation` - Performance validation results
pub async fn validate_performance(&self) -> observability::PerformanceValidation {
observability::validate_performance().await
}
/// Resets all metrics to initial state
/// Resets all metrics
pub async fn reset_metrics(&self) {
observability::reset_metrics().await;
}

View File

@@ -43,11 +43,11 @@ async fn test_config_parsing_webhook() {
audit_webhook_section.insert("_".to_string(), default_kvs);
config.0.insert("audit_webhook".to_string(), audit_webhook_section);
let registry = AuditRegistry::new();
let mut registry = AuditRegistry::new();
// This should not fail even if server storage is not initialized
// as it's an integration test
let result = registry.create_audit_targets_from_config(&config).await;
let result = registry.create_targets_from_config(&config).await;
// We expect this to fail due to server storage not being initialized
// but the parsing should work correctly

View File

@@ -44,7 +44,7 @@ async fn test_audit_system_startup_performance() {
#[tokio::test]
async fn test_concurrent_target_creation() {
// Test that multiple targets can be created concurrently
let registry = AuditRegistry::new();
let mut registry = AuditRegistry::new();
// Create config with multiple webhook instances
let mut config = rustfs_ecstore::config::Config(std::collections::HashMap::new());
@@ -63,7 +63,7 @@ async fn test_concurrent_target_creation() {
let start = Instant::now();
// This will fail due to server storage not being initialized, but we can measure timing
let result = registry.create_audit_targets_from_config(&config).await;
let result = registry.create_targets_from_config(&config).await;
let elapsed = start.elapsed();
println!("Concurrent target creation took: {elapsed:?}");

View File

@@ -135,7 +135,7 @@ async fn test_global_audit_functions() {
#[tokio::test]
async fn test_config_parsing_with_multiple_instances() {
let registry = AuditRegistry::new();
let mut registry = AuditRegistry::new();
// Create config with multiple webhook instances
let mut config = Config(HashMap::new());
@@ -164,7 +164,7 @@ async fn test_config_parsing_with_multiple_instances() {
config.0.insert("audit_webhook".to_string(), webhook_section);
// Try to create targets from config
let result = registry.create_audit_targets_from_config(&config).await;
let result = registry.create_targets_from_config(&config).await;
// Should fail due to server storage not initialized, but parsing should work
match result {

View File

@@ -19,26 +19,21 @@ use std::sync::LazyLock;
use tokio::sync::RwLock;
use tonic::transport::Channel;
pub static GLOBAL_LOCAL_NODE_NAME: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_RUSTFS_HOST: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_RUSTFS_PORT: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("9000".to_string()));
pub static GLOBAL_RUSTFS_ADDR: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_CONN_MAP: LazyLock<RwLock<HashMap<String, Channel>>> = LazyLock::new(|| RwLock::new(HashMap::new()));
pub static GLOBAL_ROOT_CERT: LazyLock<RwLock<Option<Vec<u8>>>> = LazyLock::new(|| RwLock::new(None));
pub static GLOBAL_Local_Node_Name: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Rustfs_Host: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Rustfs_Port: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("9000".to_string()));
pub static GLOBAL_Rustfs_Addr: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Conn_Map: LazyLock<RwLock<HashMap<String, Channel>>> = LazyLock::new(|| RwLock::new(HashMap::new()));
pub async fn set_global_addr(addr: &str) {
*GLOBAL_RUSTFS_ADDR.write().await = addr.to_string();
}
pub async fn set_global_root_cert(cert: Vec<u8>) {
*GLOBAL_ROOT_CERT.write().await = Some(cert);
*GLOBAL_Rustfs_Addr.write().await = addr.to_string();
}
/// Evict a stale/dead connection from the global connection cache.
/// This is critical for cluster recovery when a node dies unexpectedly (e.g., power-off).
/// By removing the cached connection, subsequent requests will establish a fresh connection.
pub async fn evict_connection(addr: &str) {
let removed = GLOBAL_CONN_MAP.write().await.remove(addr);
let removed = GLOBAL_Conn_Map.write().await.remove(addr);
if removed.is_some() {
tracing::warn!("Evicted stale connection from cache: {}", addr);
}
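To make the recovery path concrete, a minimal caller sketch follows; `send_rpc` and `RpcError` are placeholders for illustration and are not items from this crate.

```rust
// Hypothetical caller: on any RPC failure, drop the cached channel so the
// next attempt dials the peer again instead of reusing a dead connection.
async fn call_with_eviction(addr: &str) -> Result<(), RpcError> {
    if let Err(err) = send_rpc(addr).await {
        evict_connection(addr).await;
        return Err(err);
    }
    Ok(())
}
```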
@@ -46,12 +41,12 @@ pub async fn evict_connection(addr: &str) {
/// Check if a connection exists in the cache for the given address.
pub async fn has_cached_connection(addr: &str) -> bool {
GLOBAL_CONN_MAP.read().await.contains_key(addr)
GLOBAL_Conn_Map.read().await.contains_key(addr)
}
/// Clear all cached connections. Useful for full cluster reset/recovery.
pub async fn clear_all_connections() {
let mut map = GLOBAL_CONN_MAP.write().await;
let mut map = GLOBAL_Conn_Map.write().await;
let count = map.len();
map.clear();
if count > 0 {

View File

@@ -212,8 +212,6 @@ pub struct HealChannelRequest {
pub bucket: String,
/// Object prefix (optional)
pub object_prefix: Option<String>,
/// Object version ID (optional)
pub object_version_id: Option<String>,
/// Force start heal
pub force_start: bool,
/// Priority
@@ -348,7 +346,6 @@ pub fn create_heal_request(
id: Uuid::new_v4().to_string(),
bucket,
object_prefix,
object_version_id: None,
force_start,
priority: priority.unwrap_or_default(),
pool_index: None,
@@ -377,7 +374,6 @@ pub fn create_heal_request_with_options(
id: Uuid::new_v4().to_string(),
bucket,
object_prefix,
object_version_id: None,
force_start,
priority: priority.unwrap_or_default(),
pool_index,
@@ -507,7 +503,6 @@ pub async fn send_heal_disk(set_disk_id: String, priority: Option<HealChannelPri
bucket: "".to_string(),
object_prefix: None,
disk: Some(set_disk_id),
object_version_id: None,
force_start: false,
priority: priority.unwrap_or_default(),
pool_index: None,

View File

@@ -19,10 +19,6 @@ pub mod globals;
pub mod heal_channel;
pub mod last_minute;
pub mod metrics;
mod readiness;
pub use globals::*;
pub use readiness::{GlobalReadiness, SystemStage};
// 44 is the ASCII code for ','
pub static DEFAULT_DELIMITER: u8 = 44;

View File

@@ -1,136 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::atomic::{AtomicU8, Ordering};
/// Represents the various stages of system startup
#[repr(u8)]
pub enum SystemStage {
Booting = 0,
StorageReady = 1, // Disks online, Quorum met
IamReady = 2, // Users and Policies loaded into cache
FullReady = 3, // System ready to serve all traffic
}
/// Global readiness tracker for the service
/// This struct uses atomic operations to track the readiness status of various components
/// of the service in a thread-safe manner.
pub struct GlobalReadiness {
status: AtomicU8,
}
impl Default for GlobalReadiness {
fn default() -> Self {
Self::new()
}
}
impl GlobalReadiness {
/// Create a new GlobalReadiness instance with the initial stage set to Booting
/// # Returns
/// A new instance of GlobalReadiness
pub fn new() -> Self {
Self {
status: AtomicU8::new(SystemStage::Booting as u8),
}
}
/// Update the system to a new stage
///
/// # Arguments
/// * `step` - The SystemStage step to mark as ready
pub fn mark_stage(&self, step: SystemStage) {
self.status.fetch_max(step as u8, Ordering::SeqCst);
}
/// Check if the service is fully ready
/// # Returns
/// `true` if the service is fully ready, `false` otherwise
pub fn is_ready(&self) -> bool {
self.status.load(Ordering::SeqCst) == SystemStage::FullReady as u8
}
}
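The unit tests below exercise stage progression; for context, this is the call pattern the type is meant for. A sketch under assumptions: `init_storage`, `load_iam`, and the probe wiring are placeholders, not functions from this crate.

```rust
// Sketch: mark stages as boot milestones complete, then let a probe poll is_ready().
async fn boot(readiness: std::sync::Arc<GlobalReadiness>) {
    init_storage().await; // placeholder for real storage initialization
    readiness.mark_stage(SystemStage::StorageReady);

    load_iam().await; // placeholder for IAM cache loading
    readiness.mark_stage(SystemStage::IamReady);

    readiness.mark_stage(SystemStage::FullReady);
}

// A readiness probe would simply map is_ready() onto an HTTP status.
fn readiness_status(readiness: &GlobalReadiness) -> u16 {
    if readiness.is_ready() { 200 } else { 503 }
}
```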
#[cfg(test)]
mod tests {
use super::*;
use std::sync::Arc;
use std::thread;
#[test]
fn test_initial_state() {
let readiness = GlobalReadiness::new();
assert!(!readiness.is_ready());
assert_eq!(readiness.status.load(Ordering::SeqCst), SystemStage::Booting as u8);
}
#[test]
fn test_mark_stage_progression() {
let readiness = GlobalReadiness::new();
readiness.mark_stage(SystemStage::StorageReady);
assert!(!readiness.is_ready());
assert_eq!(readiness.status.load(Ordering::SeqCst), SystemStage::StorageReady as u8);
readiness.mark_stage(SystemStage::IamReady);
assert!(!readiness.is_ready());
assert_eq!(readiness.status.load(Ordering::SeqCst), SystemStage::IamReady as u8);
readiness.mark_stage(SystemStage::FullReady);
assert!(readiness.is_ready());
}
#[test]
fn test_no_regression() {
let readiness = GlobalReadiness::new();
readiness.mark_stage(SystemStage::FullReady);
readiness.mark_stage(SystemStage::IamReady); // Should not regress
assert!(readiness.is_ready());
}
#[test]
fn test_concurrent_marking() {
let readiness = Arc::new(GlobalReadiness::new());
let mut handles = vec![];
for _ in 0..10 {
let r = Arc::clone(&readiness);
handles.push(thread::spawn(move || {
r.mark_stage(SystemStage::StorageReady);
r.mark_stage(SystemStage::IamReady);
r.mark_stage(SystemStage::FullReady);
}));
}
for h in handles {
h.join().unwrap();
}
assert!(readiness.is_ready());
}
#[test]
fn test_is_ready_only_at_full_ready() {
let readiness = GlobalReadiness::new();
assert!(!readiness.is_ready());
readiness.mark_stage(SystemStage::StorageReady);
assert!(!readiness.is_ready());
readiness.mark_stage(SystemStage::IamReady);
assert!(!readiness.is_ready());
readiness.mark_stage(SystemStage::FullReady);
assert!(readiness.is_ready());
}
}

View File

@@ -29,7 +29,7 @@ pub const AUDIT_PREFIX: &str = "audit";
pub const AUDIT_ROUTE_PREFIX: &str = const_str::concat!(AUDIT_PREFIX, DEFAULT_DELIMITER);
pub const AUDIT_WEBHOOK_SUB_SYS: &str = "audit_webhook";
pub const AUDIT_MQTT_SUB_SYS: &str = "audit_mqtt";
pub const AUDIT_MQTT_SUB_SYS: &str = "mqtt_webhook";
pub const AUDIT_STORE_EXTENSION: &str = ".audit";
#[allow(dead_code)]

View File

@@ -89,30 +89,6 @@ pub const RUSTFS_TLS_KEY: &str = "rustfs_key.pem";
/// This is the default cert for TLS.
pub const RUSTFS_TLS_CERT: &str = "rustfs_cert.pem";
/// Default public certificate filename for rustfs
/// This is the default public certificate filename for rustfs.
/// It is used to store the public certificate of the application.
/// Default value: public.crt
pub const RUSTFS_PUBLIC_CERT: &str = "public.crt";
/// Default CA certificate filename for rustfs
/// This is the default CA certificate filename for rustfs.
/// It is used to store the CA certificate of the application.
/// Default value: ca.crt
pub const RUSTFS_CA_CERT: &str = "ca.crt";
/// Default HTTP prefix for rustfs
/// This is the default HTTP prefix for rustfs.
/// It is used to identify HTTP URLs.
/// Default value: http://
pub const RUSTFS_HTTP_PREFIX: &str = "http://";
/// Default HTTPS prefix for rustfs
/// This is the default HTTPS prefix for rustfs.
/// It is used to identify HTTPS URLs.
/// Default value: https://
pub const RUSTFS_HTTPS_PREFIX: &str = "https://";
/// Default port for rustfs
/// This is the default port for rustfs.
/// This is used to bind the server to a specific port.

View File

@@ -1,56 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Request body size limits for admin API endpoints
//!
//! These limits prevent DoS attacks through unbounded memory allocation
//! while allowing legitimate use cases.
/// Maximum size for standard admin API request bodies (1 MB)
/// Used for: user creation/update, policies, tier config, KMS config, events, groups, service accounts
/// Rationale: Admin API payloads are typically JSON/XML configs under 100KB.
/// AWS IAM policy limit is 6KB-10KB. 1MB provides generous headroom.
pub const MAX_ADMIN_REQUEST_BODY_SIZE: usize = 1024 * 1024; // 1 MB
/// Maximum size for IAM import/export operations (10 MB)
/// Used for: IAM entity imports/exports containing multiple users, policies, groups
/// Rationale: ZIP archives with hundreds of IAM entities. 10MB allows ~10,000 small configs.
pub const MAX_IAM_IMPORT_SIZE: usize = 10 * 1024 * 1024; // 10 MB
/// Maximum size for bucket metadata import operations (100 MB)
/// Used for: Bucket metadata import containing configurations for many buckets
/// Rationale: Large deployments may have thousands of buckets with various configs.
/// 100MB allows importing metadata for ~10,000 buckets with reasonable configs.
pub const MAX_BUCKET_METADATA_IMPORT_SIZE: usize = 100 * 1024 * 1024; // 100 MB
/// Maximum size for healing operation requests (1 MB)
/// Used for: Healing parameters and configuration
/// Rationale: Healing requests contain bucket/object paths and options. Should be small.
pub const MAX_HEAL_REQUEST_SIZE: usize = 1024 * 1024; // 1 MB
/// Maximum size for S3 client response bodies (10 MB)
/// Used for: Reading responses from remote S3-compatible services (ACL, attributes, lists)
/// Rationale: Responses from external S3-compatible services should be bounded;
/// anything larger than 10MB indicates misconfiguration or a potential attack.
/// Typical sizes:
/// - ACL XML responses: typically < 10KB
/// - Object attributes: typically < 100KB
/// - List responses: typically < 1MB (1000 objects with metadata)
/// - Location/error responses: typically < 10KB
///
/// 10MB provides generous headroom for legitimate responses while preventing
/// memory exhaustion from malicious or misconfigured remote services.
pub const MAX_S3_CLIENT_RESPONSE_SIZE: usize = 10 * 1024 * 1024; // 10 MB
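A minimal guard sketch, assuming the handler has already collected the body into memory; the real handlers are not shown in this diff and the function name is illustrative.

```rust
// Reject an admin request body that exceeds MAX_ADMIN_REQUEST_BODY_SIZE before parsing it.
fn check_admin_body(body: &[u8]) -> Result<(), String> {
    if body.len() > MAX_ADMIN_REQUEST_BODY_SIZE {
        return Err(format!(
            "request body of {} bytes exceeds the {} byte admin limit",
            body.len(),
            MAX_ADMIN_REQUEST_BODY_SIZE
        ));
    }
    Ok(())
}
```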

View File

@@ -1,61 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! HTTP Response Compression Configuration
//!
//! This module provides configuration options for HTTP response compression.
//! By default, compression is disabled (aligned with MinIO behavior).
//! When enabled via `RUSTFS_COMPRESS_ENABLE=on`, compression can be configured
//! to apply only to specific file extensions, MIME types, and minimum file sizes.
/// Environment variable to enable/disable HTTP response compression
/// Default: off (disabled)
/// Values: on, off, true, false, yes, no, 1, 0
/// Example: RUSTFS_COMPRESS_ENABLE=on
pub const ENV_COMPRESS_ENABLE: &str = "RUSTFS_COMPRESS_ENABLE";
/// Default compression enable state
/// Aligned with MinIO behavior - compression is disabled by default
pub const DEFAULT_COMPRESS_ENABLE: bool = false;
/// Environment variable for file extensions that should be compressed
/// Comma-separated list of file extensions (with or without leading dot)
/// Default: "" (empty, meaning use MIME type matching only)
/// Example: RUSTFS_COMPRESS_EXTENSIONS=.txt,.log,.csv,.json,.xml,.html,.css,.js
pub const ENV_COMPRESS_EXTENSIONS: &str = "RUSTFS_COMPRESS_EXTENSIONS";
/// Default file extensions for compression
/// Empty by default - relies on MIME type matching
pub const DEFAULT_COMPRESS_EXTENSIONS: &str = "";
/// Environment variable for MIME types that should be compressed
/// Comma-separated list of MIME types, supports wildcard (*) for subtypes
/// Default: "text/*,application/json,application/xml,application/javascript"
/// Example: RUSTFS_COMPRESS_MIME_TYPES=text/*,application/json,application/xml
pub const ENV_COMPRESS_MIME_TYPES: &str = "RUSTFS_COMPRESS_MIME_TYPES";
/// Default MIME types for compression
/// Includes common text-based content types that benefit from compression
pub const DEFAULT_COMPRESS_MIME_TYPES: &str = "text/*,application/json,application/xml,application/javascript";
/// Environment variable for minimum file size to apply compression
/// Files smaller than this size will not be compressed
/// Default: 1000 (bytes)
/// Example: RUSTFS_COMPRESS_MIN_SIZE=1000
pub const ENV_COMPRESS_MIN_SIZE: &str = "RUSTFS_COMPRESS_MIN_SIZE";
/// Default minimum file size for compression (in bytes)
/// Files smaller than 1000 bytes typically don't benefit from compression
/// and the compression overhead may outweigh the benefits
pub const DEFAULT_COMPRESS_MIN_SIZE: u64 = 1000;
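A sketch of how these knobs could be folded into a single value at startup; the `CompressConfig` struct and parsing rules here are illustrative assumptions, not the crate's actual wiring.

```rust
// Illustrative parsing only; unknown or unset values fall back to the defaults above.
struct CompressConfig {
    enabled: bool,
    extensions: Vec<String>,
    mime_types: Vec<String>,
    min_size: u64,
}

fn compress_config_from_env() -> CompressConfig {
    let enabled = std::env::var(ENV_COMPRESS_ENABLE)
        .map(|v| matches!(v.to_ascii_lowercase().as_str(), "on" | "true" | "yes" | "1"))
        .unwrap_or(DEFAULT_COMPRESS_ENABLE);
    let extensions = std::env::var(ENV_COMPRESS_EXTENSIONS)
        .unwrap_or_else(|_| DEFAULT_COMPRESS_EXTENSIONS.to_string())
        .split(',')
        .filter(|s| !s.is_empty())
        .map(|s| s.trim().to_string())
        .collect();
    let mime_types = std::env::var(ENV_COMPRESS_MIME_TYPES)
        .unwrap_or_else(|_| DEFAULT_COMPRESS_MIME_TYPES.to_string())
        .split(',')
        .map(|s| s.trim().to_string())
        .collect();
    let min_size = std::env::var(ENV_COMPRESS_MIN_SIZE)
        .ok()
        .and_then(|v| v.parse().ok())
        .unwrap_or(DEFAULT_COMPRESS_MIN_SIZE);
    CompressConfig { enabled, extensions, mime_types, min_size }
}
```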

View File

@@ -16,8 +16,7 @@ pub const DEFAULT_DELIMITER: &str = "_";
pub const ENV_PREFIX: &str = "RUSTFS_";
pub const ENV_WORD_DELIMITER: &str = "_";
pub const EVENT_DEFAULT_DIR: &str = "/opt/rustfs/events"; // Default directory for event store
pub const AUDIT_DEFAULT_DIR: &str = "/opt/rustfs/audit"; // Default directory for audit store
pub const DEFAULT_DIR: &str = "/opt/rustfs/events"; // Default directory for event store
pub const DEFAULT_LIMIT: u64 = 100000; // Default store limit
/// Standard config keys and values.

View File

@@ -13,14 +13,11 @@
// limitations under the License.
pub(crate) mod app;
pub(crate) mod body_limits;
pub(crate) mod compress;
pub(crate) mod console;
pub(crate) mod env;
pub(crate) mod heal;
pub(crate) mod object;
pub(crate) mod profiler;
pub(crate) mod runtime;
pub(crate) mod scanner;
pub(crate) mod targets;
pub(crate) mod tls;

View File

@@ -39,10 +39,3 @@ pub const DEFAULT_MAX_IO_EVENTS_PER_TICK: usize = 1024;
/// Event polling default (Tokio default 61)
pub const DEFAULT_EVENT_INTERVAL: u32 = 61;
pub const DEFAULT_RNG_SEED: Option<u64> = None; // None means random
/// Threshold for small object seek support in megabytes.
///
/// When an object is smaller than this size, rustfs will provide seek support.
///
/// Default is set to 10MB.
pub const DEFAULT_OBJECT_SEEK_SUPPORT_THRESHOLD: usize = 10 * 1024 * 1024;

View File

@@ -1,28 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
/// Environment variable name that specifies the data scanner start delay in seconds.
/// - Purpose: Define the delay between data scanner operations.
/// - Unit: seconds (u64).
/// - Valid values: any positive integer.
/// - Semantics: This delay controls how frequently the data scanner checks for and processes data; shorter delays lead to more responsive scanning but may increase system load.
/// - Example: `export RUSTFS_DATA_SCANNER_START_DELAY_SECS=10`
/// - Note: Choose an appropriate delay that balances scanning responsiveness with overall system performance.
pub const ENV_DATA_SCANNER_START_DELAY_SECS: &str = "RUSTFS_DATA_SCANNER_START_DELAY_SECS";
/// Default data scanner start delay in seconds if not specified in the environment variable.
/// - Value: 60 seconds.
/// - Rationale: This default interval provides a reasonable balance between scanning responsiveness and system load for most deployments.
/// - Adjustments: Users may modify this value via the `RUSTFS_DATA_SCANNER_START_DELAY_SECS` environment variable based on their specific scanning requirements and system performance.
pub const DEFAULT_DATA_SCANNER_START_DELAY_SECS: u64 = 60;
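As an illustration of how this knob might be consumed (everything other than the two constants is a placeholder):

```rust
// Sleep for the configured start delay, then hand off to the scanner entry point.
async fn delayed_scanner_start() {
    let delay_secs = std::env::var(ENV_DATA_SCANNER_START_DELAY_SECS)
        .ok()
        .and_then(|v| v.parse::<u64>().ok())
        .unwrap_or(DEFAULT_DATA_SCANNER_START_DELAY_SECS);
    tokio::time::sleep(std::time::Duration::from_secs(delay_secs)).await;
    run_data_scanner().await; // placeholder for the actual scan entry point
}
```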

View File

@@ -12,26 +12,4 @@
// See the License for the specific language governing permissions and
// limitations under the License.
/// TLS related environment variable names and default values
/// Environment variable to enable TLS key logging
/// When set to "1", RustFS will log TLS keys to the specified file for debugging purposes.
/// By default, this is disabled.
/// To enable, set the environment variable RUSTFS_TLS_KEYLOG=1
pub const ENV_TLS_KEYLOG: &str = "RUSTFS_TLS_KEYLOG";
/// Default value for TLS key logging
/// By default, RustFS does not log TLS keys.
/// To change this behavior, set the environment variable RUSTFS_TLS_KEYLOG=1
pub const DEFAULT_TLS_KEYLOG: bool = false;
/// Environment variable to trust system CA certificates
/// When set to "1", RustFS will trust system CA certificates in addition to any
/// custom CA certificates provided in the configuration.
/// By default, this is disabled.
/// To enable, set the environment variable RUSTFS_TRUST_SYSTEM_CA=1
pub const ENV_TRUST_SYSTEM_CA: &str = "RUSTFS_TRUST_SYSTEM_CA";
/// Default value for trusting system CA certificates
/// By default, RustFS does not trust system CA certificates.
/// To change this behavior, set the environment variable RUSTFS_TRUST_SYSTEM_CA=1
pub const DEFAULT_TRUST_SYSTEM_CA: bool = false;

View File

@@ -17,10 +17,6 @@ pub mod constants;
#[cfg(feature = "constants")]
pub use constants::app::*;
#[cfg(feature = "constants")]
pub use constants::body_limits::*;
#[cfg(feature = "constants")]
pub use constants::compress::*;
#[cfg(feature = "constants")]
pub use constants::console::*;
#[cfg(feature = "constants")]
pub use constants::env::*;
@@ -33,8 +29,6 @@ pub use constants::profiler::*;
#[cfg(feature = "constants")]
pub use constants::runtime::*;
#[cfg(feature = "constants")]
pub use constants::scanner::*;
#[cfg(feature = "constants")]
pub use constants::targets::*;
#[cfg(feature = "constants")]
pub use constants::tls::*;

View File

@@ -24,45 +24,13 @@ pub use webhook::*;
use crate::DEFAULT_DELIMITER;
/// Default target identifier for notifications,
/// Used in notification system when no specific target is provided,
/// Represents the default target stream or endpoint for notifications when no specific target is provided.
// --- Configuration Constants ---
pub const DEFAULT_TARGET: &str = "1";
/// Notification prefix for routing and identification,
/// Used in notification system,
/// This prefix is utilized in constructing routes and identifiers related to notifications within the system.
pub const NOTIFY_PREFIX: &str = "notify";
/// Notification route prefix combining the notification prefix and default delimiter
/// Combines the notification prefix with the default delimiter
/// Used in notification system for defining routes related to notifications.
/// Example: "notify:/"
pub const NOTIFY_ROUTE_PREFIX: &str = const_str::concat!(NOTIFY_PREFIX, DEFAULT_DELIMITER);
/// Name of the environment variable that configures target stream concurrency.
/// Controls how many target streams are processed in parallel by the notification system.
/// Defaults to [`DEFAULT_NOTIFY_TARGET_STREAM_CONCURRENCY`] if not set.
/// Example: `RUSTFS_NOTIFY_TARGET_STREAM_CONCURRENCY=20`.
pub const ENV_NOTIFY_TARGET_STREAM_CONCURRENCY: &str = "RUSTFS_NOTIFY_TARGET_STREAM_CONCURRENCY";
/// Default concurrency for target stream processing in the notification system
/// This value is used if the environment variable `RUSTFS_NOTIFY_TARGET_STREAM_CONCURRENCY` is not set.
/// It defines how many target streams can be processed in parallel by the notification system at any given time.
/// Adjust this value based on your system's capabilities and expected load.
pub const DEFAULT_NOTIFY_TARGET_STREAM_CONCURRENCY: usize = 20;
/// Name of the environment variable that configures send concurrency.
/// Controls how many send operations are processed in parallel by the notification system.
/// Defaults to [`DEFAULT_NOTIFY_SEND_CONCURRENCY`] if not set.
/// Example: `RUSTFS_NOTIFY_SEND_CONCURRENCY=64`.
pub const ENV_NOTIFY_SEND_CONCURRENCY: &str = "RUSTFS_NOTIFY_SEND_CONCURRENCY";
/// Default concurrency for send operations in the notification system
/// This value is used if the environment variable `RUSTFS_NOTIFY_SEND_CONCURRENCY` is not set.
/// It defines how many send operations can be processed in parallel by the notification system at any given time.
/// Adjust this value based on your system's capabilities and expected load.
pub const DEFAULT_NOTIFY_SEND_CONCURRENCY: usize = 64;
#[allow(dead_code)]
pub const NOTIFY_SUB_SYSTEMS: &[&str] = &[NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS];
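To make the concurrency constants concrete, here is one common way such a send limit is enforced with a Tokio semaphore; `Event` and `send_event` are placeholders, and this is a sketch rather than the notification system's actual implementation.

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

// Cap the number of in-flight sends at DEFAULT_NOTIFY_SEND_CONCURRENCY.
async fn send_all(events: Vec<Event>) {
    let permits = Arc::new(Semaphore::new(DEFAULT_NOTIFY_SEND_CONCURRENCY));
    let mut tasks = Vec::new();
    for event in events {
        let permits = Arc::clone(&permits);
        tasks.push(tokio::spawn(async move {
            // Each task holds one permit for the duration of its send.
            let _permit = permits.acquire_owned().await.expect("semaphore closed");
            send_event(event).await;
        }));
    }
    for task in tasks {
        let _ = task.await;
    }
}
```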

View File

@@ -15,5 +15,5 @@
pub const DEFAULT_EXT: &str = ".unknown"; // Default file extension
pub const COMPRESS_EXT: &str = ".snappy"; // Extension for compressed files
/// NOTIFY_STORE_EXTENSION - file extension of an event file in store
pub const NOTIFY_STORE_EXTENSION: &str = ".event";
/// STORE_EXTENSION - file extension of an event file in store
pub const STORE_EXTENSION: &str = ".event";

View File

@@ -30,7 +30,7 @@ workspace = true
[dependencies]
aes-gcm = { workspace = true, optional = true }
argon2 = { workspace = true, optional = true }
argon2 = { workspace = true, features = ["std"], optional = true }
cfg-if = { workspace = true }
chacha20poly1305 = { workspace = true, optional = true }
jsonwebtoken = { workspace = true }

View File

@@ -25,7 +25,6 @@ workspace = true
[dependencies]
rustfs-ecstore.workspace = true
rustfs-common.workspace = true
flatbuffers.workspace = true
futures.workspace = true
rustfs-lock.workspace = true
@@ -50,4 +49,4 @@ uuid = { workspace = true }
base64 = { workspace = true }
rand = { workspace = true }
chrono = { workspace = true }
md5 = { workspace = true }
md5 = { workspace = true }

View File

@@ -327,8 +327,7 @@ pub async fn execute_awscurl(
if !output.status.success() {
let stderr = String::from_utf8_lossy(&output.stderr);
let stdout = String::from_utf8_lossy(&output.stdout);
return Err(format!("awscurl failed: stderr='{stderr}', stdout='{stdout}'").into());
return Err(format!("awscurl failed: {stderr}").into());
}
let response = String::from_utf8_lossy(&output.stdout).to_string();
@@ -353,13 +352,3 @@ pub async fn awscurl_get(
) -> Result<String, Box<dyn std::error::Error + Send + Sync>> {
execute_awscurl(url, "GET", None, access_key, secret_key).await
}
/// Helper function for PUT requests
pub async fn awscurl_put(
url: &str,
body: &str,
access_key: &str,
secret_key: &str,
) -> Result<String, Box<dyn std::error::Error + Send + Sync>> {
execute_awscurl(url, "PUT", Some(body), access_key, secret_key).await
}

View File

@@ -1,85 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! End-to-end test for Content-Encoding header handling
//!
//! Tests that the Content-Encoding header is correctly stored during PUT
//! and returned in GET/HEAD responses. This is important for clients that
//! upload pre-compressed content and rely on the header for decompression.
#[cfg(test)]
mod tests {
use crate::common::{RustFSTestEnvironment, init_logging};
use aws_sdk_s3::primitives::ByteStream;
use serial_test::serial;
use tracing::info;
/// Verify Content-Encoding header roundtrips through PUT, GET, and HEAD operations
#[tokio::test]
#[serial]
async fn test_content_encoding_roundtrip() {
init_logging();
info!("Starting Content-Encoding roundtrip test");
let mut env = RustFSTestEnvironment::new().await.expect("Failed to create test environment");
env.start_rustfs_server(vec![]).await.expect("Failed to start RustFS");
let client = env.create_s3_client();
let bucket = "content-encoding-test";
let key = "logs/app.log.zst";
let content = b"2024-01-15 10:23:45 INFO Application started\n2024-01-15 10:23:46 DEBUG Loading config\n";
client
.create_bucket()
.bucket(bucket)
.send()
.await
.expect("Failed to create bucket");
info!("Uploading object with Content-Encoding: zstd");
client
.put_object()
.bucket(bucket)
.key(key)
.content_type("text/plain")
.content_encoding("zstd")
.body(ByteStream::from_static(content))
.send()
.await
.expect("PUT failed");
info!("Verifying GET response includes Content-Encoding");
let get_resp = client.get_object().bucket(bucket).key(key).send().await.expect("GET failed");
assert_eq!(get_resp.content_encoding(), Some("zstd"), "GET should return Content-Encoding: zstd");
assert_eq!(get_resp.content_type(), Some("text/plain"), "GET should return correct Content-Type");
let body = get_resp.body.collect().await.unwrap().into_bytes();
assert_eq!(body.as_ref(), content, "Body content mismatch");
info!("Verifying HEAD response includes Content-Encoding");
let head_resp = client
.head_object()
.bucket(bucket)
.key(key)
.send()
.await
.expect("HEAD failed");
assert_eq!(head_resp.content_encoding(), Some("zstd"), "HEAD should return Content-Encoding: zstd");
assert_eq!(head_resp.content_type(), Some("text/plain"), "HEAD should return correct Content-Type");
env.stop_server();
}
}

View File

@@ -1,73 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use aws_sdk_s3::primitives::ByteStream;
use rustfs_common::data_usage::DataUsageInfo;
use serial_test::serial;
use crate::common::{RustFSTestEnvironment, TEST_BUCKET, awscurl_get, init_logging};
/// Regression test for data usage accuracy (issue #1012).
/// Launches rustfs, writes 1000 objects, then asserts admin data usage reports the full count.
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server and requires awscurl; enable when running full E2E"]
async fn data_usage_reports_all_objects() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
init_logging();
let mut env = RustFSTestEnvironment::new().await?;
env.start_rustfs_server(vec![]).await?;
let client = env.create_s3_client();
// Create bucket and upload objects
client.create_bucket().bucket(TEST_BUCKET).send().await?;
for i in 0..1000 {
let key = format!("obj-{i:04}");
client
.put_object()
.bucket(TEST_BUCKET)
.key(key)
.body(ByteStream::from_static(b"hello-world"))
.send()
.await?;
}
// Query admin data usage API
let url = format!("{}/rustfs/admin/v3/datausageinfo", env.url);
let resp = awscurl_get(&url, &env.access_key, &env.secret_key).await?;
let usage: DataUsageInfo = serde_json::from_str(&resp)?;
// Assert total object count and per-bucket count are not truncated
let bucket_usage = usage
.buckets_usage
.get(TEST_BUCKET)
.cloned()
.expect("bucket usage should exist");
assert!(
usage.objects_total_count >= 1000,
"total object count should be at least 1000, got {}",
usage.objects_total_count
);
assert!(
bucket_usage.objects_count >= 1000,
"bucket object count should be at least 1000, got {}",
bucket_usage.objects_count
);
env.stop_server();
Ok(())
}

View File

@@ -18,10 +18,6 @@ mod reliant;
#[cfg(test)]
pub mod common;
// Data usage regression tests
#[cfg(test)]
mod data_usage_test;
// KMS-specific test modules
#[cfg(test)]
mod kms;
@@ -29,11 +25,3 @@ mod kms;
// Special characters in path test modules
#[cfg(test)]
mod special_chars_test;
// Content-Encoding header preservation test
#[cfg(test)]
mod content_encoding_test;
// Policy variables tests
#[cfg(test)]
mod policy;

View File

@@ -1,39 +0,0 @@
# RustFS Policy Variables Tests
This directory contains comprehensive end-to-end tests for AWS IAM policy variables in RustFS.
## Test Overview
The tests cover the following AWS policy variable scenarios:
1. **Single-value variables** - Basic variable resolution like `${aws:username}`
2. **Multi-value variables** - Variables that can have multiple values
3. **Variable concatenation** - Combining variables with static text like `prefix-${aws:username}-suffix`
4. **Nested variables** - Complex nested variable patterns like `${${aws:username}-test}`
5. **Deny scenarios** - Testing deny policies with variables
## Prerequisites
- RustFS server binary
- `awscurl` utility for admin API calls
- AWS SDK for Rust (included in the project)
## Running Tests
### Run All Policy Tests Using Unified Test Runner
```bash
# Run all policy tests with comprehensive reporting
# Note: Requires a RustFS server running on localhost:9000
cargo test -p e2e_test policy::test_runner::test_policy_full_suite -- --nocapture --ignored --test-threads=1
# Run only critical policy tests
cargo test -p e2e_test policy::test_runner::test_policy_critical_suite -- --nocapture --ignored --test-threads=1
```
### Run All Policy Tests
```bash
# From the project root directory
cargo test -p e2e_test policy:: -- --nocapture --ignored --test-threads=1
```

View File

@@ -1,798 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Tests for AWS IAM policy variables with single-value, multi-value, and nested scenarios
use crate::common::{awscurl_put, init_logging};
use crate::policy::test_env::PolicyTestEnvironment;
use aws_sdk_s3::primitives::ByteStream;
use serial_test::serial;
use tracing::info;
/// Helper function to create a regular user with given credentials
async fn create_user(
env: &PolicyTestEnvironment,
username: &str,
password: &str,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let create_user_body = serde_json::json!({
"secretKey": password,
"status": "enabled"
})
.to_string();
let create_user_url = format!("{}/rustfs/admin/v3/add-user?accessKey={}", env.url, username);
awscurl_put(&create_user_url, &create_user_body, &env.access_key, &env.secret_key).await?;
Ok(())
}
/// Helper function to create an STS user with given credentials
async fn create_sts_user(
env: &PolicyTestEnvironment,
username: &str,
password: &str,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// For STS, we create a regular user first, then use it to assume roles
create_user(env, username, password).await?;
Ok(())
}
/// Helper function to create and attach a policy
async fn create_and_attach_policy(
env: &PolicyTestEnvironment,
policy_name: &str,
username: &str,
policy_document: serde_json::Value,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let policy_string = policy_document.to_string();
// Create policy
let add_policy_url = format!("{}/rustfs/admin/v3/add-canned-policy?name={}", env.url, policy_name);
awscurl_put(&add_policy_url, &policy_string, &env.access_key, &env.secret_key).await?;
// Attach policy to user
let attach_policy_url = format!(
"{}/rustfs/admin/v3/set-user-or-group-policy?policyName={}&userOrGroup={}&isGroup=false",
env.url, policy_name, username
);
awscurl_put(&attach_policy_url, "", &env.access_key, &env.secret_key).await?;
Ok(())
}
/// Helper function to clean up test resources
async fn cleanup_user_and_policy(env: &PolicyTestEnvironment, username: &str, policy_name: &str) {
// Create admin client for cleanup
let admin_client = env.create_s3_client(&env.access_key, &env.secret_key);
// Delete buckets that might have been created by this user
let bucket_patterns = [
format!("{username}-test-bucket"),
format!("{username}-bucket1"),
format!("{username}-bucket2"),
format!("{username}-bucket3"),
format!("prefix-{username}-suffix"),
format!("{username}-test"),
format!("{username}-sts-bucket"),
format!("{username}-service-bucket"),
"private-test-bucket".to_string(), // For deny test
];
// Try to delete objects and buckets
for bucket_name in &bucket_patterns {
let _ = admin_client
.delete_object()
.bucket(bucket_name)
.key("test-object.txt")
.send()
.await;
let _ = admin_client
.delete_object()
.bucket(bucket_name)
.key("test-sts-object.txt")
.send()
.await;
let _ = admin_client
.delete_object()
.bucket(bucket_name)
.key("test-service-object.txt")
.send()
.await;
let _ = admin_client.delete_bucket().bucket(bucket_name).send().await;
}
// Remove user
let remove_user_url = format!("{}/rustfs/admin/v3/remove-user?accessKey={}", env.url, username);
let _ = awscurl_put(&remove_user_url, "", &env.access_key, &env.secret_key).await;
// Remove policy
let remove_policy_url = format!("{}/rustfs/admin/v3/remove-canned-policy?name={}", env.url, policy_name);
let _ = awscurl_put(&remove_policy_url, "", &env.access_key, &env.secret_key).await;
}
/// Test AWS policy variables with single-value scenarios
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_single_value() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
test_aws_policy_variables_single_value_impl().await
}
/// Implementation function for single-value policy variables test
pub async fn test_aws_policy_variables_single_value_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
init_logging();
info!("Starting AWS policy variables single-value test");
let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;
test_aws_policy_variables_single_value_impl_with_env(&env).await
}
/// Implementation function for single-value policy variables test with shared environment
pub async fn test_aws_policy_variables_single_value_impl_with_env(
env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Create test user
let test_user = "testuser1";
let test_password = "testpassword123";
let policy_name = "test-single-value-policy";
// Create cleanup function
let cleanup = || async {
cleanup_user_and_policy(env, test_user, policy_name).await;
};
let create_user_body = serde_json::json!({
"secretKey": test_password,
"status": "enabled"
})
.to_string();
let create_user_url = format!("{}/rustfs/admin/v3/add-user?accessKey={}", env.url, test_user);
awscurl_put(&create_user_url, &create_user_body, &env.access_key, &env.secret_key).await?;
// Create policy with single-value AWS variables
let policy_document = serde_json::json!({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListAllMyBuckets"],
"Resource": ["arn:aws:s3:::*"]
},
{
"Effect": "Allow",
"Action": ["s3:CreateBucket"],
"Resource": [format!("arn:aws:s3:::{}-*", "${aws:username}")]
},
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": [format!("arn:aws:s3:::{}-*", "${aws:username}")]
},
{
"Effect": "Allow",
"Action": ["s3:PutObject", "s3:GetObject"],
"Resource": [format!("arn:aws:s3:::{}-*/*", "${aws:username}")]
}
]
})
.to_string();
let add_policy_url = format!("{}/rustfs/admin/v3/add-canned-policy?name={}", env.url, policy_name);
awscurl_put(&add_policy_url, &policy_document, &env.access_key, &env.secret_key).await?;
// Attach policy to user
let attach_policy_url = format!(
"{}/rustfs/admin/v3/set-user-or-group-policy?policyName={}&userOrGroup={}&isGroup=false",
env.url, policy_name, test_user
);
awscurl_put(&attach_policy_url, "", &env.access_key, &env.secret_key).await?;
// Create S3 client for test user
let test_client = env.create_s3_client(test_user, test_password);
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
// Test 1: User should be able to list buckets (allowed by policy)
info!("Test 1: User listing buckets");
let list_result = test_client.list_buckets().send().await;
if let Err(e) = list_result {
cleanup().await;
return Err(format!("User should be able to list buckets: {e}").into());
}
// Test 2: User should be able to create bucket matching username pattern
info!("Test 2: User creating bucket matching pattern");
let bucket_name = format!("{test_user}-test-bucket");
let create_result = test_client.create_bucket().bucket(&bucket_name).send().await;
if let Err(e) = create_result {
cleanup().await;
return Err(format!("User should be able to create bucket matching username pattern: {e}").into());
}
// Test 3: User should be able to list objects in their own bucket
info!("Test 3: User listing objects in their bucket");
let list_objects_result = test_client.list_objects_v2().bucket(&bucket_name).send().await;
if let Err(e) = list_objects_result {
cleanup().await;
return Err(format!("User should be able to list objects in their own bucket: {e}").into());
}
// Test 4: User should be able to put object in their own bucket
info!("Test 4: User putting object in their bucket");
let put_result = test_client
.put_object()
.bucket(&bucket_name)
.key("test-object.txt")
.body(ByteStream::from_static(b"Hello, Policy Variables!"))
.send()
.await;
if let Err(e) = put_result {
cleanup().await;
return Err(format!("User should be able to put object in their own bucket: {e}").into());
}
// Test 5: User should be able to get object from their own bucket
info!("Test 5: User getting object from their bucket");
let get_result = test_client
.get_object()
.bucket(&bucket_name)
.key("test-object.txt")
.send()
.await;
if let Err(e) = get_result {
cleanup().await;
return Err(format!("User should be able to get object from their own bucket: {e}").into());
}
// Test 6: User should NOT be able to create bucket NOT matching username pattern
info!("Test 6: User attempting to create bucket NOT matching pattern");
let other_bucket_name = "other-user-bucket";
let create_other_result = test_client.create_bucket().bucket(other_bucket_name).send().await;
if create_other_result.is_ok() {
cleanup().await;
return Err("User should NOT be able to create bucket NOT matching username pattern".into());
}
// Cleanup
info!("Cleaning up test resources");
cleanup().await;
info!("AWS policy variables single-value test completed successfully");
Ok(())
}
/// Test AWS policy variables with multi-value scenarios
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_multi_value() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
test_aws_policy_variables_multi_value_impl().await
}
/// Implementation function for multi-value policy variables test
pub async fn test_aws_policy_variables_multi_value_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
init_logging();
info!("Starting AWS policy variables multi-value test");
let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;
test_aws_policy_variables_multi_value_impl_with_env(&env).await
}
/// Implementation function for multi-value policy variables test with shared environment
pub async fn test_aws_policy_variables_multi_value_impl_with_env(
env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Create test user
let test_user = "testuser2";
let test_password = "testpassword123";
let policy_name = "test-multi-value-policy";
// Create cleanup function
let cleanup = || async {
cleanup_user_and_policy(env, test_user, policy_name).await;
};
// Create user
create_user(env, test_user, test_password).await?;
// Create policy with multi-value AWS variables
let policy_document = serde_json::json!({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListAllMyBuckets"],
"Resource": ["arn:aws:s3:::*"]
},
{
"Effect": "Allow",
"Action": ["s3:CreateBucket"],
"Resource": [
format!("arn:aws:s3:::{}-bucket1", "${aws:username}"),
format!("arn:aws:s3:::{}-bucket2", "${aws:username}"),
format!("arn:aws:s3:::{}-bucket3", "${aws:username}")
]
},
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": [
format!("arn:aws:s3:::{}-bucket1", "${aws:username}"),
format!("arn:aws:s3:::{}-bucket2", "${aws:username}"),
format!("arn:aws:s3:::{}-bucket3", "${aws:username}")
]
}
]
});
create_and_attach_policy(env, policy_name, test_user, policy_document).await?;
// Create S3 client for test user
let test_client = env.create_s3_client(test_user, test_password);
// Test 1: User should be able to create buckets matching any of the multi-value patterns
info!("Test 1: User creating first bucket matching multi-value pattern");
let bucket1_name = format!("{test_user}-bucket1");
let create_result1 = test_client.create_bucket().bucket(&bucket1_name).send().await;
if let Err(e) = create_result1 {
cleanup().await;
return Err(format!("User should be able to create first bucket matching multi-value pattern: {e}").into());
}
info!("Test 2: User creating second bucket matching multi-value pattern");
let bucket2_name = format!("{test_user}-bucket2");
let create_result2 = test_client.create_bucket().bucket(&bucket2_name).send().await;
if let Err(e) = create_result2 {
cleanup().await;
return Err(format!("User should be able to create second bucket matching multi-value pattern: {e}").into());
}
info!("Test 3: User creating third bucket matching multi-value pattern");
let bucket3_name = format!("{test_user}-bucket3");
let create_result3 = test_client.create_bucket().bucket(&bucket3_name).send().await;
if let Err(e) = create_result3 {
cleanup().await;
return Err(format!("User should be able to create third bucket matching multi-value pattern: {e}").into());
}
// Test 4: User should NOT be able to create bucket NOT matching any multi-value pattern
info!("Test 4: User attempting to create bucket NOT matching any pattern");
let other_bucket_name = format!("{test_user}-other-bucket");
let create_other_result = test_client.create_bucket().bucket(&other_bucket_name).send().await;
if create_other_result.is_ok() {
cleanup().await;
return Err("User should NOT be able to create bucket NOT matching any multi-value pattern".into());
}
// Test 5: User should be able to list objects in their allowed buckets
info!("Test 5: User listing objects in allowed buckets");
let list_objects_result1 = test_client.list_objects_v2().bucket(&bucket1_name).send().await;
if let Err(e) = list_objects_result1 {
cleanup().await;
return Err(format!("User should be able to list objects in first allowed bucket: {e}").into());
}
let list_objects_result2 = test_client.list_objects_v2().bucket(&bucket2_name).send().await;
if let Err(e) = list_objects_result2 {
cleanup().await;
return Err(format!("User should be able to list objects in second allowed bucket: {e}").into());
}
// Cleanup
info!("Cleaning up test resources");
cleanup().await;
info!("AWS policy variables multi-value test completed successfully");
Ok(())
}
/// Test AWS policy variables with variable concatenation
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_concatenation() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
test_aws_policy_variables_concatenation_impl().await
}
/// Implementation function for concatenation policy variables test
pub async fn test_aws_policy_variables_concatenation_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
init_logging();
info!("Starting AWS policy variables concatenation test");
let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;
test_aws_policy_variables_concatenation_impl_with_env(&env).await
}
/// Implementation function for concatenation policy variables test with shared environment
pub async fn test_aws_policy_variables_concatenation_impl_with_env(
env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Create test user
let test_user = "testuser3";
let test_password = "testpassword123";
let policy_name = "test-concatenation-policy";
// Create cleanup function
let cleanup = || async {
cleanup_user_and_policy(env, test_user, policy_name).await;
};
// Create user
create_user(env, test_user, test_password).await?;
// Create policy with variable concatenation
let policy_document = serde_json::json!({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListAllMyBuckets"],
"Resource": ["arn:aws:s3:::*"]
},
{
"Effect": "Allow",
"Action": ["s3:CreateBucket"],
"Resource": [format!("arn:aws:s3:::prefix-{}-suffix", "${aws:username}")]
},
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": [format!("arn:aws:s3:::prefix-{}-suffix", "${aws:username}")]
}
]
});
create_and_attach_policy(env, policy_name, test_user, policy_document).await?;
// Create S3 client for test user
let test_client = env.create_s3_client(test_user, test_password);
// Add a small delay to allow policy to propagate
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
// Test: User should be able to create bucket matching concatenated pattern
info!("Test: User creating bucket matching concatenated pattern");
let bucket_name = format!("prefix-{test_user}-suffix");
let create_result = test_client.create_bucket().bucket(&bucket_name).send().await;
if let Err(e) = create_result {
cleanup().await;
return Err(format!("User should be able to create bucket matching concatenated pattern: {e}").into());
}
// Test: User should be able to list objects in the concatenated pattern bucket
info!("Test: User listing objects in concatenated pattern bucket");
let list_objects_result = test_client.list_objects_v2().bucket(&bucket_name).send().await;
if let Err(e) = list_objects_result {
cleanup().await;
return Err(format!("User should be able to list objects in concatenated pattern bucket: {e}").into());
}
// Cleanup
info!("Cleaning up test resources");
cleanup().await;
info!("AWS policy variables concatenation test completed successfully");
Ok(())
}
/// Test AWS policy variables with nested scenarios
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_nested() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
test_aws_policy_variables_nested_impl().await
}
/// Implementation function for nested policy variables test
pub async fn test_aws_policy_variables_nested_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
init_logging();
info!("Starting AWS policy variables nested test");
let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;
test_aws_policy_variables_nested_impl_with_env(&env).await
}
/// Test AWS policy variables with STS temporary credentials
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_sts() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
test_aws_policy_variables_sts_impl().await
}
/// Implementation function for STS policy variables test
pub async fn test_aws_policy_variables_sts_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
init_logging();
info!("Starting AWS policy variables STS test");
let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;
test_aws_policy_variables_sts_impl_with_env(&env).await
}
/// Implementation function for nested policy variables test with shared environment
pub async fn test_aws_policy_variables_nested_impl_with_env(
env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Create test user
let test_user = "testuser4";
let test_password = "testpassword123";
let policy_name = "test-nested-policy";
// Create cleanup function
let cleanup = || async {
cleanup_user_and_policy(env, test_user, policy_name).await;
};
// Create user
create_user(env, test_user, test_password).await?;
// Create policy with nested variables - this tests complex variable resolution
let policy_document = serde_json::json!({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListAllMyBuckets"],
"Resource": ["arn:aws:s3:::*"]
},
{
"Effect": "Allow",
"Action": ["s3:CreateBucket"],
"Resource": ["arn:aws:s3:::${${aws:username}-test}"]
},
{
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": ["arn:aws:s3:::${${aws:username}-test}"]
}
]
});
create_and_attach_policy(env, policy_name, test_user, policy_document).await?;
// Create S3 client for test user
let test_client = env.create_s3_client(test_user, test_password);
// Add a small delay to allow policy to propagate
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
// Test nested variable resolution
info!("Test: Nested variable resolution");
// Create bucket with expected resolved name
let expected_bucket = format!("{test_user}-test");
// Attempt to create bucket with resolved name
let create_result = test_client.create_bucket().bucket(&expected_bucket).send().await;
// Verify bucket creation succeeds (nested variable resolved correctly)
if let Err(e) = create_result {
cleanup().await;
return Err(format!("User should be able to create bucket with nested variable: {e}").into());
}
// Verify bucket creation fails with unresolved variable
let unresolved_bucket = format!("${{}}-test {test_user}");
let create_unresolved = test_client.create_bucket().bucket(&unresolved_bucket).send().await;
if create_unresolved.is_ok() {
cleanup().await;
return Err("User should NOT be able to create bucket with unresolved variable".into());
}
// Cleanup
info!("Cleaning up test resources");
cleanup().await;
info!("AWS policy variables nested test completed successfully");
Ok(())
}
/// Implementation function for STS policy variables test with shared environment
pub async fn test_aws_policy_variables_sts_impl_with_env(
env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Create test user for STS
let test_user = "testuser-sts";
let test_password = "testpassword123";
let policy_name = "test-sts-policy";
// Create cleanup function
let cleanup = || async {
cleanup_user_and_policy(env, test_user, policy_name).await;
};
// Create STS user
create_sts_user(env, test_user, test_password).await?;
// Create policy with STS-compatible variables
let policy_document = serde_json::json!({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": ["s3:ListAllMyBuckets"],
"Resource": ["arn:aws:s3:::*"]
},
{
"Effect": "Allow",
"Action": ["s3:CreateBucket"],
"Resource": [format!("arn:aws:s3:::{}-sts-bucket", "${aws:username}")]
},
{
"Effect": "Allow",
"Action": ["s3:ListBucket", "s3:PutObject", "s3:GetObject"],
"Resource": [format!("arn:aws:s3:::{}-sts-bucket/*", "${aws:username}")]
}
]
});
create_and_attach_policy(env, policy_name, test_user, policy_document).await?;
// Create S3 client for test user
let test_client = env.create_s3_client(test_user, test_password);
// Add a small delay to allow policy to propagate
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
// Test: User should be able to create bucket matching STS pattern
info!("Test: User creating bucket matching STS pattern");
let bucket_name = format!("{test_user}-sts-bucket");
let create_result = test_client.create_bucket().bucket(&bucket_name).send().await;
if let Err(e) = create_result {
cleanup().await;
return Err(format!("User should be able to create STS bucket: {e}").into());
}
// Test: User should be able to put object in STS bucket
info!("Test: User putting object in STS bucket");
let put_result = test_client
.put_object()
.bucket(&bucket_name)
.key("test-sts-object.txt")
.body(ByteStream::from_static(b"STS Test Object"))
.send()
.await;
if let Err(e) = put_result {
cleanup().await;
return Err(format!("User should be able to put object in STS bucket: {e}").into());
}
// Test: User should be able to get object from STS bucket
info!("Test: User getting object from STS bucket");
let get_result = test_client
.get_object()
.bucket(&bucket_name)
.key("test-sts-object.txt")
.send()
.await;
if let Err(e) = get_result {
cleanup().await;
return Err(format!("User should be able to get object from STS bucket: {e}").into());
}
// Test: User should be able to list objects in STS bucket
info!("Test: User listing objects in STS bucket");
let list_result = test_client.list_objects_v2().bucket(&bucket_name).send().await;
if let Err(e) = list_result {
cleanup().await;
return Err(format!("User should be able to list objects in STS bucket: {e}").into());
}
// Cleanup
info!("Cleaning up test resources");
cleanup().await;
info!("AWS policy variables STS test completed successfully");
Ok(())
}
/// Test AWS policy variables with deny scenarios
#[tokio::test(flavor = "multi_thread")]
#[serial]
#[ignore = "Starts a rustfs server; enable when running full E2E"]
pub async fn test_aws_policy_variables_deny() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
test_aws_policy_variables_deny_impl().await
}
/// Implementation function for deny policy variables test
pub async fn test_aws_policy_variables_deny_impl() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
init_logging();
info!("Starting AWS policy variables deny test");
let env = PolicyTestEnvironment::with_address("127.0.0.1:9000").await?;
test_aws_policy_variables_deny_impl_with_env(&env).await
}
/// Implementation function for deny policy variables test with shared environment
pub async fn test_aws_policy_variables_deny_impl_with_env(
env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
// Create test user
let test_user = "testuser5";
let test_password = "testpassword123";
let policy_name = "test-deny-policy";
// Create cleanup function
let cleanup = || async {
cleanup_user_and_policy(env, test_user, policy_name).await;
};
// Create user
create_user(env, test_user, test_password).await?;
// Create policy with both allow and deny statements
let policy_document = serde_json::json!({
"Version": "2012-10-17",
"Statement": [
// Allow general access
{
"Effect": "Allow",
"Action": ["s3:ListAllMyBuckets"],
"Resource": ["arn:aws:s3:::*"]
},
// Allow creating buckets matching username pattern
{
"Effect": "Allow",
"Action": ["s3:CreateBucket"],
"Resource": [format!("arn:aws:s3:::{}-*", "${aws:username}")]
},
// Deny creating buckets with "private" in the name
{
"Effect": "Deny",
"Action": ["s3:CreateBucket"],
"Resource": ["arn:aws:s3:::*private*"]
}
]
});
create_and_attach_policy(env, policy_name, test_user, policy_document).await?;
// Create S3 client for test user
let test_client = env.create_s3_client(test_user, test_password);
// Add a small delay to allow policy to propagate
tokio::time::sleep(std::time::Duration::from_millis(500)).await;
// Test 1: User should be able to create bucket matching username pattern
info!("Test 1: User creating bucket matching username pattern");
let bucket_name = format!("{test_user}-test-bucket");
let create_result = test_client.create_bucket().bucket(&bucket_name).send().await;
if let Err(e) = create_result {
cleanup().await;
return Err(format!("User should be able to create bucket matching username pattern: {e}").into());
}
// Test 2: User should NOT be able to create bucket with "private" in the name (deny rule)
info!("Test 2: User attempting to create bucket with 'private' in name (should be denied)");
let private_bucket_name = "private-test-bucket";
let create_private_result = test_client.create_bucket().bucket(private_bucket_name).send().await;
if create_private_result.is_ok() {
cleanup().await;
return Err("User should NOT be able to create bucket with 'private' in name due to deny rule".into());
}
// Cleanup
info!("Cleaning up test resources");
cleanup().await;
info!("AWS policy variables deny test completed successfully");
Ok(())
}

View File

@@ -1,100 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Custom test environment for policy variables tests
//!
//! This module provides a custom test environment that does not automatically
//! stop servers when it is dropped, so tests that share a running RustFS server do not tear it down between cases.
use aws_sdk_s3::Client;
use aws_sdk_s3::config::{Config, Credentials, Region};
use std::net::TcpStream;
use std::time::Duration;
use tokio::time::sleep;
use tracing::{info, warn};
// Default credentials
const DEFAULT_ACCESS_KEY: &str = "rustfsadmin";
const DEFAULT_SECRET_KEY: &str = "rustfsadmin";
/// Custom test environment that doesn't automatically stop servers
pub struct PolicyTestEnvironment {
pub temp_dir: String,
pub address: String,
pub url: String,
pub access_key: String,
pub secret_key: String,
}
impl PolicyTestEnvironment {
/// Create a new test environment with specific address
/// This environment won't stop any server when dropped
pub async fn with_address(address: &str) -> Result<Self, Box<dyn std::error::Error + Send + Sync>> {
let temp_dir = format!("/tmp/rustfs_policy_test_{}", uuid::Uuid::new_v4());
tokio::fs::create_dir_all(&temp_dir).await?;
let url = format!("http://{address}");
Ok(Self {
temp_dir,
address: address.to_string(),
url,
access_key: DEFAULT_ACCESS_KEY.to_string(),
secret_key: DEFAULT_SECRET_KEY.to_string(),
})
}
/// Create an AWS S3 client configured for this RustFS instance
pub fn create_s3_client(&self, access_key: &str, secret_key: &str) -> Client {
let credentials = Credentials::new(access_key, secret_key, None, None, "policy-test");
let config = Config::builder()
.credentials_provider(credentials)
.region(Region::new("us-east-1"))
.endpoint_url(&self.url)
.force_path_style(true)
.behavior_version_latest()
.build();
Client::from_conf(config)
}
/// Wait for RustFS server to be ready by checking TCP connectivity
pub async fn wait_for_server_ready(&self) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
info!("Waiting for RustFS server to be ready on {}", self.address);
for i in 0..30 {
if TcpStream::connect(&self.address).is_ok() {
info!("✅ RustFS server is ready after {} attempts", i + 1);
return Ok(());
}
if i == 29 {
return Err("RustFS server failed to become ready within 30 seconds".into());
}
sleep(Duration::from_secs(1)).await;
}
Ok(())
}
}
// Implement Drop trait that doesn't stop servers
impl Drop for PolicyTestEnvironment {
fn drop(&mut self) {
// Clean up temp directory only, don't stop any server
if let Err(e) = std::fs::remove_dir_all(&self.temp_dir) {
warn!("Failed to clean up temp directory {}: {}", self.temp_dir, e);
}
}
}

View File

@@ -1,247 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::common::init_logging;
use crate::policy::test_env::PolicyTestEnvironment;
use serial_test::serial;
use std::time::Instant;
use tokio::time::{Duration, sleep};
use tracing::{error, info};
/// Core test categories
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum TestCategory {
SingleValue,
MultiValue,
Concatenation,
Nested,
DenyScenarios,
}
impl TestCategory {}
/// Test case definition
#[derive(Debug, Clone)]
pub struct TestDefinition {
pub name: String,
#[allow(dead_code)]
pub category: TestCategory,
pub is_critical: bool,
}
impl TestDefinition {
pub fn new(name: impl Into<String>, category: TestCategory, is_critical: bool) -> Self {
Self {
name: name.into(),
category,
is_critical,
}
}
}
/// Test result
#[derive(Debug, Clone)]
pub struct TestResult {
pub test_name: String,
pub success: bool,
pub error_message: Option<String>,
}
impl TestResult {
pub fn success(test_name: String) -> Self {
Self {
test_name,
success: true,
error_message: None,
}
}
pub fn failure(test_name: String, error: String) -> Self {
Self {
test_name,
success: false,
error_message: Some(error),
}
}
}
/// Test suite configuration
#[derive(Debug, Clone, Default)]
pub struct TestSuiteConfig {
pub include_critical_only: bool,
}
/// Policy test suite
pub struct PolicyTestSuite {
tests: Vec<TestDefinition>,
config: TestSuiteConfig,
}
impl PolicyTestSuite {
/// Create default test suite
pub fn new() -> Self {
let tests = vec![
TestDefinition::new("test_aws_policy_variables_single_value", TestCategory::SingleValue, true),
TestDefinition::new("test_aws_policy_variables_multi_value", TestCategory::MultiValue, true),
TestDefinition::new("test_aws_policy_variables_concatenation", TestCategory::Concatenation, true),
TestDefinition::new("test_aws_policy_variables_nested", TestCategory::Nested, true),
TestDefinition::new("test_aws_policy_variables_deny", TestCategory::DenyScenarios, true),
TestDefinition::new("test_aws_policy_variables_sts", TestCategory::SingleValue, true),
];
Self {
tests,
config: TestSuiteConfig::default(),
}
}
/// Configure test suite
pub fn with_config(mut self, config: TestSuiteConfig) -> Self {
self.config = config;
self
}
/// Run test suite
pub async fn run_test_suite(&self) -> Vec<TestResult> {
init_logging();
info!("Starting Policy Variables test suite");
let start_time = Instant::now();
let mut results = Vec::new();
// Create test environment
let env = match PolicyTestEnvironment::with_address("127.0.0.1:9000").await {
Ok(env) => env,
Err(e) => {
error!("Failed to create test environment: {}", e);
return vec![TestResult::failure("env_creation".into(), e.to_string())];
}
};
// Wait for server to be ready
if env.wait_for_server_ready().await.is_err() {
error!("Server is not ready");
return vec![TestResult::failure("server_check".into(), "Server not ready".into())];
}
// Filter tests
let tests_to_run: Vec<&TestDefinition> = self
.tests
.iter()
.filter(|test| !self.config.include_critical_only || test.is_critical)
.collect();
info!("Scheduled {} tests", tests_to_run.len());
// Run tests
for (i, test_def) in tests_to_run.iter().enumerate() {
info!("Running test {}/{}: {}", i + 1, tests_to_run.len(), test_def.name);
let test_start = Instant::now();
let result = self.run_single_test(test_def, &env).await;
let test_duration = test_start.elapsed();
match result {
Ok(_) => {
info!("Test passed: {} ({:.2}s)", test_def.name, test_duration.as_secs_f64());
results.push(TestResult::success(test_def.name.clone()));
}
Err(e) => {
error!("Test failed: {} ({:.2}s): {}", test_def.name, test_duration.as_secs_f64(), e);
results.push(TestResult::failure(test_def.name.clone(), e.to_string()));
}
}
// Delay between tests to avoid resource conflicts
if i < tests_to_run.len() - 1 {
sleep(Duration::from_secs(2)).await;
}
}
// Print summary
self.print_summary(&results, start_time.elapsed());
results
}
/// Run a single test
async fn run_single_test(
&self,
test_def: &TestDefinition,
env: &PolicyTestEnvironment,
) -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
match test_def.name.as_str() {
"test_aws_policy_variables_single_value" => {
super::policy_variables_test::test_aws_policy_variables_single_value_impl_with_env(env).await
}
"test_aws_policy_variables_multi_value" => {
super::policy_variables_test::test_aws_policy_variables_multi_value_impl_with_env(env).await
}
"test_aws_policy_variables_concatenation" => {
super::policy_variables_test::test_aws_policy_variables_concatenation_impl_with_env(env).await
}
"test_aws_policy_variables_nested" => {
super::policy_variables_test::test_aws_policy_variables_nested_impl_with_env(env).await
}
"test_aws_policy_variables_deny" => {
super::policy_variables_test::test_aws_policy_variables_deny_impl_with_env(env).await
}
"test_aws_policy_variables_sts" => {
super::policy_variables_test::test_aws_policy_variables_sts_impl_with_env(env).await
}
_ => Err(format!("Test {} not implemented", test_def.name).into()),
}
}
/// Print test summary
fn print_summary(&self, results: &[TestResult], total_duration: Duration) {
info!("=== Test Suite Summary ===");
info!("Total duration: {:.2}s", total_duration.as_secs_f64());
info!("Total tests: {}", results.len());
let passed = results.iter().filter(|r| r.success).count();
let failed = results.len() - passed;
let success_rate = (passed as f64 / results.len() as f64) * 100.0;
info!("Passed: {} | Failed: {}", passed, failed);
info!("Success rate: {:.1}%", success_rate);
if failed > 0 {
error!("Failed tests:");
for result in results.iter().filter(|r| !r.success) {
error!(" - {}: {}", result.test_name, result.error_message.as_ref().unwrap());
}
}
}
}
/// Runs the critical policy-variables test suite against an already running RustFS server
#[tokio::test]
#[serial]
#[ignore = "Connects to existing rustfs server"]
async fn test_policy_critical_suite() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
let config = TestSuiteConfig {
include_critical_only: true,
};
let suite = PolicyTestSuite::new().with_config(config);
let results = suite.run_test_suite().await;
let failed = results.iter().filter(|r| !r.success).count();
if failed > 0 {
return Err(format!("Critical tests failed: {failed} failures").into());
}
info!("All critical tests passed");
Ok(())
}

View File

@@ -127,12 +127,12 @@ async fn test_get_deleted_object_returns_nosuchkey() -> Result<(), Box<dyn std::
info!("Service error code: {:?}", s3_err.meta().code());
// The error should be NoSuchKey
assert!(s3_err.is_no_such_key(), "Error should be NoSuchKey, got: {s3_err:?}");
assert!(s3_err.is_no_such_key(), "Error should be NoSuchKey, got: {:?}", s3_err);
info!("✅ Test passed: GetObject on deleted object correctly returns NoSuchKey");
}
other_err => {
panic!("Expected ServiceError with NoSuchKey, but got: {other_err:?}");
panic!("Expected ServiceError with NoSuchKey, but got: {:?}", other_err);
}
}
@@ -182,12 +182,13 @@ async fn test_head_deleted_object_returns_nosuchkey() -> Result<(), Box<dyn std:
let s3_err = service_err.into_err();
assert!(
s3_err.meta().code() == Some("NoSuchKey") || s3_err.meta().code() == Some("NotFound"),
"Error should be NoSuchKey or NotFound, got: {s3_err:?}"
"Error should be NoSuchKey or NotFound, got: {:?}",
s3_err
);
info!("✅ HeadObject correctly returns NoSuchKey/NotFound");
}
other_err => {
panic!("Expected ServiceError but got: {other_err:?}");
panic!("Expected ServiceError but got: {:?}", other_err);
}
}
@@ -219,11 +220,11 @@ async fn test_get_nonexistent_object_returns_nosuchkey() -> Result<(), Box<dyn s
match get_result.unwrap_err() {
SdkError::ServiceError(service_err) => {
let s3_err = service_err.into_err();
assert!(s3_err.is_no_such_key(), "Error should be NoSuchKey, got: {s3_err:?}");
assert!(s3_err.is_no_such_key(), "Error should be NoSuchKey, got: {:?}", s3_err);
info!("✅ GetObject correctly returns NoSuchKey for non-existent object");
}
other_err => {
panic!("Expected ServiceError with NoSuchKey, but got: {other_err:?}");
panic!("Expected ServiceError with NoSuchKey, but got: {:?}", other_err);
}
}
@@ -265,15 +266,15 @@ async fn test_multiple_gets_deleted_object() -> Result<(), Box<dyn std::error::E
info!("Attempt {} to get deleted object", i);
let get_result = client.get_object().bucket(BUCKET).key(key).send().await;
assert!(get_result.is_err(), "Attempt {i}: should return error");
assert!(get_result.is_err(), "Attempt {}: should return error", i);
match get_result.unwrap_err() {
SdkError::ServiceError(service_err) => {
let s3_err = service_err.into_err();
assert!(s3_err.is_no_such_key(), "Attempt {i}: Error should be NoSuchKey, got: {s3_err:?}");
assert!(s3_err.is_no_such_key(), "Attempt {}: Error should be NoSuchKey, got: {:?}", i, s3_err);
}
other_err => {
panic!("Attempt {i}: Expected ServiceError but got: {other_err:?}");
panic!("Attempt {}: Expected ServiceError but got: {:?}", i, other_err);
}
}
}

View File

@@ -1,138 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Test for HeadObject on deleted objects with versioning enabled
//!
//! This test reproduces the issue where a HeadObject request on a deleted object returns
//! 200 OK instead of 404 NoSuchKey when versioning is enabled.
#![cfg(test)]
use aws_config::meta::region::RegionProviderChain;
use aws_sdk_s3::Client;
use aws_sdk_s3::config::{Credentials, Region};
use aws_sdk_s3::error::SdkError;
use aws_sdk_s3::types::{BucketVersioningStatus, VersioningConfiguration};
use bytes::Bytes;
use serial_test::serial;
use std::error::Error;
use tracing::info;
const ENDPOINT: &str = "http://localhost:9000";
const ACCESS_KEY: &str = "rustfsadmin";
const SECRET_KEY: &str = "rustfsadmin";
const BUCKET: &str = "test-head-deleted-versioning-bucket";
async fn create_aws_s3_client() -> Result<Client, Box<dyn Error>> {
let region_provider = RegionProviderChain::default_provider().or_else(Region::new("us-east-1"));
let shared_config = aws_config::defaults(aws_config::BehaviorVersion::latest())
.region(region_provider)
.credentials_provider(Credentials::new(ACCESS_KEY, SECRET_KEY, None, None, "static"))
.endpoint_url(ENDPOINT)
.load()
.await;
let client = Client::from_conf(
aws_sdk_s3::Config::from(&shared_config)
.to_builder()
.force_path_style(true)
.build(),
);
Ok(client)
}
/// Setup test bucket, creating it if it doesn't exist, and enable versioning
async fn setup_test_bucket(client: &Client) -> Result<(), Box<dyn Error>> {
match client.create_bucket().bucket(BUCKET).send().await {
Ok(_) => {}
Err(SdkError::ServiceError(e)) => {
let e = e.into_err();
let error_code = e.meta().code().unwrap_or("");
if !error_code.eq("BucketAlreadyExists") && !error_code.eq("BucketAlreadyOwnedByYou") {
return Err(e.into());
}
}
Err(e) => {
return Err(e.into());
}
}
// Enable versioning
client
.put_bucket_versioning()
.bucket(BUCKET)
.versioning_configuration(
VersioningConfiguration::builder()
.status(BucketVersioningStatus::Enabled)
.build(),
)
.send()
.await?;
Ok(())
}
/// Test that HeadObject on a deleted object returns NoSuchKey when versioning is enabled
#[tokio::test]
#[serial]
#[ignore = "requires running RustFS server at localhost:9000"]
async fn test_head_deleted_object_versioning_returns_nosuchkey() -> Result<(), Box<dyn std::error::Error>> {
let _ = tracing_subscriber::fmt()
.with_max_level(tracing::Level::INFO)
.with_test_writer()
.try_init();
info!("🧪 Starting test_head_deleted_object_versioning_returns_nosuchkey");
let client = create_aws_s3_client().await?;
setup_test_bucket(&client).await?;
let key = "test-head-deleted-versioning.txt";
let content = b"Test content for HeadObject with versioning";
// Upload and verify
client
.put_object()
.bucket(BUCKET)
.key(key)
.body(Bytes::from_static(content).into())
.send()
.await?;
// Delete the object (creates a delete marker)
client.delete_object().bucket(BUCKET).key(key).send().await?;
// Try to head the deleted object (latest version is delete marker)
let head_result = client.head_object().bucket(BUCKET).key(key).send().await;
assert!(head_result.is_err(), "HeadObject on deleted object should return an error");
match head_result.unwrap_err() {
SdkError::ServiceError(service_err) => {
let s3_err = service_err.into_err();
assert!(
s3_err.meta().code() == Some("NoSuchKey")
|| s3_err.meta().code() == Some("NotFound")
|| s3_err.meta().code() == Some("404"),
"Error should be NoSuchKey or NotFound, got: {s3_err:?}"
);
info!("✅ HeadObject correctly returns NoSuchKey/NotFound");
}
other_err => {
panic!("Expected ServiceError but got: {other_err:?}");
}
}
Ok(())
}

View File

@@ -14,7 +14,6 @@
mod conditional_writes;
mod get_deleted_object_test;
mod head_deleted_object_versioning_test;
mod lifecycle;
mod lock;
mod node_interact_test;

View File

@@ -256,7 +256,7 @@ mod tests {
let output = result.unwrap();
let body_bytes = output.body.collect().await.unwrap().into_bytes();
assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for key '{key}'");
assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for key '{}'", key);
info!("✅ PUT/GET succeeded for key: {}", key);
}
@@ -472,7 +472,7 @@ mod tests {
info!("Testing COPY from '{}' to '{}'", src_key, dest_key);
// COPY object
let copy_source = format!("{bucket}/{src_key}");
let copy_source = format!("{}/{}", bucket, src_key);
let result = client
.copy_object()
.bucket(bucket)
@@ -543,7 +543,7 @@ mod tests {
let output = result.unwrap();
let body_bytes = output.body.collect().await.unwrap().into_bytes();
assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for Unicode key '{key}'");
assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for Unicode key '{}'", key);
info!("✅ PUT/GET succeeded for Unicode key: {}", key);
}
@@ -610,7 +610,7 @@ mod tests {
let output = result.unwrap();
let body_bytes = output.body.collect().await.unwrap().into_bytes();
assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for key '{key}'");
assert_eq!(body_bytes.as_ref(), *content, "Content mismatch for key '{}'", key);
info!("✅ PUT/GET succeeded for key: {}", key);
}
@@ -658,7 +658,7 @@ mod tests {
// Note: The validation happens on the server side, so we expect an error
// For null byte, newline, and carriage return
if key.contains('\0') || key.contains('\n') || key.contains('\r') {
assert!(result.is_err(), "Control character should be rejected for key: {key:?}");
assert!(result.is_err(), "Control character should be rejected for key: {:?}", key);
if let Err(e) = result {
info!("✅ Control character correctly rejected: {:?}", e);
}

View File

@@ -45,7 +45,6 @@ glob = { workspace = true }
thiserror.workspace = true
flatbuffers.workspace = true
futures.workspace = true
futures-util.workspace = true
tracing.workspace = true
serde.workspace = true
time.workspace = true
@@ -54,8 +53,6 @@ serde_json.workspace = true
quick-xml = { workspace = true, features = ["serialize", "async-tokio"] }
s3s.workspace = true
http.workspace = true
http-body = { workspace = true }
http-body-util.workspace = true
url.workspace = true
uuid = { workspace = true, features = ["v4", "fast-rng", "serde"] }
reed-solomon-simd = { workspace = true }
@@ -111,12 +108,17 @@ google-cloud-auth = { workspace = true }
aws-config = { workspace = true }
faster-hex = { workspace = true }
[target.'cfg(not(windows))'.dependencies]
nix = { workspace = true }
[target.'cfg(windows)'.dependencies]
winapi = { workspace = true }
[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }
criterion = { workspace = true, features = ["html_reports"] }
temp-env = { workspace = true }
tracing-subscriber = { workspace = true }
[build-dependencies]
shadow-rs = { workspace = true, features = ["build", "metadata"] }

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -23,7 +23,7 @@ use crate::{
};
use crate::data_usage::load_data_usage_cache;
use rustfs_common::{GLOBAL_LOCAL_NODE_NAME, heal_channel::DriveState};
use rustfs_common::{globals::GLOBAL_Local_Node_Name, heal_channel::DriveState};
use rustfs_madmin::{
BackendDisks, Disk, ErasureSetInfo, ITEM_INITIALIZING, ITEM_OFFLINE, ITEM_ONLINE, InfoMessage, ServerProperties,
};
@@ -128,7 +128,7 @@ async fn is_server_resolvable(endpoint: &Endpoint) -> Result<()> {
}
pub async fn get_local_server_property() -> ServerProperties {
let addr = GLOBAL_LOCAL_NODE_NAME.read().await.clone();
let addr = GLOBAL_Local_Node_Name.read().await.clone();
let mut pool_numbers = HashSet::new();
let mut network = HashMap::new();

View File

@@ -953,7 +953,7 @@ impl LifecycleOps for ObjectInfo {
lifecycle::ObjectOpts {
name: self.name.clone(),
user_tags: self.user_tags.clone(),
version_id: self.version_id.clone(),
version_id: self.version_id.map(|v| v.to_string()).unwrap_or_default(),
mod_time: self.mod_time,
size: self.size as usize,
is_latest: self.is_latest,
@@ -1067,7 +1067,7 @@ pub async fn eval_action_from_lifecycle(
event
}
pub async fn apply_transition_rule(event: &lifecycle::Event, src: &LcEventSrc, oi: &ObjectInfo) -> bool {
async fn apply_transition_rule(event: &lifecycle::Event, src: &LcEventSrc, oi: &ObjectInfo) -> bool {
if oi.delete_marker || oi.is_dir {
return false;
}
@@ -1161,7 +1161,7 @@ pub async fn apply_expiry_on_non_transitioned_objects(
true
}
pub async fn apply_expiry_rule(event: &lifecycle::Event, src: &LcEventSrc, oi: &ObjectInfo) -> bool {
async fn apply_expiry_rule(event: &lifecycle::Event, src: &LcEventSrc, oi: &ObjectInfo) -> bool {
let mut expiry_state = GLOBAL_ExpiryState.write().await;
expiry_state.enqueue_by_days(oi, event, src).await;
true

View File

@@ -1,192 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use s3s::dto::{
BucketLifecycleConfiguration, ObjectLockConfiguration, ObjectLockEnabled, ObjectLockLegalHoldStatus, ObjectLockRetentionMode,
};
use time::OffsetDateTime;
use tracing::info;
use crate::bucket::lifecycle::lifecycle::{Event, Lifecycle, ObjectOpts};
use crate::bucket::object_lock::ObjectLockStatusExt;
use crate::bucket::object_lock::objectlock::{get_object_legalhold_meta, get_object_retention_meta, utc_now_ntp};
use crate::bucket::replication::ReplicationConfig;
use rustfs_common::metrics::IlmAction;
/// Evaluator - evaluates lifecycle policy on objects for the given lifecycle
/// configuration, lock retention configuration and replication configuration.
pub struct Evaluator {
policy: Arc<BucketLifecycleConfiguration>,
lock_retention: Option<Arc<ObjectLockConfiguration>>,
repl_cfg: Option<Arc<ReplicationConfig>>,
}
impl Evaluator {
/// NewEvaluator - creates a new evaluator with the given lifecycle
pub fn new(policy: Arc<BucketLifecycleConfiguration>) -> Self {
Self {
policy,
lock_retention: None,
repl_cfg: None,
}
}
/// WithLockRetention - sets the lock retention configuration for the evaluator
pub fn with_lock_retention(mut self, lr: Option<Arc<ObjectLockConfiguration>>) -> Self {
self.lock_retention = lr;
self
}
/// WithReplicationConfig - sets the replication configuration for the evaluator
pub fn with_replication_config(mut self, rcfg: Option<Arc<ReplicationConfig>>) -> Self {
self.repl_cfg = rcfg;
self
}
/// IsPendingReplication checks if the object is pending replication.
pub fn is_pending_replication(&self, obj: &ObjectOpts) -> bool {
use crate::bucket::replication::ReplicationConfigurationExt;
if self.repl_cfg.is_none() {
return false;
}
if let Some(rcfg) = &self.repl_cfg {
if rcfg
.config
.as_ref()
.is_some_and(|config| config.has_active_rules(obj.name.as_str(), true))
&& !obj.version_purge_status.is_empty()
{
return true;
}
}
false
}
/// IsObjectLocked checks whether it is appropriate to remove an
/// object according to the locking configuration when the request comes from lifecycle or bucket-quota enforcement.
/// (copied over from enforceRetentionForDeletion)
pub fn is_object_locked(&self, obj: &ObjectOpts) -> bool {
if self.lock_retention.as_ref().is_none_or(|v| {
v.object_lock_enabled
.as_ref()
.is_none_or(|v| v.as_str() != ObjectLockEnabled::ENABLED)
}) {
return false;
}
if obj.delete_marker {
return false;
}
let lhold = get_object_legalhold_meta(obj.user_defined.clone());
if lhold
.status
.is_some_and(|v| v.valid() && v.as_str() == ObjectLockLegalHoldStatus::ON)
{
return true;
}
let ret = get_object_retention_meta(obj.user_defined.clone());
if ret
.mode
.is_some_and(|v| matches!(v.as_str(), ObjectLockRetentionMode::COMPLIANCE | ObjectLockRetentionMode::GOVERNANCE))
{
let t = utc_now_ntp();
if let Some(retain_until) = ret.retain_until_date {
if OffsetDateTime::from(retain_until).gt(&t) {
return true;
}
}
}
false
}
/// eval will return a lifecycle event for each object in objs for a given time.
async fn eval_inner(&self, objs: &[ObjectOpts], now: OffsetDateTime) -> Vec<Event> {
let mut events = vec![Event::default(); objs.len()];
let mut newer_noncurrent_versions = 0;
'top_loop: {
for (i, obj) in objs.iter().enumerate() {
let mut event = self.policy.eval_inner(obj, now, newer_noncurrent_versions).await;
match event.action {
IlmAction::DeleteAllVersionsAction | IlmAction::DelMarkerDeleteAllVersionsAction => {
// Skip if bucket has object locking enabled; To prevent the
// possibility of violating an object retention on one of the
// noncurrent versions of this object.
if self.lock_retention.as_ref().is_some_and(|v| {
v.object_lock_enabled
.as_ref()
.is_some_and(|v| v.as_str() == ObjectLockEnabled::ENABLED)
}) {
event = Event::default();
} else {
// No need to evaluate remaining versions' lifecycle
// events after DeleteAllVersionsAction*
events[i] = event;
info!("eval_inner: skipping remaining versions' lifecycle events after DeleteAllVersionsAction*");
break 'top_loop;
}
}
IlmAction::DeleteVersionAction | IlmAction::DeleteRestoredVersionAction => {
// Defensive code, should never happen
if obj.version_id.is_none_or(|v| v.is_nil()) {
event.action = IlmAction::NoneAction;
}
if self.is_object_locked(obj) {
event = Event::default();
}
if self.is_pending_replication(obj) {
event = Event::default();
}
}
_ => {}
}
if !obj.is_latest {
match event.action {
IlmAction::DeleteVersionAction => {
// this noncurrent version will be expired, nothing to add
}
_ => {
// this noncurrent version will be spared
newer_noncurrent_versions += 1;
}
}
}
events[i] = event;
}
}
events
}
/// Eval will return a lifecycle event for each object in objs
pub async fn eval(&self, objs: &[ObjectOpts]) -> Result<Vec<Event>, std::io::Error> {
if objs.is_empty() {
return Ok(vec![]);
}
if objs.len() != objs[0].num_versions {
return Err(std::io::Error::new(
std::io::ErrorKind::InvalidInput,
format!("number of versions mismatch, expected {}, got {}", objs[0].num_versions, objs.len()),
));
}
Ok(self.eval_inner(objs, OffsetDateTime::now_utc()).await)
}
}

View File

@@ -18,23 +18,19 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]
use rustfs_filemeta::{ReplicationStatusType, VersionPurgeStatusType};
use s3s::dto::{
BucketLifecycleConfiguration, ExpirationStatus, LifecycleExpiration, LifecycleRule, NoncurrentVersionTransition,
ObjectLockConfiguration, ObjectLockEnabled, Prefix, RestoreRequest, Transition,
ObjectLockConfiguration, ObjectLockEnabled, RestoreRequest, Transition,
};
use std::cmp::Ordering;
use std::collections::HashMap;
use std::env;
use std::fmt::Display;
use std::sync::Arc;
use time::macros::{datetime, offset};
use time::{self, Duration, OffsetDateTime};
use tracing::info;
use uuid::Uuid;
use crate::bucket::lifecycle::rule::TransitionOps;
use crate::store_api::ObjectInfo;
pub const TRANSITION_COMPLETE: &str = "complete";
pub const TRANSITION_PENDING: &str = "pending";
@@ -135,11 +131,11 @@ impl RuleValidate for LifecycleRule {
pub trait Lifecycle {
async fn has_transition(&self) -> bool;
fn has_expiry(&self) -> bool;
fn has_active_rules(&self, prefix: &str) -> bool;
async fn has_active_rules(&self, prefix: &str) -> bool;
async fn validate(&self, lr: &ObjectLockConfiguration) -> Result<(), std::io::Error>;
async fn filter_rules(&self, obj: &ObjectOpts) -> Option<Vec<LifecycleRule>>;
async fn eval(&self, obj: &ObjectOpts) -> Event;
async fn eval_inner(&self, obj: &ObjectOpts, now: OffsetDateTime, newer_noncurrent_versions: usize) -> Event;
async fn eval_inner(&self, obj: &ObjectOpts, now: OffsetDateTime) -> Event;
//fn set_prediction_headers(&self, w: http.ResponseWriter, obj: ObjectOpts);
async fn noncurrent_versions_expiration_limit(self: Arc<Self>, obj: &ObjectOpts) -> Event;
}
@@ -164,7 +160,7 @@ impl Lifecycle for BucketLifecycleConfiguration {
false
}
fn has_active_rules(&self, prefix: &str) -> bool {
async fn has_active_rules(&self, prefix: &str) -> bool {
if self.rules.len() == 0 {
return false;
}
@@ -173,51 +169,44 @@ impl Lifecycle for BucketLifecycleConfiguration {
continue;
}
let rule_prefix = &rule.prefix.clone().unwrap_or_default();
let rule_prefix = rule.prefix.as_ref().expect("err!");
if prefix.len() > 0 && rule_prefix.len() > 0 && !prefix.starts_with(rule_prefix) && !rule_prefix.starts_with(&prefix)
{
continue;
}
if let Some(rule_noncurrent_version_expiration) = &rule.noncurrent_version_expiration {
if let Some(noncurrent_days) = rule_noncurrent_version_expiration.noncurrent_days {
if noncurrent_days > 0 {
return true;
}
}
if let Some(newer_noncurrent_versions) = rule_noncurrent_version_expiration.newer_noncurrent_versions {
if newer_noncurrent_versions > 0 {
return true;
}
}
}
if rule.noncurrent_version_transitions.is_some() {
let rule_noncurrent_version_expiration = rule.noncurrent_version_expiration.as_ref().expect("err!");
if rule_noncurrent_version_expiration.noncurrent_days.expect("err!") > 0 {
return true;
}
if let Some(rule_expiration) = &rule.expiration {
if let Some(date1) = rule_expiration.date.clone() {
if OffsetDateTime::from(date1).unix_timestamp() < OffsetDateTime::now_utc().unix_timestamp() {
return true;
}
}
if rule_expiration.date.is_some() {
return true;
}
if let Some(expired_object_delete_marker) = rule_expiration.expired_object_delete_marker
&& expired_object_delete_marker
{
return true;
}
if rule_noncurrent_version_expiration.newer_noncurrent_versions.expect("err!") > 0 {
return true;
}
if let Some(rule_transitions) = &rule.transitions {
let rule_transitions_0 = rule_transitions[0].clone();
if let Some(date1) = rule_transitions_0.date {
if OffsetDateTime::from(date1).unix_timestamp() < OffsetDateTime::now_utc().unix_timestamp() {
return true;
}
}
if !rule.noncurrent_version_transitions.is_none() {
return true;
}
if rule.transitions.is_some() {
let rule_expiration = rule.expiration.as_ref().expect("err!");
if !rule_expiration.date.is_none()
&& OffsetDateTime::from(rule_expiration.date.clone().expect("err!")).unix_timestamp()
< OffsetDateTime::now_utc().unix_timestamp()
{
return true;
}
if !rule_expiration.date.is_none() {
return true;
}
if rule_expiration.expired_object_delete_marker.expect("err!") {
return true;
}
let rule_transitions: &[Transition] = &rule.transitions.as_ref().expect("err!");
let rule_transitions_0 = rule_transitions[0].clone();
if !rule_transitions_0.date.is_none()
&& OffsetDateTime::from(rule_transitions_0.date.expect("err!")).unix_timestamp()
< OffsetDateTime::now_utc().unix_timestamp()
{
return true;
}
if !rule.transitions.is_none() {
return true;
}
}
@@ -285,26 +274,16 @@ impl Lifecycle for BucketLifecycleConfiguration {
}
async fn eval(&self, obj: &ObjectOpts) -> Event {
self.eval_inner(obj, OffsetDateTime::now_utc(), 0).await
self.eval_inner(obj, OffsetDateTime::now_utc()).await
}
async fn eval_inner(&self, obj: &ObjectOpts, now: OffsetDateTime, newer_noncurrent_versions: usize) -> Event {
async fn eval_inner(&self, obj: &ObjectOpts, now: OffsetDateTime) -> Event {
let mut events = Vec::<Event>::new();
info!(
"eval_inner: object={}, mod_time={:?}, now={:?}, is_latest={}, delete_marker={}",
obj.name, obj.mod_time, now, obj.is_latest, obj.delete_marker
);
// Gracefully handle missing mod_time instead of panicking
let mod_time = match obj.mod_time {
Some(t) => t,
None => {
info!("eval_inner: mod_time is None for object={}, returning default event", obj.name);
return Event::default();
}
};
if mod_time.unix_timestamp() == 0 {
if obj.mod_time.expect("err").unix_timestamp() == 0 {
info!("eval_inner: mod_time is 0, returning default event");
return Event::default();
}
@@ -344,7 +323,7 @@ impl Lifecycle for BucketLifecycleConfiguration {
}
if let Some(days) = expiration.days {
let expected_expiry = expected_expiry_time(mod_time, days /*, date*/);
let expected_expiry = expected_expiry_time(obj.mod_time.unwrap(), days /*, date*/);
if now.unix_timestamp() >= expected_expiry.unix_timestamp() {
events.push(Event {
action: IlmAction::DeleteVersionAction,
@@ -447,10 +426,10 @@ impl Lifecycle for BucketLifecycleConfiguration {
obj.is_latest,
obj.delete_marker,
obj.version_id,
(obj.is_latest || obj.version_id.is_none_or(|v| v.is_nil())) && !obj.delete_marker
(obj.is_latest || obj.version_id.is_empty()) && !obj.delete_marker
);
// Allow expiration for latest objects OR non-versioned objects (empty version_id)
if (obj.is_latest || obj.version_id.is_none_or(|v| v.is_nil())) && !obj.delete_marker {
if (obj.is_latest || obj.version_id.is_empty()) && !obj.delete_marker {
info!("eval_inner: entering expiration check");
if let Some(ref expiration) = rule.expiration {
if let Some(ref date) = expiration.date {
@@ -467,11 +446,11 @@ impl Lifecycle for BucketLifecycleConfiguration {
});
}
} else if let Some(days) = expiration.days {
let expected_expiry: OffsetDateTime = expected_expiry_time(mod_time, days);
let expected_expiry: OffsetDateTime = expected_expiry_time(obj.mod_time.unwrap(), days);
info!(
"eval_inner: expiration check - days={}, obj_time={:?}, expiry_time={:?}, now={:?}, should_expire={}",
days,
mod_time,
obj.mod_time.expect("err!"),
expected_expiry,
now,
now.unix_timestamp() > expected_expiry.unix_timestamp()
@@ -670,7 +649,7 @@ pub struct ObjectOpts {
pub user_tags: String,
pub mod_time: Option<OffsetDateTime>,
pub size: usize,
pub version_id: Option<Uuid>,
pub version_id: String,
pub is_latest: bool,
pub delete_marker: bool,
pub num_versions: usize,
@@ -680,37 +659,12 @@ pub struct ObjectOpts {
pub restore_expires: Option<OffsetDateTime>,
pub versioned: bool,
pub version_suspended: bool,
pub user_defined: HashMap<String, String>,
pub version_purge_status: VersionPurgeStatusType,
pub replication_status: ReplicationStatusType,
}
impl ObjectOpts {
pub fn expired_object_deletemarker(&self) -> bool {
self.delete_marker && self.num_versions == 1
}
pub fn from_object_info(oi: &ObjectInfo) -> Self {
Self {
name: oi.name.clone(),
user_tags: oi.user_tags.clone(),
mod_time: oi.mod_time,
size: oi.size as usize,
version_id: oi.version_id.clone(),
is_latest: oi.is_latest,
delete_marker: oi.delete_marker,
num_versions: oi.num_versions,
successor_mod_time: oi.successor_mod_time,
transition_status: oi.transitioned_object.status.clone(),
restore_ongoing: oi.restore_ongoing,
restore_expires: oi.restore_expires,
versioned: false,
version_suspended: false,
user_defined: oi.user_defined.clone(),
version_purge_status: oi.version_purge_status.clone(),
replication_status: oi.replication_status.clone(),
}
}
}
#[derive(Debug, Clone)]

View File

@@ -14,7 +14,6 @@
pub mod bucket_lifecycle_audit;
pub mod bucket_lifecycle_ops;
pub mod evaluator;
pub mod lifecycle;
pub mod rule;
pub mod tier_last_day_stats;

View File

@@ -21,7 +21,6 @@
use sha2::{Digest, Sha256};
use std::any::Any;
use std::io::Write;
use uuid::Uuid;
use xxhash_rust::xxh64;
use super::bucket_lifecycle_ops::{ExpiryOp, GLOBAL_ExpiryState, TransitionedObject};
@@ -35,7 +34,7 @@ static XXHASH_SEED: u64 = 0;
struct ObjSweeper {
object: String,
bucket: String,
version_id: Option<Uuid>,
version_id: String,
versioned: bool,
suspended: bool,
transition_status: String,
@@ -55,8 +54,8 @@ impl ObjSweeper {
})
}
pub fn with_version(&mut self, vid: Option<Uuid>) -> &Self {
self.version_id = vid.clone();
pub fn with_version(&mut self, vid: String) -> &Self {
self.version_id = vid;
self
}
@@ -73,8 +72,8 @@ impl ObjSweeper {
version_suspended: self.suspended,
..Default::default()
};
if self.suspended && self.version_id.is_none_or(|v| v.is_nil()) {
opts.version_id = None;
if self.suspended && self.version_id == "" {
opts.version_id = String::from("");
}
opts
}
@@ -95,7 +94,7 @@ impl ObjSweeper {
if !self.versioned || self.suspended {
// 1, 2.a, 2.b
del_tier = true;
} else if self.versioned && self.version_id.is_some_and(|v| !v.is_nil()) {
} else if self.versioned && self.version_id != "" {
// 3.a
del_tier = true;
}

View File

@@ -175,13 +175,6 @@ pub async fn created_at(bucket: &str) -> Result<OffsetDateTime> {
bucket_meta_sys.created_at(bucket).await
}
pub async fn list_bucket_targets(bucket: &str) -> Result<BucketTargets> {
let bucket_meta_sys_lock = get_bucket_metadata_sys()?;
let bucket_meta_sys = bucket_meta_sys_lock.read().await;
bucket_meta_sys.get_bucket_targets_config(bucket).await
}
#[derive(Debug)]
pub struct BucketMetadataSys {
metadata_map: RwLock<HashMap<String, Arc<BucketMetadata>>>,

View File

@@ -15,7 +15,7 @@
pub mod objectlock;
pub mod objectlock_sys;
use s3s::dto::{ObjectLockConfiguration, ObjectLockEnabled, ObjectLockLegalHoldStatus};
use s3s::dto::{ObjectLockConfiguration, ObjectLockEnabled};
pub trait ObjectLockApi {
fn enabled(&self) -> bool;
@@ -28,13 +28,3 @@ impl ObjectLockApi for ObjectLockConfiguration {
.is_some_and(|v| v.as_str() == ObjectLockEnabled::ENABLED)
}
}
pub trait ObjectLockStatusExt {
fn valid(&self) -> bool;
}
impl ObjectLockStatusExt for ObjectLockLegalHoldStatus {
fn valid(&self) -> bool {
matches!(self.as_str(), ObjectLockLegalHoldStatus::ON | ObjectLockLegalHoldStatus::OFF)
}
}

View File

@@ -37,7 +37,7 @@ pub fn get_object_retention_meta(meta: HashMap<String, String>) -> ObjectLockRet
let mut mode_str = meta.get(X_AMZ_OBJECT_LOCK_MODE.as_str().to_lowercase().as_str());
if mode_str.is_none() {
mode_str = Some(&meta[X_AMZ_OBJECT_LOCK_MODE.as_str()]);
mode_str = meta.get(X_AMZ_OBJECT_LOCK_MODE.as_str());
}
let mode = if let Some(mode_str) = mode_str {
parse_ret_mode(mode_str.as_str())
@@ -50,7 +50,7 @@ pub fn get_object_retention_meta(meta: HashMap<String, String>) -> ObjectLockRet
let mut till_str = meta.get(X_AMZ_OBJECT_LOCK_RETAIN_UNTIL_DATE.as_str().to_lowercase().as_str());
if till_str.is_none() {
till_str = Some(&meta[X_AMZ_OBJECT_LOCK_RETAIN_UNTIL_DATE.as_str()]);
till_str = meta.get(X_AMZ_OBJECT_LOCK_RETAIN_UNTIL_DATE.as_str());
}
if let Some(till_str) = till_str {
let t = OffsetDateTime::parse(till_str, &format_description::well_known::Iso8601::DEFAULT);
@@ -67,7 +67,7 @@ pub fn get_object_retention_meta(meta: HashMap<String, String>) -> ObjectLockRet
pub fn get_object_legalhold_meta(meta: HashMap<String, String>) -> ObjectLockLegalHold {
let mut hold_str = meta.get(X_AMZ_OBJECT_LOCK_LEGAL_HOLD.as_str().to_lowercase().as_str());
if hold_str.is_none() {
hold_str = Some(&meta[X_AMZ_OBJECT_LOCK_LEGAL_HOLD.as_str()]);
hold_str = meta.get(X_AMZ_OBJECT_LOCK_LEGAL_HOLD.as_str());
}
if let Some(hold_str) = hold_str {
return ObjectLockLegalHold {

View File

@@ -22,7 +22,7 @@ pub struct PolicySys {}
impl PolicySys {
pub async fn is_allowed(args: &BucketPolicyArgs<'_>) -> bool {
match Self::get(args.bucket).await {
Ok(cfg) => return cfg.is_allowed(args).await,
Ok(cfg) => return cfg.is_allowed(args),
Err(err) => {
if err != StorageError::ConfigNotFound {
info!("config get err {:?}", err);

View File

@@ -9,11 +9,8 @@ use std::sync::Arc;
use std::sync::atomic::AtomicI32;
use std::sync::atomic::Ordering;
use crate::bucket::bucket_target_sys::BucketTargetSys;
use crate::bucket::metadata_sys;
use crate::bucket::replication::replication_resyncer::{
BucketReplicationResyncStatus, DeletedObjectReplicationInfo, ReplicationConfig, ReplicationResyncer,
get_heal_replicate_object_info,
BucketReplicationResyncStatus, DeletedObjectReplicationInfo, ReplicationResyncer,
};
use crate::bucket::replication::replication_state::ReplicationStats;
use crate::config::com::read_config;
@@ -29,10 +26,8 @@ use rustfs_filemeta::ReplicationStatusType;
use rustfs_filemeta::ReplicationType;
use rustfs_filemeta::ReplicationWorkerOperation;
use rustfs_filemeta::ResyncDecision;
use rustfs_filemeta::VersionPurgeStatusType;
use rustfs_filemeta::replication_statuses_map;
use rustfs_filemeta::version_purge_statuses_map;
use rustfs_filemeta::{REPLICATE_EXISTING, REPLICATE_HEAL, REPLICATE_HEAL_DELETE};
use rustfs_utils::http::RESERVED_METADATA_PREFIX_LOWER;
use time::OffsetDateTime;
use time::format_description::well_known::Rfc3339;
@@ -1038,152 +1033,3 @@ pub async fn schedule_replication_delete(dv: DeletedObjectReplicationInfo) {
}
}
}
/// QueueReplicationHeal is a wrapper for queue_replication_heal_internal
pub async fn queue_replication_heal(bucket: &str, oi: ObjectInfo, retry_count: u32) {
// ignore modtime zero objects
if oi.mod_time.is_none() || oi.mod_time == Some(OffsetDateTime::UNIX_EPOCH) {
return;
}
let rcfg = match metadata_sys::get_replication_config(bucket).await {
Ok((config, _)) => config,
Err(err) => {
warn!("Failed to get replication config for bucket {}: {}", bucket, err);
return;
}
};
let tgts = match BucketTargetSys::get().list_bucket_targets(bucket).await {
Ok(targets) => Some(targets),
Err(err) => {
warn!("Failed to list bucket targets for bucket {}: {}", bucket, err);
None
}
};
let rcfg_wrapper = ReplicationConfig::new(Some(rcfg), tgts);
queue_replication_heal_internal(bucket, oi, rcfg_wrapper, retry_count).await;
}
/// queue_replication_heal_internal enqueues objects that failed replication or are eligible for resync,
/// either through an ongoing resync operation or via the existing-object replication configuration setting.
pub async fn queue_replication_heal_internal(
_bucket: &str,
oi: ObjectInfo,
rcfg: ReplicationConfig,
retry_count: u32,
) -> ReplicateObjectInfo {
let mut roi = ReplicateObjectInfo::default();
// ignore modtime zero objects
if oi.mod_time.is_none() || oi.mod_time == Some(OffsetDateTime::UNIX_EPOCH) {
return roi;
}
if rcfg.config.is_none() || rcfg.remotes.is_none() {
return roi;
}
roi = get_heal_replicate_object_info(&oi, &rcfg).await;
roi.retry_count = retry_count;
if !roi.dsc.replicate_any() {
return roi;
}
// early return if replication already done, otherwise we need to determine if this
// version is an existing object that needs healing.
if roi.replication_status == ReplicationStatusType::Completed
&& roi.version_purge_status.is_empty()
&& !roi.existing_obj_resync.must_resync()
{
return roi;
}
if roi.delete_marker || !roi.version_purge_status.is_empty() {
let (version_id, dm_version_id) = if roi.version_purge_status.is_empty() {
(None, roi.version_id)
} else {
(roi.version_id, None)
};
let dv = DeletedObjectReplicationInfo {
delete_object: crate::store_api::DeletedObject {
object_name: roi.name.clone(),
delete_marker_version_id: dm_version_id,
version_id,
replication_state: roi.replication_state.clone(),
delete_marker_mtime: roi.mod_time,
delete_marker: roi.delete_marker,
..Default::default()
},
bucket: roi.bucket.clone(),
op_type: ReplicationType::Heal,
event_type: REPLICATE_HEAL_DELETE.to_string(),
..Default::default()
};
// heal delete marker replication failure or versioned delete replication failure
if roi.replication_status == ReplicationStatusType::Pending
|| roi.replication_status == ReplicationStatusType::Failed
|| roi.version_purge_status == VersionPurgeStatusType::Failed
|| roi.version_purge_status == VersionPurgeStatusType::Pending
{
if let Some(pool) = GLOBAL_REPLICATION_POOL.get() {
pool.queue_replica_delete_task(dv).await;
}
return roi;
}
// if replication status is Complete on DeleteMarker and existing object resync required
let existing_obj_resync = roi.existing_obj_resync.clone();
if existing_obj_resync.must_resync()
&& (roi.replication_status == ReplicationStatusType::Completed || roi.replication_status.is_empty())
{
queue_replicate_deletes_wrapper(dv, existing_obj_resync).await;
return roi;
}
return roi;
}
if roi.existing_obj_resync.must_resync() {
roi.op_type = ReplicationType::ExistingObject;
}
match roi.replication_status {
ReplicationStatusType::Pending | ReplicationStatusType::Failed => {
roi.event_type = REPLICATE_HEAL.to_string();
if let Some(pool) = GLOBAL_REPLICATION_POOL.get() {
pool.queue_replica_task(roi.clone()).await;
}
return roi;
}
_ => {}
}
if roi.existing_obj_resync.must_resync() {
roi.event_type = REPLICATE_EXISTING.to_string();
if let Some(pool) = GLOBAL_REPLICATION_POOL.get() {
pool.queue_replica_task(roi.clone()).await;
}
}
roi
}
/// Wrapper function for queueing replicate deletes with resync decision
async fn queue_replicate_deletes_wrapper(doi: DeletedObjectReplicationInfo, existing_obj_resync: ResyncDecision) {
for (k, v) in existing_obj_resync.targets.iter() {
if v.replicate {
let mut dv = doi.clone();
dv.reset_id = v.reset_id.clone();
dv.target_arn = k.clone();
if let Some(pool) = GLOBAL_REPLICATION_POOL.get() {
pool.queue_replica_delete_task(dv).await;
}
}
}
}

View File

@@ -744,7 +744,7 @@ impl ReplicationWorkerOperation for DeletedObjectReplicationInfo {
}
}
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
#[derive(Debug, Clone, Default)]
pub struct ReplicationConfig {
pub config: Option<ReplicationConfiguration>,
pub remotes: Option<BucketTargets>,

View File

@@ -16,7 +16,7 @@ use crate::disk::error::DiskError;
use crate::disk::{self, DiskAPI, DiskStore, WalkDirOptions};
use futures::future::join_all;
use rustfs_filemeta::{MetaCacheEntries, MetaCacheEntry, MetacacheReader, is_io_eof};
use std::{future::Future, pin::Pin};
use std::{future::Future, pin::Pin, sync::Arc};
use tokio::spawn;
use tokio_util::sync::CancellationToken;
use tracing::{error, info, warn};
@@ -71,14 +71,14 @@ pub async fn list_path_raw(rx: CancellationToken, opts: ListPathRawOptions) -> d
let mut jobs: Vec<tokio::task::JoinHandle<std::result::Result<(), DiskError>>> = Vec::new();
let mut readers = Vec::with_capacity(opts.disks.len());
let fds = opts.fallback_disks.iter().flatten().cloned().collect::<Vec<_>>();
let fds = Arc::new(opts.fallback_disks.clone());
let cancel_rx = CancellationToken::new();
for disk in opts.disks.iter() {
let opdisk = disk.clone();
let opts_clone = opts.clone();
let mut fds_clone = fds.clone();
let fds_clone = fds.clone();
let cancel_rx_clone = cancel_rx.clone();
let (rd, mut wr) = tokio::io::duplex(64);
readers.push(MetacacheReader::new(rd));
@@ -113,20 +113,21 @@ pub async fn list_path_raw(rx: CancellationToken, opts: ListPathRawOptions) -> d
}
while need_fallback {
let disk_op = {
if fds_clone.is_empty() {
None
} else {
let disk = fds_clone.remove(0);
if disk.is_online().await { Some(disk.clone()) } else { None }
// warn!("list_path_raw: while need_fallback start");
let disk = match fds_clone.iter().find(|d| d.is_some()) {
Some(d) => {
if let Some(disk) = d.clone() {
disk
} else {
warn!("list_path_raw: fallback disk is none");
break;
}
}
None => {
warn!("list_path_raw: fallback disk is none2");
break;
}
};
let Some(disk) = disk_op else {
warn!("list_path_raw: fallback disk is none");
break;
};
match disk
.as_ref()
.walk_dir(

View File

@@ -0,0 +1,350 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use lazy_static::lazy_static;
use rustfs_checksums::ChecksumAlgorithm;
use std::collections::HashMap;
use crate::client::{api_put_object::PutObjectOptions, api_s3_datatypes::ObjectPart};
use crate::{disk::DiskAPI, store_api::GetObjectReader};
use rustfs_utils::crypto::{base64_decode, base64_encode};
use s3s::header::{
X_AMZ_CHECKSUM_ALGORITHM, X_AMZ_CHECKSUM_CRC32, X_AMZ_CHECKSUM_CRC32C, X_AMZ_CHECKSUM_SHA1, X_AMZ_CHECKSUM_SHA256,
};
use enumset::{EnumSet, EnumSetType, enum_set};
#[derive(Debug, EnumSetType, Default)]
#[enumset(repr = "u8")]
pub enum ChecksumMode {
#[default]
ChecksumNone,
ChecksumSHA256,
ChecksumSHA1,
ChecksumCRC32,
ChecksumCRC32C,
ChecksumCRC64NVME,
ChecksumFullObject,
}
lazy_static! {
static ref C_ChecksumMask: EnumSet<ChecksumMode> = {
let mut s = EnumSet::all();
s.remove(ChecksumMode::ChecksumFullObject);
s
};
static ref C_ChecksumFullObjectCRC32: EnumSet<ChecksumMode> =
enum_set!(ChecksumMode::ChecksumCRC32 | ChecksumMode::ChecksumFullObject);
static ref C_ChecksumFullObjectCRC32C: EnumSet<ChecksumMode> =
enum_set!(ChecksumMode::ChecksumCRC32C | ChecksumMode::ChecksumFullObject);
}
const AMZ_CHECKSUM_CRC64NVME: &str = "x-amz-checksum-crc64nvme";
impl ChecksumMode {
//pub const CRC64_NVME_POLYNOMIAL: i64 = 0xad93d23594c93659;
pub fn base(&self) -> ChecksumMode {
let s = EnumSet::from(*self).intersection(*C_ChecksumMask);
match s.as_u8() {
1_u8 => ChecksumMode::ChecksumNone,
2_u8 => ChecksumMode::ChecksumSHA256,
4_u8 => ChecksumMode::ChecksumSHA1,
8_u8 => ChecksumMode::ChecksumCRC32,
16_u8 => ChecksumMode::ChecksumCRC32C,
32_u8 => ChecksumMode::ChecksumCRC64NVME,
_ => panic!("enum err."),
}
}
pub fn is(&self, t: ChecksumMode) -> bool {
*self & t == t
}
pub fn key(&self) -> String {
//match c & checksumMask {
match self {
ChecksumMode::ChecksumCRC32 => {
return X_AMZ_CHECKSUM_CRC32.to_string();
}
ChecksumMode::ChecksumCRC32C => {
return X_AMZ_CHECKSUM_CRC32C.to_string();
}
ChecksumMode::ChecksumSHA1 => {
return X_AMZ_CHECKSUM_SHA1.to_string();
}
ChecksumMode::ChecksumSHA256 => {
return X_AMZ_CHECKSUM_SHA256.to_string();
}
ChecksumMode::ChecksumCRC64NVME => {
return AMZ_CHECKSUM_CRC64NVME.to_string();
}
_ => {
return "".to_string();
}
}
}
pub fn can_composite(&self) -> bool {
let s = EnumSet::from(*self).intersection(*C_ChecksumMask);
match s.as_u8() {
2_u8 => true,
4_u8 => true,
8_u8 => true,
16_u8 => true,
_ => false,
}
}
pub fn can_merge_crc(&self) -> bool {
let s = EnumSet::from(*self).intersection(*C_ChecksumMask);
match s.as_u8() {
8_u8 => true,
16_u8 => true,
32_u8 => true,
_ => false,
}
}
pub fn full_object_requested(&self) -> bool {
let s = EnumSet::from(*self).intersection(*C_ChecksumMask);
match s.as_u8() {
//C_ChecksumFullObjectCRC32 as u8 => true,
//C_ChecksumFullObjectCRC32C as u8 => true,
32_u8 => true,
_ => false,
}
}
pub fn key_capitalized(&self) -> String {
self.key()
}
pub fn raw_byte_len(&self) -> usize {
let u = EnumSet::from(*self).intersection(*C_ChecksumMask).as_u8();
if u == ChecksumMode::ChecksumCRC32 as u8 || u == ChecksumMode::ChecksumCRC32C as u8 {
4
} else if u == ChecksumMode::ChecksumSHA1 as u8 {
use sha1::Digest;
sha1::Sha1::output_size() as usize
} else if u == ChecksumMode::ChecksumSHA256 as u8 {
use sha2::Digest;
sha2::Sha256::output_size() as usize
} else if u == ChecksumMode::ChecksumCRC64NVME as u8 {
8
} else {
0
}
}
pub fn hasher(&self) -> Result<Box<dyn rustfs_checksums::http::HttpChecksum>, std::io::Error> {
match /*C_ChecksumMask & **/self {
ChecksumMode::ChecksumCRC32 => {
return Ok(ChecksumAlgorithm::Crc32.into_impl());
}
ChecksumMode::ChecksumCRC32C => {
return Ok(ChecksumAlgorithm::Crc32c.into_impl());
}
ChecksumMode::ChecksumSHA1 => {
return Ok(ChecksumAlgorithm::Sha1.into_impl());
}
ChecksumMode::ChecksumSHA256 => {
return Ok(ChecksumAlgorithm::Sha256.into_impl());
}
ChecksumMode::ChecksumCRC64NVME => {
return Ok(ChecksumAlgorithm::Crc64Nvme.into_impl());
}
_ => return Err(std::io::Error::other("unsupported checksum type")),
}
}
pub fn is_set(&self) -> bool {
let s = EnumSet::from(*self).intersection(*C_ChecksumMask);
s.len() == 1
}
pub fn set_default(&mut self, t: ChecksumMode) {
if !self.is_set() {
*self = t;
}
}
pub fn encode_to_string(&self, b: &[u8]) -> Result<String, std::io::Error> {
if !self.is_set() {
return Ok("".to_string());
}
let mut h = self.hasher()?;
h.update(b);
let hash = h.finalize();
Ok(base64_encode(hash.as_ref()))
}
pub fn to_string(&self) -> String {
//match c & checksumMask {
match self {
ChecksumMode::ChecksumCRC32 => {
return "CRC32".to_string();
}
ChecksumMode::ChecksumCRC32C => {
return "CRC32C".to_string();
}
ChecksumMode::ChecksumSHA1 => {
return "SHA1".to_string();
}
ChecksumMode::ChecksumSHA256 => {
return "SHA256".to_string();
}
ChecksumMode::ChecksumNone => {
return "".to_string();
}
ChecksumMode::ChecksumCRC64NVME => {
return "CRC64NVME".to_string();
}
_ => {
return "<invalid>".to_string();
}
}
}
// pub fn check_sum_reader(&self, r: GetObjectReader) -> Result<Checksum, std::io::Error> {
// let mut h = self.hasher()?;
// Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
// }
// pub fn check_sum_bytes(&self, b: &[u8]) -> Result<Checksum, std::io::Error> {
// let mut h = self.hasher()?;
// Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
// }
pub fn composite_checksum(&self, p: &mut [ObjectPart]) -> Result<Checksum, std::io::Error> {
if !self.can_composite() {
return Err(std::io::Error::other("cannot do composite checksum"));
}
p.sort_by(|i, j| i.part_num.cmp(&j.part_num));
// TODO: decode each part's recorded checksum and append it to `crc_bytes`;
// until then the composite digest below is computed over an empty buffer.
let crc_bytes = Vec::<u8>::with_capacity(p.len() * self.raw_byte_len());
let mut h = self.hasher()?;
h.update(crc_bytes.as_ref());
let hash = h.finalize();
Ok(Checksum {
checksum_type: self.clone(),
r: hash.as_ref().to_vec(),
computed: false,
})
}
pub fn full_object_checksum(&self, p: &mut [ObjectPart]) -> Result<Checksum, std::io::Error> {
todo!();
}
}
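/// A checksum value, stored as raw bytes together with the `ChecksumMode`
/// that produced it.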
#[derive(Default)]
pub struct Checksum {
checksum_type: ChecksumMode,
r: Vec<u8>,
computed: bool,
}
#[allow(dead_code)]
impl Checksum {
fn new(t: ChecksumMode, b: &[u8]) -> Checksum {
if t.is_set() && b.len() == t.raw_byte_len() {
return Checksum {
checksum_type: t,
r: b.to_vec(),
computed: false,
};
}
Checksum::default()
}
#[allow(dead_code)]
fn new_checksum_string(t: ChecksumMode, s: &str) -> Result<Checksum, std::io::Error> {
let b = match base64_decode(s.as_bytes()) {
Ok(b) => b,
Err(err) => return Err(std::io::Error::other(err.to_string())),
};
if t.is_set() && b.len() == t.raw_byte_len() {
return Ok(Checksum {
checksum_type: t,
r: b,
computed: false,
});
}
Ok(Checksum::default())
}
fn is_set(&self) -> bool {
self.checksum_type.is_set() && self.r.len() == self.checksum_type.raw_byte_len()
}
fn encoded(&self) -> String {
if !self.is_set() {
return "".to_string();
}
base64_encode(&self.r)
}
#[allow(dead_code)]
fn raw(&self) -> Option<Vec<u8>> {
if !self.is_set() {
return None;
}
Some(self.r.clone())
}
}
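/// Record the configured auto-checksum algorithm in the request's user
/// metadata (`X-Amz-Checksum-Algorithm`, plus `X-Amz-Checksum-Type: FULL_OBJECT`
/// when a full-object checksum is requested).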
pub fn add_auto_checksum_headers(opts: &mut PutObjectOptions) {
opts.user_metadata
.insert("X-Amz-Checksum-Algorithm".to_string(), opts.auto_checksum.to_string());
if opts.auto_checksum.full_object_requested() {
opts.user_metadata
.insert("X-Amz-Checksum-Type".to_string(), "FULL_OBJECT".to_string());
}
}
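/// Once all parts are uploaded, compute the aggregate checksum over
/// `all_parts` (composite when possible, otherwise a merged full-object CRC)
/// and store it in the request's user metadata.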
pub fn apply_auto_checksum(opts: &mut PutObjectOptions, all_parts: &mut [ObjectPart]) -> Result<(), std::io::Error> {
if opts.auto_checksum.can_composite() && !opts.auto_checksum.is(ChecksumMode::ChecksumFullObject) {
let crc = opts.auto_checksum.composite_checksum(all_parts)?;
opts.user_metadata = HashMap::from([(opts.auto_checksum.key(), crc.encoded())]);
} else if opts.auto_checksum.can_merge_crc() {
let crc = opts.auto_checksum.full_object_checksum(all_parts)?;
opts.user_metadata = HashMap::from([
(opts.auto_checksum.key_capitalized(), crc.encoded()),
("X-Amz-Checksum-Type".to_string(), "FULL_OBJECT".to_string()),
]);
}
Ok(())
}
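// A minimal usage sketch (not part of this change): how the two helpers above
// are expected to be wired into a multipart upload. `completed_parts`, the
// `PutObjectOptions::default()` call and the chosen algorithm are illustrative
// assumptions, not APIs confirmed elsewhere in this diff.
//
// let mut opts = PutObjectOptions::default();
// opts.auto_checksum.set_default(ChecksumMode::ChecksumCRC32C);
// add_auto_checksum_headers(&mut opts); // advertise X-Amz-Checksum-Algorithm
// // ... upload the parts, collecting one ObjectPart per completed part ...
// let mut completed_parts: Vec<ObjectPart> = Vec::new();
// apply_auto_checksum(&mut opts, &mut completed_parts)?;
// // opts.user_metadata now carries the aggregate checksum for CompleteMultipartUpload.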

View File

@@ -0,0 +1,270 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// use crate::error::StdError;
// use bytes::Bytes;
// use futures::pin_mut;
// use futures::stream::{Stream, StreamExt};
// use std::future::Future;
// use std::pin::Pin;
// use std::task::{Context, Poll};
// use transform_stream::AsyncTryStream;
// pub type SyncBoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + Sync + 'a>>;
// pub struct ChunkedStream<'a> {
// /// inner
// inner: AsyncTryStream<Bytes, StdError, SyncBoxFuture<'a, Result<(), StdError>>>,
// remaining_length: usize,
// }
// impl<'a> ChunkedStream<'a> {
// pub fn new<S>(body: S, content_length: usize, chunk_size: usize, need_padding: bool) -> Self
// where
// S: Stream<Item = Result<Bytes, StdError>> + Send + Sync + 'a,
// {
// let inner = AsyncTryStream::<_, _, SyncBoxFuture<'a, Result<(), StdError>>>::new(|mut y| {
// #[allow(clippy::shadow_same)] // necessary for `pin_mut!`
// Box::pin(async move {
// pin_mut!(body);
// // Data left over from the previous call
// let mut prev_bytes = Bytes::new();
// let mut read_size = 0;
// loop {
// let data: Vec<Bytes> = {
// // Read a fixed-size chunk
// match Self::read_data(body.as_mut(), prev_bytes, chunk_size).await {
// None => break,
// Some(Err(e)) => return Err(e),
// Some(Ok((data, remaining_bytes))) => {
// // debug!(
// // "content_length:{},read_size:{}, read_data data:{}, remaining_bytes: {} ",
// // content_length,
// // read_size,
// // data.len(),
// // remaining_bytes.len()
// // );
// prev_bytes = remaining_bytes;
// data
// }
// }
// };
// for bytes in data {
// read_size += bytes.len();
// // debug!("read_size {}, content_length {}", read_size, content_length,);
// y.yield_ok(bytes).await;
// }
// if read_size + prev_bytes.len() >= content_length {
// // debug!(
// // "Finished reading: read_size:{} + prev_bytes.len({}) == content_length {}",
// // read_size,
// // prev_bytes.len(),
// // content_length,
// // );
// // Pad with zeros?
// if !need_padding {
// y.yield_ok(prev_bytes).await;
// break;
// }
// let mut bytes = vec![0u8; chunk_size];
// let (left, _) = bytes.split_at_mut(prev_bytes.len());
// left.copy_from_slice(&prev_bytes);
// y.yield_ok(Bytes::from(bytes)).await;
// break;
// }
// }
// // debug!("chunked stream exit");
// Ok(())
// })
// });
// Self {
// inner,
// remaining_length: content_length,
// }
// }
// /// read data and return remaining bytes
// async fn read_data<S>(
// mut body: Pin<&mut S>,
// prev_bytes: Bytes,
// data_size: usize,
// ) -> Option<Result<(Vec<Bytes>, Bytes), StdError>>
// where
// S: Stream<Item = Result<Bytes, StdError>> + Send,
// {
// let mut bytes_buffer = Vec::new();
// // Run only once
// let mut push_data_bytes = |mut bytes: Bytes| {
// // debug!("read from body {} split per {}, prev_bytes: {}", bytes.len(), data_size, prev_bytes.len());
// if bytes.is_empty() {
// return None;
// }
// if data_size == 0 {
// return Some(bytes);
// }
// // Merge with the previous data
// if !prev_bytes.is_empty() {
// let need_size = data_size.wrapping_sub(prev_bytes.len());
// // debug!(
// // "Previous leftover {}, take {} now, total: {}",
// // prev_bytes.len(),
// // need_size,
// // prev_bytes.len() + need_size
// // );
// if bytes.len() >= need_size {
// let data = bytes.split_to(need_size);
// let mut combined = Vec::new();
// combined.extend_from_slice(&prev_bytes);
// combined.extend_from_slice(&data);
// // debug!(
// // "Fetched more bytes than needed: {}, merged result {}, remaining bytes {}",
// // need_size,
// // combined.len(),
// // bytes.len(),
// // );
// bytes_buffer.push(Bytes::from(combined));
// } else {
// let mut combined = Vec::new();
// combined.extend_from_slice(&prev_bytes);
// combined.extend_from_slice(&bytes);
// // debug!(
// // "Fetched fewer bytes than needed: {}, merged result {}, remaining bytes {}, return immediately",
// // need_size,
// // combined.len(),
// // bytes.len(),
// // );
// return Some(Bytes::from(combined));
// }
// }
// // If the fetched data exceeds the chunk, slice the required size
// if data_size <= bytes.len() {
// let n = bytes.len() / data_size;
// for _ in 0..n {
// let data = bytes.split_to(data_size);
// // println!("bytes_buffer.push: {}, remaining: {}", data.len(), bytes.len());
// bytes_buffer.push(data);
// }
// Some(bytes)
// } else {
// // Insufficient data
// Some(bytes)
// }
// };
// // Remaining data
// let remaining_bytes = 'outer: {
// // // Exit if the previous data was sufficient
// // if let Some(remaining_bytes) = push_data_bytes(prev_bytes) {
// // println!("Consuming leftovers");
// // break 'outer remaining_bytes;
// // }
// loop {
// match body.next().await? {
// Err(e) => return Some(Err(e)),
// Ok(bytes) => {
// if let Some(remaining_bytes) = push_data_bytes(bytes) {
// break 'outer remaining_bytes;
// }
// }
// }
// }
// };
// Some(Ok((bytes_buffer, remaining_bytes)))
// }
// fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Result<Bytes, StdError>>> {
// let ans = Pin::new(&mut self.inner).poll_next(cx);
// if let Poll::Ready(Some(Ok(ref bytes))) = ans {
// self.remaining_length = self.remaining_length.saturating_sub(bytes.len());
// }
// ans
// }
// // pub fn exact_remaining_length(&self) -> usize {
// // self.remaining_length
// // }
// }
// impl Stream for ChunkedStream<'_> {
// type Item = Result<Bytes, StdError>;
// fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
// self.poll(cx)
// }
// fn size_hint(&self) -> (usize, Option<usize>) {
// (0, None)
// }
// }
// #[cfg(test)]
// mod test {
// use super::*;
// #[tokio::test]
// async fn test_chunked_stream() {
// let chunk_size = 4;
// let data1 = vec![1u8; 7777]; // 65536
// let data2 = vec![1u8; 7777]; // 65536
// let content_length = data1.len() + data2.len();
// let chunk1 = Bytes::from(data1);
// let chunk2 = Bytes::from(data2);
// let chunk_results: Vec<Result<Bytes, _>> = vec![Ok(chunk1), Ok(chunk2)];
// let stream = futures::stream::iter(chunk_results);
// let mut chunked_stream = ChunkedStream::new(stream, content_length, chunk_size, true);
// loop {
// let ans1 = chunked_stream.next().await;
// if ans1.is_none() {
// break;
// }
// let bytes = ans1.unwrap().unwrap();
// assert!(bytes.len() == chunk_size)
// }
// // assert_eq!(ans1.unwrap(), chunk1_data.as_slice());
// }
// }

View File

@@ -18,10 +18,8 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::{HeaderMap, StatusCode};
use http_body_util::BodyExt;
use hyper::body::Body;
use hyper::body::Bytes;
use std::collections::HashMap;
use crate::client::{
@@ -63,19 +61,9 @@ impl TransitionClient {
let resp = self.execute_method(http::Method::PUT, &mut req_metadata).await?;
//defer closeResponse(resp)
let resp_status = resp.status();
let h = resp.headers().clone();
//if resp != nil {
if resp_status != StatusCode::NO_CONTENT && resp.status() != StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(
resp_status,
&h,
vec![],
bucket_name,
"",
)));
if resp.status() != StatusCode::NO_CONTENT && resp.status() != StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(&resp, vec![], bucket_name, "")));
}
//}
Ok(())
@@ -109,17 +97,8 @@ impl TransitionClient {
.await?;
//defer closeResponse(resp)
let resp_status = resp.status();
let h = resp.headers().clone();
if resp_status != StatusCode::NO_CONTENT {
return Err(std::io::Error::other(http_resp_to_error_response(
resp_status,
&h,
vec![],
bucket_name,
"",
)));
if resp.status() != StatusCode::NO_CONTENT {
return Err(std::io::Error::other(http_resp_to_error_response(&resp, vec![], bucket_name, "")));
}
Ok(())
@@ -157,15 +136,7 @@ impl TransitionClient {
)
.await?;
let mut body_vec = Vec::new();
let mut body = resp.into_body();
while let Some(frame) = body.frame().await {
let frame = frame.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e.to_string()))?;
if let Some(data) = frame.data_ref() {
body_vec.extend_from_slice(data);
}
}
let policy = String::from_utf8_lossy(&body_vec).to_string();
let policy = String::from_utf8_lossy(&resp.body().bytes().expect("err").to_vec()).to_string();
Ok(policy)
}
}

View File

@@ -18,12 +18,12 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]
use http::{HeaderMap, StatusCode};
use http::StatusCode;
use serde::{Deserialize, Serialize};
use serde::{de::Deserializer, ser::Serializer};
use std::fmt::Display;
//use s3s::Body;
use s3s::Body;
use s3s::S3ErrorCode;
const _REPORT_ISSUE: &str = "Please report this issue at https://github.com/rustfs/rustfs/issues.";
@@ -95,8 +95,7 @@ pub fn to_error_response(err: &std::io::Error) -> ErrorResponse {
}
pub fn http_resp_to_error_response(
resp_status: StatusCode,
h: &HeaderMap,
resp: &http::Response<Body>,
b: Vec<u8>,
bucket_name: &str,
object_name: &str,
@@ -105,11 +104,11 @@ pub fn http_resp_to_error_response(
let err_resp_ = quick_xml::de::from_str::<ErrorResponse>(&err_body);
let mut err_resp = ErrorResponse::default();
if err_resp_.is_err() {
match resp_status {
match resp.status() {
StatusCode::NOT_FOUND => {
if object_name == "" {
err_resp = ErrorResponse {
status_code: resp_status,
status_code: resp.status(),
code: S3ErrorCode::NoSuchBucket,
message: "The specified bucket does not exist.".to_string(),
bucket_name: bucket_name.to_string(),
@@ -117,7 +116,7 @@ pub fn http_resp_to_error_response(
};
} else {
err_resp = ErrorResponse {
status_code: resp_status,
status_code: resp.status(),
code: S3ErrorCode::NoSuchKey,
message: "The specified key does not exist.".to_string(),
bucket_name: bucket_name.to_string(),
@@ -128,7 +127,7 @@ pub fn http_resp_to_error_response(
}
StatusCode::FORBIDDEN => {
err_resp = ErrorResponse {
status_code: resp_status,
status_code: resp.status(),
code: S3ErrorCode::AccessDenied,
message: "Access Denied.".to_string(),
bucket_name: bucket_name.to_string(),
@@ -138,7 +137,7 @@ pub fn http_resp_to_error_response(
}
StatusCode::CONFLICT => {
err_resp = ErrorResponse {
status_code: resp_status,
status_code: resp.status(),
code: S3ErrorCode::BucketNotEmpty,
message: "Bucket not empty.".to_string(),
bucket_name: bucket_name.to_string(),
@@ -147,7 +146,7 @@ pub fn http_resp_to_error_response(
}
StatusCode::PRECONDITION_FAILED => {
err_resp = ErrorResponse {
status_code: resp_status,
status_code: resp.status(),
code: S3ErrorCode::PreconditionFailed,
message: "Pre condition failed.".to_string(),
bucket_name: bucket_name.to_string(),
@@ -156,13 +155,13 @@ pub fn http_resp_to_error_response(
};
}
_ => {
let mut msg = resp_status.to_string();
let mut msg = resp.status().to_string();
if err_body.len() > 0 {
msg = err_body;
}
err_resp = ErrorResponse {
status_code: resp_status,
code: S3ErrorCode::Custom(resp_status.to_string().into()),
status_code: resp.status(),
code: S3ErrorCode::Custom(resp.status().to_string().into()),
message: msg,
bucket_name: bucket_name.to_string(),
..Default::default()
@@ -172,32 +171,32 @@ pub fn http_resp_to_error_response(
} else {
err_resp = err_resp_.unwrap();
}
err_resp.status_code = resp_status;
if let Some(server_name) = h.get("Server") {
err_resp.status_code = resp.status();
if let Some(server_name) = resp.headers().get("Server") {
err_resp.server = server_name.to_str().expect("err").to_string();
}
let code = h.get("x-minio-error-code");
let code = resp.headers().get("x-minio-error-code");
if code.is_some() {
err_resp.code = S3ErrorCode::Custom(code.expect("err").to_str().expect("err").into());
}
let desc = h.get("x-minio-error-desc");
let desc = resp.headers().get("x-minio-error-desc");
if desc.is_some() {
err_resp.message = desc.expect("err").to_str().expect("err").trim_matches('"').to_string();
}
if err_resp.request_id == "" {
if let Some(x_amz_request_id) = h.get("x-amz-request-id") {
if let Some(x_amz_request_id) = resp.headers().get("x-amz-request-id") {
err_resp.request_id = x_amz_request_id.to_str().expect("err").to_string();
}
}
if err_resp.host_id == "" {
if let Some(x_amz_id_2) = h.get("x-amz-id-2") {
if let Some(x_amz_id_2) = resp.headers().get("x-amz-id-2") {
err_resp.host_id = x_amz_id_2.to_str().expect("err").to_string();
}
}
if err_resp.region == "" {
if let Some(x_amz_bucket_region) = h.get("x-amz-bucket-region") {
if let Some(x_amz_bucket_region) = resp.headers().get("x-amz-bucket-region") {
err_resp.region = x_amz_bucket_region.to_str().expect("err").to_string();
}
}

View File

@@ -19,26 +19,17 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]
//use bytes::Bytes;
use futures_util::ready;
use bytes::Bytes;
use http::HeaderMap;
use std::io::{Cursor, Error as IoError, ErrorKind as IoErrorKind, Read};
use std::pin::Pin;
use std::task::{Context, Poll};
use std::io::Cursor;
use tokio::io::BufReader;
use tokio_util::io::StreamReader;
use crate::client::{
api_error_response::err_invalid_argument,
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
use futures_util::StreamExt;
use http_body_util::BodyExt;
use hyper::body::Body;
use hyper::body::Bytes;
use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
use tokio_util::io::ReaderStream;
impl TransitionClient {
pub fn get_object(&self, bucket_name: &str, object_name: &str, opts: &GetObjectOptions) -> Result<Object, std::io::Error> {
@@ -74,19 +65,11 @@ impl TransitionClient {
)
.await?;
let resp = &resp;
let object_stat = to_object_info(bucket_name, object_name, resp.headers())?;
let h = resp.headers().clone();
let mut body_vec = Vec::new();
let mut body = resp.into_body();
while let Some(frame) = body.frame().await {
let frame = frame.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e.to_string()))?;
if let Some(data) = frame.data_ref() {
body_vec.extend_from_slice(data);
}
}
Ok((object_stat, h, BufReader::new(Cursor::new(body_vec))))
let b = resp.body().bytes().expect("err").to_vec();
Ok((object_stat, resp.headers().clone(), BufReader::new(Cursor::new(b))))
}
}

View File

@@ -18,18 +18,19 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]
use crate::client::{
api_error_response::http_resp_to_error_response,
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReaderImpl, RequestMetadata, TransitionClient},
};
use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use http_body_util::BodyExt;
use rustfs_config::MAX_S3_CLIENT_RESPONSE_SIZE;
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
use s3s::dto::Owner;
use std::collections::HashMap;
use std::io::Cursor;
use tokio::io::BufReader;
use crate::client::{
api_error_response::{err_invalid_argument, http_resp_to_error_response},
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
#[derive(Clone, Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Grantee {
@@ -84,29 +85,13 @@ impl TransitionClient {
)
.await?;
let resp_status = resp.status();
let h = resp.headers().clone();
let mut body_vec = Vec::new();
let mut body = resp.into_body();
while let Some(frame) = body.frame().await {
let frame = frame.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e.to_string()))?;
if let Some(data) = frame.data_ref() {
body_vec.extend_from_slice(data);
}
if resp.status() != http::StatusCode::OK {
let b = resp.body().bytes().expect("err").to_vec();
return Err(std::io::Error::other(http_resp_to_error_response(&resp, b, bucket_name, object_name)));
}
if resp_status != http::StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(
resp_status,
&h,
body_vec,
bucket_name,
object_name,
)));
}
let mut res = match quick_xml::de::from_str::<AccessControlPolicy>(&String::from_utf8(body_vec).unwrap()) {
let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
let mut res = match quick_xml::de::from_str::<AccessControlPolicy>(&String::from_utf8(b).unwrap()) {
Ok(result) => result,
Err(err) => {
return Err(std::io::Error::other(err.to_string()));

View File

@@ -18,23 +18,27 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use std::collections::HashMap;
use std::io::Cursor;
use time::OffsetDateTime;
use tokio::io::BufReader;
use crate::client::constants::{GET_OBJECT_ATTRIBUTES_MAX_PARTS, GET_OBJECT_ATTRIBUTES_TAGS, ISO8601_DATEFORMAT};
use crate::client::{
api_get_object_acl::AccessControlPolicy,
transition_api::{ReaderImpl, RequestMetadata, TransitionClient},
};
use rustfs_config::MAX_S3_CLIENT_RESPONSE_SIZE;
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
//use s3s::Body;
use http_body_util::BodyExt;
use hyper::body::Body;
use hyper::body::Bytes;
use hyper::body::Incoming;
use s3s::header::{X_AMZ_MAX_PARTS, X_AMZ_OBJECT_ATTRIBUTES, X_AMZ_PART_NUMBER_MARKER, X_AMZ_VERSION_ID};
use s3s::header::{
X_AMZ_DELETE_MARKER, X_AMZ_MAX_PARTS, X_AMZ_METADATA_DIRECTIVE, X_AMZ_OBJECT_ATTRIBUTES, X_AMZ_PART_NUMBER_MARKER,
X_AMZ_REQUEST_CHARGED, X_AMZ_RESTORE, X_AMZ_VERSION_ID,
};
use s3s::{Body, dto::Owner};
use crate::client::{
api_error_response::err_invalid_argument,
api_get_object_acl::AccessControlPolicy,
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
pub struct ObjectAttributesOptions {
pub max_parts: i64,
@@ -133,12 +137,14 @@ struct ObjectAttributePart {
}
impl ObjectAttributes {
pub async fn parse_response(&mut self, h: &HeaderMap, body_vec: Vec<u8>) -> Result<(), std::io::Error> {
pub async fn parse_response(&mut self, resp: &mut http::Response<Body>) -> Result<(), std::io::Error> {
let h = resp.headers();
let mod_time = OffsetDateTime::parse(h.get("Last-Modified").unwrap().to_str().unwrap(), ISO8601_DATEFORMAT).unwrap(); //RFC7231Time
self.last_modified = mod_time;
self.version_id = h.get(X_AMZ_VERSION_ID).unwrap().to_str().unwrap().to_string();
let mut response = match quick_xml::de::from_str::<ObjectAttributesResponse>(&String::from_utf8(body_vec).unwrap()) {
let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
let mut response = match quick_xml::de::from_str::<ObjectAttributesResponse>(&String::from_utf8(b).unwrap()) {
Ok(result) => result,
Err(err) => {
return Err(std::io::Error::other(err.to_string()));
@@ -209,8 +215,7 @@ impl TransitionClient {
)
.await?;
let resp_status = resp.status();
let h = resp.headers().clone();
let h = resp.headers();
let has_etag = h.get("ETag").unwrap().to_str().unwrap();
if !has_etag.is_empty() {
return Err(std::io::Error::other(
@@ -218,17 +223,9 @@ impl TransitionClient {
));
}
let mut body_vec = Vec::new();
let mut body = resp.into_body();
while let Some(frame) = body.frame().await {
let frame = frame.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e.to_string()))?;
if let Some(data) = frame.data_ref() {
body_vec.extend_from_slice(data);
}
}
if resp_status != http::StatusCode::OK {
let err_body = String::from_utf8(body_vec).unwrap();
if resp.status() != http::StatusCode::OK {
let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
let err_body = String::from_utf8(b).unwrap();
let mut er = match quick_xml::de::from_str::<AccessControlPolicy>(&err_body) {
Ok(result) => result,
Err(err) => {
@@ -240,7 +237,7 @@ impl TransitionClient {
}
let mut oa = ObjectAttributes::new();
oa.parse_response(&h, body_vec).await?;
oa.parse_response(&mut resp).await?;
Ok(oa)
}

View File

@@ -18,6 +18,10 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::{HeaderMap, StatusCode};
use std::collections::HashMap;
use crate::client::{
api_error_response::http_resp_to_error_response,
api_s3_datatypes::{
@@ -27,14 +31,7 @@ use crate::client::{
transition_api::{ReaderImpl, RequestMetadata, TransitionClient},
};
use crate::store_api::BucketInfo;
//use bytes::Bytes;
use http::{HeaderMap, StatusCode};
use http_body_util::BodyExt;
use hyper::body::Body;
use hyper::body::Bytes;
use rustfs_config::MAX_S3_CLIENT_RESPONSE_SIZE;
use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
use std::collections::HashMap;
impl TransitionClient {
pub fn list_buckets(&self) -> Result<Vec<BucketInfo>, std::io::Error> {
@@ -100,30 +97,13 @@ impl TransitionClient {
},
)
.await?;
let resp_status = resp.status();
let h = resp.headers().clone();
if resp.status() != StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(
resp_status,
&h,
vec![],
bucket_name,
"",
)));
return Err(std::io::Error::other(http_resp_to_error_response(&resp, vec![], bucket_name, "")));
}
//let mut list_bucket_result = ListBucketV2Result::default();
let mut body_vec = Vec::new();
let mut body = resp.into_body();
while let Some(frame) = body.frame().await {
let frame = frame.map_err(|e| std::io::Error::new(std::io::ErrorKind::Other, e.to_string()))?;
if let Some(data) = frame.data_ref() {
body_vec.extend_from_slice(data);
}
}
let mut list_bucket_result = match quick_xml::de::from_str::<ListBucketV2Result>(&String::from_utf8(body_vec).unwrap()) {
let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
let mut list_bucket_result = match quick_xml::de::from_str::<ListBucketV2Result>(&String::from_utf8(b).unwrap()) {
Ok(result) => result,
Err(err) => {
return Err(std::io::Error::other(err.to_string()));

View File

@@ -17,9 +17,8 @@
#![allow(unused_must_use)]
#![allow(clippy::all)]
//use bytes::Bytes;
use bytes::Bytes;
use http::{HeaderMap, HeaderName, StatusCode};
use hyper::body::Bytes;
use s3s::S3ErrorCode;
use std::collections::HashMap;
use time::OffsetDateTime;
@@ -226,15 +225,10 @@ impl TransitionClient {
};
let resp = self.execute_method(http::Method::POST, &mut req_metadata).await?;
let resp_status = resp.status();
let h = resp.headers().clone();
//if resp.is_none() {
if resp.status() != StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(
resp_status,
&h,
&resp,
vec![],
bucket_name,
object_name,
@@ -293,14 +287,9 @@ impl TransitionClient {
};
let resp = self.execute_method(http::Method::PUT, &mut req_metadata).await?;
let resp_status = resp.status();
let h = resp.headers().clone();
if resp.status() != StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(
resp_status,
&h,
&resp,
vec![],
&p.bucket_name.clone(),
&p.object_name,
@@ -381,8 +370,7 @@ impl TransitionClient {
let resp = self.execute_method(http::Method::POST, &mut req_metadata).await?;
let h = resp.headers().clone();
let b = resp.body().bytes().expect("err").to_vec();
let complete_multipart_upload_result: CompleteMultipartUploadResult = CompleteMultipartUploadResult::default();
let (exp_time, rule_id) = if let Some(h_x_amz_expiration) = resp.headers().get(X_AMZ_EXPIRATION) {
@@ -394,6 +382,7 @@ impl TransitionClient {
(OffsetDateTime::now_utc(), "".to_string())
};
let h = resp.headers();
Ok(UploadInfo {
bucket: complete_multipart_upload_result.bucket,
key: complete_multipart_upload_result.key,

View File

@@ -479,13 +479,9 @@ impl TransitionClient {
let resp = self.execute_method(http::Method::PUT, &mut req_metadata).await?;
let resp_status = resp.status();
let h = resp.headers().clone();
if resp.status() != StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(
resp_status,
&h,
&resp,
vec![],
bucket_name,
object_name,

Some files were not shown because too many files have changed in this diff.