Compare commits: 1.0.0-alph ... 1.0.0-alph

93 commits

| SHA1 |
|---|
| aefd894fc2 |
| 1e1d4646a2 |
| b97845fffd |
| 84f5a4cb48 |
| 2832f0e089 |
| a3b5445824 |
| 363e37c791 |
| 1b0b041530 |
| 7d5fc87002 |
| 13130e9dd4 |
| 1061ce11a3 |
| 9f9a74000d |
| d1863018df |
| 166080aac8 |
| 78b2487639 |
| 79f4e81fea |
| 28da78d544 |
| df2eb9bc6a |
| 7c20d92fe5 |
| b4c316c662 |
| 411b511937 |
| c902475443 |
| 00d8008a89 |
| 36acb5bce9 |
| e033b019f6 |
| 259b80777e |
| abdfad8521 |
| c498fbcb27 |
| 874d486b1e |
| 21516251b0 |
| a2f83b0d2d |
| aa65766312 |
| 660f004cfd |
| 6d2c420f54 |
| 5f0b9a5fa8 |
| 8378e308e0 |
| b9f54519fd |
| 4108a9649f |
| 6244e23451 |
| 713b322f99 |
| e1a5a195c3 |
| bc37417d6c |
| 3dbcaaa221 |
| 49f480d346 |
| 055a99ba25 |
| 2bd11d476e |
| 297004c259 |
| 4e2c4d8dba |
| 0626099c3b |
| 107ddcf394 |
| 8893ffc10f |
| f23e855d23 |
| 8366413970 |
| 9862677fcf |
| e50bc4c60c |
| 5f6104731d |
| 6a6866c337 |
| ce2ce4b16e |
| 1ecd5a87d9 |
| 72aead5466 |
| abd5dff9b5 |
| 040b05c318 |
| ce470c95c4 |
| 32e531bc61 |
| dcf25e46af |
| 2b079ae065 |
| d41ccc1551 |
| fa17f7b1e3 |
| c41299a29f |
| 79156d2d82 |
| 26542b741e |
| 8b2b4a0146 |
| 5cf9087113 |
| dd12250987 |
| e172b277f2 |
| 086331b8e7 |
| 96d22c3276 |
| caa3564439 |
| 18933fdb58 |
| 65a731a243 |
| 89035d3b3b |
| c6527643a3 |
| b9157d5e9d |
| 20be2d9859 |
| 855541678e |
| 73d3d8ab5c |
| 6983a3ffce |
| d6653f1258 |
| 7ab53a6d7d |
| 85ee9811d8 |
| 61bd76f77e |
| 8cf611426b |
| b0ac977a3d |
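
The same commit range can be listed locally with git; the ref names below are placeholders, since both compare refs are truncated in the header above.

```bash
# List the commits between the two compare refs locally.
# <base-ref> and <head-ref> are placeholders: both refs appear
# truncated ("1.0.0-alph") in the compare header above.
git fetch --tags
git log --oneline <base-ref>..<head-ref>

# Print only abbreviated SHAs, matching the table above:
git log --format='%h' <base-ref>..<head-ref>
```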

.cursorrules (36 changes)

@@ -5,15 +5,18 @@

### 🚨 NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH 🚨

- **This is the most important rule - NEVER modify code directly on main or master branch**
- **ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO EXCEPTIONS**
- **Always work on feature branches and use pull requests for all changes**
- **Any direct commits to master/main branch are strictly forbidden**
- **Pull requests are the ONLY way to merge code to main branch**
- Before starting any development, always (condensed into the shell sketch after this list):
  1. `git checkout main` (switch to main branch)
  2. `git pull` (get latest changes)
  3. `git checkout -b feat/your-feature-name` (create and switch to feature branch)
  4. Make your changes on the feature branch
  5. Commit and push to the feature branch
  6. Create a pull request for review
  6. **Create a pull request for review - THIS IS MANDATORY**
  7. **Wait for PR approval and merge through GitHub interface only**
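
The mandatory workflow above, condensed into a shell session; the branch name and commit message are examples only:

```bash
# Start from an up-to-date main, then do all work on a feature branch.
git checkout main
git pull
git checkout -b feat/your-feature-name   # example branch name

# ...make changes...
git add -A
git commit -m "feat: describe the change"   # example message
git push -u origin feat/your-feature-name

# Open a pull request for review (mandatory); merge only through the
# GitHub interface after approval.
```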

## Project Overview

@@ -817,6 +820,7 @@ These rules should serve as guiding principles when developing the RustFS projec

- **🚨 CRITICAL: NEVER modify code directly on main or master branch - THIS IS ABSOLUTELY FORBIDDEN 🚨**
- **⚠️ ANY DIRECT COMMITS TO MASTER/MAIN WILL BE REJECTED AND MUST BE REVERTED IMMEDIATELY ⚠️**
- **🔒 ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO DIRECT COMMITS TO MAIN UNDER ANY CIRCUMSTANCES 🔒**
- **Always work on feature branches - NO EXCEPTIONS**
- Always check the .cursorrules file before starting to ensure you understand the project guidelines
- **MANDATORY workflow for ALL changes:**

@@ -826,13 +830,39 @@ These rules should serve as guiding principles when developing the RustFS projec

  4. Make your changes ONLY on the feature branch
  5. Test thoroughly before committing
  6. Commit and push to the feature branch
  7. Create a pull request for code review
  7. **Create a pull request for code review - THIS IS THE ONLY WAY TO MERGE TO MAIN**
  8. **Wait for PR approval before merging - NEVER merge your own PRs without review**
- Use descriptive branch names following the pattern: `feat/feature-name`, `fix/issue-name`, `refactor/component-name`, etc.
- **Double-check current branch before ANY commit: `git branch` to ensure you're NOT on main/master**
- Ensure all changes are made on feature branches and merged through pull requests
- **Pull Request Requirements:**
  - All changes must be submitted via PR regardless of size or urgency
  - PRs must include comprehensive description and testing information
  - PRs must pass all CI/CD checks before merging
  - PRs require at least one approval from code reviewers
  - Even hotfixes and emergency changes must go through PR process
- **Enforcement:**
  - Main branch should be protected with branch protection rules (see the sketch after this list)
  - Direct pushes to main should be blocked by repository settings
  - Any accidental direct commits to main must be immediately reverted via PR
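
One way to set up the branch protection the enforcement rules call for is GitHub's REST API via the `gh` CLI. This is a hedged sketch, not part of the diff; `OWNER/REPO` and the empty status-check list are placeholders to adapt:

```bash
# Sketch: protect main so direct pushes are blocked and PRs need review.
# OWNER/REPO is a placeholder; fill "contexts" with your CI check names.
gh api -X PUT repos/OWNER/REPO/branches/main/protection --input - <<'EOF'
{
  "required_status_checks": { "strict": true, "contexts": [] },
  "enforce_admins": true,
  "required_pull_request_reviews": { "required_approving_review_count": 1 },
  "restrictions": null
}
EOF
```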

#### Development Workflow

## 🎯 **Core Development Principles**

- **🔴 Every change must be precise - don't modify unless you're confident**
  - Carefully analyze code logic and ensure complete understanding before making changes
  - When uncertain, prefer asking users or consulting documentation over blind modifications
  - Use small iterative steps, modify only necessary parts at a time
  - Evaluate impact scope before changes to ensure no new issues are introduced

- **🚀 GitHub PR creation prioritizes gh command usage**
  - Prefer using `gh pr create` command to create Pull Requests (example after this list)
  - Avoid having users manually create PRs through web interface
  - Provide clear and professional PR titles and descriptions
  - Using `gh` commands ensures better integration and automation
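
A minimal example of the `gh pr create` usage the rules prefer; the title and body text are illustrative placeholders:

```bash
# Create a PR from the current feature branch against main.
# Title and body here are illustrative placeholders.
gh pr create \
  --base main \
  --title "feat: describe the change" \
  --body "Summary of changes, related issues, and testing notes."
```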

## 📝 **Code Quality Requirements**

- Use English for all code comments, documentation, and variable names
- Write meaningful and descriptive names for variables, functions, and methods
- Avoid meaningless test content like "debug 111" or placeholder values

.github/ISSUE_TEMPLATE/bug_report.md (new file, 38 lines)

@@ -0,0 +1,38 @@

---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''

---

**Describe the bug**
A clear and concise description of what the bug is.

**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error

**Expected behavior**
A clear and concise description of what you expected to happen.

**Screenshots**
If applicable, add screenshots to help explain your problem.

**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]

**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]

**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/feature_request.md (new file, 20 lines)

@@ -0,0 +1,20 @@

---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''

---

**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]

**Describe the solution you'd like**
A clear and concise description of what you want to happen.

**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.

**Additional context**
Add any other context or screenshots about the feature request here.

.github/actions/setup/action.yml (95 changes)

@@ -12,56 +12,99 @@

# See the License for the specific language governing permissions and
# limitations under the License.

name: "setup"
description: "setup environment for rustfs"
name: "Setup Rust Environment"
description: "Setup Rust development environment with caching for RustFS"

inputs:
  rust-version:
    required: true
    description: "Rust version to install"
    required: false
    default: "stable"
    description: "Rust version to use"
  cache-shared-key:
    required: true
    default: ""
    description: "Cache key for shared cache"
    description: "Shared cache key for Rust dependencies"
    required: false
    default: "rustfs-deps"
  cache-save-if:
    required: true
    default: ${{ github.ref == 'refs/heads/main' }}
    description: "Cache save condition"
  runs-on:
    required: true
    default: "ubuntu-latest"
    description: "Running system"
    description: "Condition for saving cache"
    required: false
    default: "true"
  install-cross-tools:
    description: "Install cross-compilation tools"
    required: false
    default: "false"
  target:
    description: "Target architecture to add"
    required: false
    default: ""
  github-token:
    description: "GitHub token for API access"
    required: false
    default: ""

runs:
  using: "composite"
  steps:
    - name: Install system dependencies
      if: inputs.runs-on == 'ubuntu-latest'
    - name: Install system dependencies (Ubuntu)
      if: runner.os == 'Linux'
      shell: bash
      run: |
        sudo apt update
        sudo apt install -y musl-tools build-essential lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev
        sudo apt-get update
        sudo apt-get install -y \
          musl-tools \
          build-essential \
          lld \
          libdbus-1-dev \
          libwayland-dev \
          libwebkit2gtk-4.1-dev \
          libxdo-dev \
          pkg-config \
          libssl-dev

    - uses: arduino/setup-protoc@v3
    - name: Cache protoc binary
      id: cache-protoc
      uses: actions/cache@v4
      with:
        path: ~/.local/bin/protoc
        key: protoc-31.1-${{ runner.os }}-${{ runner.arch }}

    - name: Install protoc
      if: steps.cache-protoc.outputs.cache-hit != 'true'
      uses: arduino/setup-protoc@v3
      with:
        version: "31.1"
        repo-token: ${{ inputs.github-token }}

    - uses: Nugine/setup-flatc@v1
    - name: Install flatc
      uses: Nugine/setup-flatc@v1
      with:
        version: "25.2.10"

    - uses: dtolnay/rust-toolchain@master
    - name: Install Rust toolchain
      uses: dtolnay/rust-toolchain@stable
      with:
        toolchain: ${{ inputs.rust-version }}
        targets: ${{ inputs.target }}
        components: rustfmt, clippy

    - uses: Swatinem/rust-cache@v2
    - name: Install Zig
      if: inputs.install-cross-tools == 'true'
      uses: mlugg/setup-zig@v2

    - name: Install cargo-zigbuild
      if: inputs.install-cross-tools == 'true'
      uses: taiki-e/install-action@cargo-zigbuild

    - name: Install cargo-nextest
      uses: taiki-e/install-action@cargo-nextest

    - name: Setup Rust cache
      uses: Swatinem/rust-cache@v2
      with:
        cache-all-crates: true
        cache-on-failure: true
        shared-key: ${{ inputs.cache-shared-key }}
        save-if: ${{ inputs.cache-save-if }}

    - uses: mlugg/setup-zig@v2
    - uses: taiki-e/install-action@cargo-zigbuild
        # Cache workspace dependencies
        workspaces: |
          . -> target
          cli/rustfs-gui -> cli/rustfs-gui/target

.github/pull_request_template.md (new file, 39 lines)

@@ -0,0 +1,39 @@

<!--
Pull Request Template for RustFS
-->

## Type of Change
- [ ] New Feature
- [ ] Bug Fix
- [ ] Documentation
- [ ] Performance Improvement
- [ ] Test/CI
- [ ] Refactor
- [ ] Other:

## Related Issues
<!-- List related Issue numbers, e.g. #123 -->

## Summary of Changes
<!-- Briefly describe the main changes and motivation for this PR -->

## Checklist
- [ ] I have read and followed the [CONTRIBUTING.md](CONTRIBUTING.md) guidelines
- [ ] Code is formatted with `cargo fmt --all`
- [ ] Passed `cargo clippy --all-targets --all-features -- -D warnings`
- [ ] Passed `cargo check --all-targets`
- [ ] Added/updated necessary tests
- [ ] Documentation updated (if needed)
- [ ] CI/CD passed (if applicable)

## Impact
- [ ] Breaking change (compatibility)
- [ ] Requires doc/config/deployment update
- [ ] Other impact:

## Additional Notes
<!-- Any extra information for reviewers -->

---

Thank you for your contribution! Please ensure your PR follows the community standards ([CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md)) and sign the CLA if this is your first contribution.
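
The formatting and lint items in the template's checklist map directly to local commands; running them before opening a PR avoids CI round-trips:

```bash
# Pre-PR checks, using the exact commands from the checklist above.
cargo fmt --all
cargo clippy --all-targets --all-features -- -D warnings
cargo check --all-targets
```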

.github/workflows/audit.yml (59 changes)

@@ -12,28 +12,67 @@

# See the License for the specific language governing permissions and
# limitations under the License.

name: Audit
name: Security Audit

on:
  push:
    branches:
      - main
    branches: [main]
    paths:
      - '**/Cargo.toml'
      - '**/Cargo.lock'
      - '.github/workflows/audit.yml'
  pull_request:
    branches:
      - main
    branches: [main]
    paths:
      - '**/Cargo.toml'
      - '**/Cargo.lock'
      - '.github/workflows/audit.yml'
  schedule:
    - cron: '0 0 * * 0' # at midnight of each sunday
    - cron: '0 0 * * 0' # Weekly on Sunday at midnight UTC
  workflow_dispatch:

env:
  CARGO_TERM_COLOR: always

jobs:
  audit:
  security-audit:
    name: Security Audit
    runs-on: ubuntu-latest
    timeout-minutes: 15
    steps:
      - uses: actions/checkout@v4.2.2
      - uses: taiki-e/install-action@cargo-audit
      - run: cargo audit -D warnings
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install cargo-audit
        uses: taiki-e/install-action@v2
        with:
          tool: cargo-audit

      - name: Run security audit
        run: |
          cargo audit -D warnings --json | tee audit-results.json

      - name: Upload audit results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: security-audit-results-${{ github.run_number }}
          path: audit-results.json
          retention-days: 30

  dependency-review:
    name: Dependency Review
    runs-on: ubuntu-latest
    if: github.event_name == 'pull_request'
    permissions:
      contents: read
      pull-requests: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Dependency Review
        uses: actions/dependency-review-action@v4
        with:
          fail-on-severity: moderate
          comment-summary-in-pr: true
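
The audit job's core step can be reproduced locally; a minimal sketch, assuming cargo-audit is installed from crates.io (the workflow uses taiki-e/install-action instead):

```bash
# Local equivalent of the security-audit job's main step.
cargo install cargo-audit   # assumption: installed via crates.io
cargo audit -D warnings
```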

.github/workflows/build.yml (881 changes)

@@ -12,572 +12,409 @@

# See the License for the specific language governing permissions and
# limitations under the License.

name: Build RustFS And GUI
name: Build and Release

on:
  workflow_dispatch:
  schedule:
    - cron: "0 0 * * 0" # at midnight of each sunday
  push:
    tags: ["v*", "*"]
    branches:
      - main
    tags: ["*"]
    branches: [main]
    paths-ignore:
      - "**.md"
      - "**.txt"
      - ".github/**"
      - "docs/**"
      - "deploy/**"
      - "scripts/dev_*.sh"
      - "LICENSE*"
      - "README*"
      - "**/*.png"
      - "**/*.jpg"
      - "**/*.svg"
      - ".gitignore"
      - ".dockerignore"
  pull_request:
    branches: [main]
    paths-ignore:
      - "**.md"
      - "**.txt"
      - ".github/**"
      - "docs/**"
      - "deploy/**"
      - "scripts/dev_*.sh"
      - "LICENSE*"
      - "README*"
      - "**/*.png"
      - "**/*.jpg"
      - "**/*.svg"
      - ".gitignore"
      - ".dockerignore"
  schedule:
    - cron: "0 0 * * 0" # Weekly on Sunday at midnight UTC
  workflow_dispatch:
    inputs:
      force_build:
        description: "Force build even without changes"
        required: false
        default: false
        type: boolean

env:
  CARGO_TERM_COLOR: always
  RUST_BACKTRACE: 1
  # Optimize build performance
  CARGO_INCREMENTAL: 0

jobs:
  # Second layer: Business logic level checks (handling build strategy)
  build-check:
    name: Build Strategy Check
    runs-on: ubuntu-latest
    outputs:
      should_build: ${{ steps.check.outputs.should_build }}
      build_type: ${{ steps.check.outputs.build_type }}
    steps:
      - name: Determine build strategy
        id: check
        run: |
          should_build=false
          build_type="none"

          # Business logic: when we need to build
          if [[ "${{ github.event_name }}" == "schedule" ]] || \
             [[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
             [[ "${{ github.event.inputs.force_build }}" == "true" ]] || \
             [[ "${{ contains(github.event.head_commit.message, '--build') }}" == "true" ]]; then
            should_build=true
            build_type="development"
          fi

          # Always build for tag pushes (version releases)
          if [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
            should_build=true
            build_type="release"
            echo "🏷️ Tag detected: forcing release build"
          fi

          echo "should_build=$should_build" >> $GITHUB_OUTPUT
          echo "build_type=$build_type" >> $GITHUB_OUTPUT
          echo "Build needed: $should_build (type: $build_type)"
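
Per the strategy step above, a development build can also be forced from a commit message; the message text below is only an example:

```bash
# A commit whose message contains "--build" forces a development build
# (see the build-check job above). The message is an example.
git commit -m "chore: refresh dependencies --build"
git push origin feat/your-feature-name
```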

  # Build RustFS binaries
  build-rustfs:
    name: Build RustFS
    needs: [build-check]
    if: needs.build-check.outputs.should_build == 'true'
    runs-on: ${{ matrix.os }}
    # Only execute in the following cases: 1) tag push 2) scheduled run 3) commit message contains --build
    if: |
      startsWith(github.ref, 'refs/tags/') ||
      github.event_name == 'schedule' ||
      github.event_name == 'workflow_dispatch' ||
      contains(github.event.head_commit.message, '--build')
    timeout-minutes: 60
    env:
      RUSTFLAGS: ${{ matrix.cross == 'false' && '-C target-cpu=native' || '' }}
    strategy:
      fail-fast: false
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        variant:
          - {
              profile: release,
              target: x86_64-unknown-linux-musl,
              glibc: "default",
            }
          - {
              profile: release,
              target: x86_64-unknown-linux-gnu,
              glibc: "default",
            }
          - { profile: release, target: aarch64-apple-darwin, glibc: "default" }
          #- { profile: release, target: aarch64-unknown-linux-gnu, glibc: "default" }
          - {
              profile: release,
              target: aarch64-unknown-linux-musl,
              glibc: "default",
            }
          #- { profile: release, target: x86_64-pc-windows-msvc, glibc: "default" }
        exclude:
          # Linux targets on non-Linux systems
          - os: macos-latest
            variant:
              {
                profile: release,
                target: x86_64-unknown-linux-gnu,
                glibc: "default",
              }
          - os: macos-latest
            variant:
              {
                profile: release,
                target: x86_64-unknown-linux-musl,
                glibc: "default",
              }
          - os: macos-latest
            variant:
              {
                profile: release,
                target: aarch64-unknown-linux-gnu,
                glibc: "default",
              }
          - os: macos-latest
            variant:
              {
                profile: release,
                target: aarch64-unknown-linux-musl,
                glibc: "default",
              }
          - os: windows-latest
            variant:
              {
                profile: release,
                target: x86_64-unknown-linux-gnu,
                glibc: "default",
              }
          - os: windows-latest
            variant:
              {
                profile: release,
                target: x86_64-unknown-linux-musl,
                glibc: "default",
              }
          - os: windows-latest
            variant:
              {
                profile: release,
                target: aarch64-unknown-linux-gnu,
                glibc: "default",
              }
          - os: windows-latest
            variant:
              {
                profile: release,
                target: aarch64-unknown-linux-musl,
                glibc: "default",
              }

        # Apple targets on non-macOS systems
        include:
          # Linux builds
          - os: ubuntu-latest
            variant:
              {
                profile: release,
                target: aarch64-apple-darwin,
                glibc: "default",
              }
          - os: windows-latest
            variant:
              {
                profile: release,
                target: aarch64-apple-darwin,
                glibc: "default",
              }

          # Windows targets on non-Windows systems
            target: x86_64-unknown-linux-musl
            cross: false
            platform: linux
          - os: ubuntu-latest
            variant:
              {
                profile: release,
                target: x86_64-pc-windows-msvc,
                glibc: "default",
              }
            target: aarch64-unknown-linux-musl
            cross: true
            platform: linux
          # macOS builds
          - os: macos-latest
            variant:
              {
                profile: release,
                target: x86_64-pc-windows-msvc,
                glibc: "default",
              }

            target: aarch64-apple-darwin
            cross: false
            platform: macos
          - os: macos-latest
            target: x86_64-apple-darwin
            cross: false
            platform: macos
          # # Windows builds (temporarily disabled)
          # - os: windows-latest
          #   target: x86_64-pc-windows-msvc
          #   cross: false
          #   platform: windows
          # - os: windows-latest
          #   target: aarch64-pc-windows-msvc
          #   cross: true
          #   platform: windows
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4.2.2
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      # Installation system dependencies
      - name: Install system dependencies (Ubuntu)
        if: runner.os == 'Linux'
      - name: Setup Rust environment
        uses: ./.github/actions/setup
        with:
          rust-version: stable
          target: ${{ matrix.target }}
          cache-shared-key: build-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          cache-save-if: ${{ github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/') }}
          install-cross-tools: ${{ matrix.cross }}

      - name: Download static console assets
        run: |
          sudo apt update
          sudo apt install -y musl-tools build-essential lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev
        shell: bash

      #Install Rust using dtolnay/rust-toolchain
      - uses: dtolnay/rust-toolchain@master
        with:
          toolchain: stable
          targets: ${{ matrix.variant.target }}
          components: rustfmt, clippy

      # Install system dependencies
      - name: Cache Protoc
        id: cache-protoc
        uses: actions/cache@v4.2.3
        with:
          path: /Users/runner/hostedtoolcache/protoc
          key: protoc-${{ runner.os }}-31.1
          restore-keys: |
            protoc-${{ runner.os }}-

      - name: Install Protoc
        if: steps.cache-protoc.outputs.cache-hit != 'true'
        uses: arduino/setup-protoc@v3
        with:
          version: "31.1"
          repo-token: ${{ secrets.GITHUB_TOKEN }}

      - name: Setup Flatc
        uses: Nugine/setup-flatc@v1
        with:
          version: "25.2.10"

      # Cache Cargo dependencies
      - uses: Swatinem/rust-cache@v2
        with:
          cache-on-failure: true
          cache-all-crates: true
          shared-key: rustfs-${{ matrix.os }}-${{ matrix.variant.profile }}-${{ matrix.variant.target }}-${{ matrix.variant.glibc }}-${{ hashFiles('**/Cargo.lock') }}
          save-if: ${{ github.event_name != 'pull_request' }}

      # Set up Zig for cross-compilation
      - uses: mlugg/setup-zig@v2
        if: matrix.variant.glibc != 'default' || contains(matrix.variant.target, 'aarch64-unknown-linux')

      - uses: taiki-e/install-action@cargo-zigbuild
        if: matrix.variant.glibc != 'default' || contains(matrix.variant.target, 'aarch64-unknown-linux')

      # Download static resources
      - name: Download and Extract Static Assets
        run: |
          url="https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip"

          # Create a static resource directory
          mkdir -p ./rustfs/static

          # Download static resources
          echo "::group::Downloading static assets"
          curl -L -o static_assets.zip "$url" --retry 3

          # Unzip static resources
          echo "::group::Extracting static assets"
          if [ "${{ runner.os }}" = "Windows" ]; then
            7z x static_assets.zip -o./rustfs/static
            del static_assets.zip
            rm -rf ./rustfs/static/*
          if [[ "${{ matrix.platform }}" == "windows" ]]; then
            curl.exe -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" -o console.zip --retry 3 --retry-delay 5 --max-time 300
            if [[ $? -eq 0 ]]; then
              unzip -o console.zip -d ./rustfs/static
              rm console.zip
            else
              echo "Warning: Failed to download console assets, continuing without them"
              echo "// Static assets not available" > ./rustfs/static/empty.txt
            fi
          else
            unzip -o static_assets.zip -d ./rustfs/static
            rm static_assets.zip
            chmod +w ./rustfs/static/LICENSE || true
            rm -f ./rustfs/static/LICENSE
            curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
              -o console.zip --retry 3 --retry-delay 5 --max-time 300
            if [[ $? -eq 0 ]]; then
              unzip -o console.zip -d ./rustfs/static
              rm console.zip
            else
              echo "Warning: Failed to download console assets, continuing without them"
              echo "// Static assets not available" > ./rustfs/static/empty.txt
            fi
          fi

          echo "::group::Static assets content"
          ls -la ./rustfs/static
        shell: bash

      # Build rustfs
      - name: Build rustfs
        id: build
        shell: bash
      - name: Build RustFS
        run: |
          echo "::group::Setting up build parameters"
          PROFILE="${{ matrix.variant.profile }}"
          TARGET="${{ matrix.variant.target }}"
          GLIBC="${{ matrix.variant.glibc }}"

          # Determine whether to use zigbuild
          USE_ZIGBUILD=false
          if [[ "$GLIBC" != "default" || "$TARGET" == *"aarch64-unknown-linux"* ]]; then
            USE_ZIGBUILD=true
            echo "Using zigbuild for cross-compilation"
          fi

          # Determine the target parameters
          TARGET_ARG="$TARGET"
          if [[ "$GLIBC" != "default" ]]; then
            TARGET_ARG="${TARGET}.${GLIBC}"
            echo "Using custom glibc target: $TARGET_ARG"
          fi

          # Confirm the profile directory name
          if [[ "$PROFILE" == "dev" ]]; then
            PROFILE_DIR="debug"
          else
            PROFILE_DIR="$PROFILE"
          fi

          # Determine the binary suffix
          BIN_SUFFIX=""
          if [[ "${{ matrix.variant.target }}" == *"windows"* ]]; then
            BIN_SUFFIX=".exe"
          fi

          # Determine the binary name - Use the appropriate extension for Windows
          BIN_NAME="rustfs.${PROFILE}.${TARGET}"
          if [[ "$GLIBC" != "default" ]]; then
            BIN_NAME="${BIN_NAME}.glibc${GLIBC}"
          fi

          # Windows systems use exe suffix, and other systems do not have suffix
          if [[ "${{ matrix.variant.target }}" == *"windows"* ]]; then
            BIN_NAME="${BIN_NAME}.exe"
          else
            BIN_NAME="${BIN_NAME}.bin"
          fi

          echo "Binary name will be: $BIN_NAME"

          echo "::group::Building rustfs"
          # Refresh build information
          # Force rebuild by touching build.rs
          touch rustfs/build.rs

          # Identify the build command and execute it
          if [[ "$USE_ZIGBUILD" == "true" ]]; then
            echo "Build command: cargo zigbuild --profile $PROFILE --target $TARGET_ARG -p rustfs --bins"
            cargo zigbuild --profile $PROFILE --target $TARGET_ARG -p rustfs --bins
          else
            echo "Build command: cargo build --profile $PROFILE --target $TARGET_ARG -p rustfs --bins"
            cargo build --profile $PROFILE --target $TARGET_ARG -p rustfs --bins
          fi

          # Determine the binary path and output path
          BIN_PATH="target/${TARGET_ARG}/${PROFILE_DIR}/rustfs${BIN_SUFFIX}"
          OUT_PATH="target/artifacts/${BIN_NAME}"

          # Create a target directory
          mkdir -p target/artifacts

          echo "Copying binary from ${BIN_PATH} to ${OUT_PATH}"
          cp "${BIN_PATH}" "${OUT_PATH}"

          # Record the output path for use in the next steps
          echo "bin_path=${OUT_PATH}" >> $GITHUB_OUTPUT
          echo "bin_name=${BIN_NAME}" >> $GITHUB_OUTPUT

      - name: Package Binary and Static Assets
        id: package
        run: |
          # Create component file name
          ARTIFACT_NAME="rustfs-${{ matrix.variant.profile }}-${{ matrix.variant.target }}"
          if [ "${{ matrix.variant.glibc }}" != "default" ]; then
            ARTIFACT_NAME="${ARTIFACT_NAME}-glibc${{ matrix.variant.glibc }}"
          fi
          echo "artifact_name=${ARTIFACT_NAME}" >> $GITHUB_OUTPUT

          # Get the binary path
          BIN_PATH="${{ steps.build.outputs.bin_path }}"

          # Create a packaged directory structure - only contains bin and docs directories
          mkdir -p ${ARTIFACT_NAME}/{bin,docs}

          # Copy binary files (note the difference between Windows and other systems)
          if [[ "${{ matrix.variant.target }}" == *"windows"* ]]; then
            cp "${BIN_PATH}" ${ARTIFACT_NAME}/bin/rustfs.exe
          else
            cp "${BIN_PATH}" ${ARTIFACT_NAME}/bin/rustfs
          fi

          # copy documents and licenses
          if [ -f "LICENSE" ]; then
            cp LICENSE ${ARTIFACT_NAME}/docs/
          fi
          if [ -f "README.md" ]; then
            cp README.md ${ARTIFACT_NAME}/docs/
          fi

          # Packaged as zip
          if [ "${{ runner.os }}" = "Windows" ]; then
            7z a ${ARTIFACT_NAME}.zip ${ARTIFACT_NAME}
          else
            zip -r ${ARTIFACT_NAME}.zip ${ARTIFACT_NAME}
          fi

          echo "Created artifact: ${ARTIFACT_NAME}.zip"
          ls -la ${ARTIFACT_NAME}.zip
        shell: bash

      - uses: actions/upload-artifact@v4
        with:
          name: ${{ steps.package.outputs.artifact_name }}
          path: ${{ steps.package.outputs.artifact_name }}.zip
          retention-days: 7

      # Install ossutil2 tool for OSS upload
      - name: Install ossutil2
        if: startsWith(github.ref, 'refs/tags/') || github.ref == 'refs/heads/main'
        shell: bash
        run: |
          echo "::group::Installing ossutil2"
          # Download and install ossutil based on platform
          if [ "${{ runner.os }}" = "Linux" ]; then
            curl -o ossutil.zip https://gosspublic.alicdn.com/ossutil/v2/2.1.1/ossutil-2.1.1-linux-amd64.zip
            unzip -o ossutil.zip
            chmod 755 ossutil-2.1.1-linux-amd64/ossutil
            sudo mv ossutil-2.1.1-linux-amd64/ossutil /usr/local/bin/
            rm -rf ossutil.zip ossutil-2.1.1-linux-amd64
          elif [ "${{ runner.os }}" = "macOS" ]; then
            if [ "$(uname -m)" = "arm64" ]; then
              curl -o ossutil.zip https://gosspublic.alicdn.com/ossutil/v2/2.1.1/ossutil-2.1.1-mac-arm64.zip
          if [[ "${{ matrix.cross }}" == "true" ]]; then
            if [[ "${{ matrix.platform }}" == "windows" ]]; then
              # Use cross for Windows ARM64
              cargo install cross --git https://github.com/cross-rs/cross
              cross build --release --target ${{ matrix.target }} -p rustfs --bins
            else
              curl -o ossutil.zip https://gosspublic.alicdn.com/ossutil/v2/2.1.1/ossutil-2.1.1-mac-amd64.zip
              # Use zigbuild for Linux ARM64
              cargo zigbuild --release --target ${{ matrix.target }} -p rustfs --bins
            fi
            unzip -o ossutil.zip
            chmod 755 ossutil-*/ossutil
            sudo mv ossutil-*/ossutil /usr/local/bin/
            rm -rf ossutil.zip ossutil-*
          elif [ "${{ runner.os }}" = "Windows" ]; then
            curl -o ossutil.zip https://gosspublic.alicdn.com/ossutil/v2/2.1.1/ossutil-2.1.1-windows-amd64.zip
            unzip -o ossutil.zip
            mv ossutil-*/ossutil.exe /usr/bin/ossutil.exe
            rm -rf ossutil.zip ossutil-*
          else
            cargo build --release --target ${{ matrix.target }} -p rustfs --bins
          fi
          echo "ossutil2 installation completed"

      - name: Create release package
        id: package
        shell: bash
        run: |
          PACKAGE_NAME="rustfs-${{ matrix.target }}"

          # Create zip packages for all platforms
          # Ensure zip is available
          if ! command -v zip &> /dev/null; then
            if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then
              sudo apt-get update && sudo apt-get install -y zip
            fi
          fi

          cd target/${{ matrix.target }}/release
          zip "../../../${PACKAGE_NAME}.zip" rustfs
          cd ../../..
          echo "package_name=${PACKAGE_NAME}" >> $GITHUB_OUTPUT
          echo "package_file=${PACKAGE_NAME}.zip" >> $GITHUB_OUTPUT
          echo "Package created: ${PACKAGE_NAME}.zip"

      - name: Upload artifacts
        uses: actions/upload-artifact@v4
        with:
          name: ${{ steps.package.outputs.package_name }}
          path: ${{ steps.package.outputs.package_file }}
          retention-days: ${{ startsWith(github.ref, 'refs/tags/') && 30 || 7 }}

      - name: Upload to Aliyun OSS
        if: startsWith(github.ref, 'refs/tags/') || github.ref == 'refs/heads/main'
        shell: bash
        if: needs.build-check.outputs.build_type == 'release' && env.OSS_ACCESS_KEY_ID != ''
        env:
          OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
          OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
          OSS_REGION: cn-beijing
          OSS_ENDPOINT: https://oss-cn-beijing.aliyuncs.com
        run: |
          echo "::group::Uploading files to OSS"
          # Upload the artifact file to two different paths
          ossutil cp "${{ steps.package.outputs.artifact_name }}.zip" "oss://rustfs-artifacts/artifacts/rustfs/${{ steps.package.outputs.artifact_name }}.zip" --force
          ossutil cp "${{ steps.package.outputs.artifact_name }}.zip" "oss://rustfs-artifacts/artifacts/rustfs/${{ steps.package.outputs.artifact_name }}.latest.zip" --force
          echo "Successfully uploaded artifacts to OSS"
          # Install ossutil (platform-specific)
          OSSUTIL_VERSION="2.1.1"
          case "${{ matrix.platform }}" in
            linux)
              if [[ "$(uname -m)" == "arm64" ]]; then
                ARCH="arm64"
              else
                ARCH="amd64"
              fi
              OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-linux-${ARCH}.zip"
              OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-linux-${ARCH}"

      # Create and upload latest version info
      - name: Create and Upload latest.json
        if: startsWith(github.ref, 'refs/tags/') && matrix.os == 'ubuntu-latest' && matrix.variant.target == 'x86_64-unknown-linux-musl'
        shell: bash
        env:
          OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
          OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
          OSS_REGION: cn-beijing
          OSS_ENDPOINT: https://oss-cn-beijing.aliyuncs.com
        run: |
          echo "::group::Creating latest.json file"
              curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
              unzip "$OSSUTIL_ZIP"
              mv "${OSSUTIL_DIR}/ossutil" /usr/local/bin/
              rm -rf "$OSSUTIL_DIR" "$OSSUTIL_ZIP"
              chmod +x /usr/local/bin/ossutil
              OSSUTIL_BIN=ossutil
              ;;
            macos)
              if [[ "$(uname -m)" == "arm64" ]]; then
                ARCH="arm64"
              else
                ARCH="amd64"
              fi
              OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-mac-${ARCH}.zip"
              OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-mac-${ARCH}"

          # Extract version from tag (remove 'refs/tags/' prefix)
          VERSION="${GITHUB_REF#refs/tags/}"
          # Remove 'v' prefix if present
          VERSION="${VERSION#v}"

          # Get current timestamp in ISO 8601 format
          RELEASE_DATE=$(date -u +"%Y-%m-%dT%H:%M:%SZ")

          # Create latest.json content
          cat > latest.json << EOF
          {
            "version": "${VERSION}",
            "release_date": "${RELEASE_DATE}",
            "release_notes": "Release ${VERSION}",
            "download_url": "https://github.com/rustfs/rustfs/releases/tag/${GITHUB_REF#refs/tags/}"
          }
          EOF

          echo "Generated latest.json:"
          cat latest.json

          echo "::group::Uploading latest.json to OSS"
          # Upload latest.json to rustfs-version bucket
          ossutil cp latest.json "oss://rustfs-version/latest.json" --force
          echo "Successfully uploaded latest.json to OSS"
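
A sketch of how a client might consume the uploaded latest.json; the public URL is a placeholder, since the workflow only shows the OSS destination, and jq is assumed to be available:

```bash
# Fetch the published version metadata and extract fields.
# LATEST_URL is a placeholder: the workflow only shows the OSS path
# oss://rustfs-version/latest.json, not the public endpoint.
LATEST_URL="https://example.com/latest.json"
curl -fsSL "$LATEST_URL" | jq -r '.version, .download_url'
```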

      # Determine whether to perform GUI construction based on conditions
      - name: Prepare for GUI build
        if: startsWith(github.ref, 'refs/tags/')
        id: prepare_gui
        run: |
          # Create a target directory
          mkdir -p ./cli/rustfs-gui/embedded-rustfs/

          # Copy the currently built binary to the embedded-rustfs directory
          if [[ "${{ matrix.variant.target }}" == *"windows"* ]]; then
            cp "${{ steps.build.outputs.bin_path }}" ./cli/rustfs-gui/embedded-rustfs/rustfs.exe
          else
            cp "${{ steps.build.outputs.bin_path }}" ./cli/rustfs-gui/embedded-rustfs/rustfs
          fi

          echo "Copied binary to embedded-rustfs directory"
          ls -la ./cli/rustfs-gui/embedded-rustfs/
        shell: bash

      #Install the dioxus-cli tool
      - uses: taiki-e/cache-cargo-install-action@v2
        if: startsWith(github.ref, 'refs/tags/')
        with:
          tool: dioxus-cli

      # Build and package GUI applications
      - name: Build and Bundle rustfs-gui
        if: startsWith(github.ref, 'refs/tags/')
        id: build_gui
        shell: bash
        run: |
          echo "::group::Setting up build parameters for GUI"
          PROFILE="${{ matrix.variant.profile }}"
          TARGET="${{ matrix.variant.target }}"
          GLIBC="${{ matrix.variant.glibc }}"
          RELEASE_PATH="target/artifacts/$TARGET"

          # Make sure the output directory exists
          mkdir -p ${RELEASE_PATH}

          # Configure the target platform linker
          echo "::group::Configuring linker for $TARGET"
          case "$TARGET" in
            "x86_64-unknown-linux-gnu")
              export CC_x86_64_unknown_linux_gnu=gcc
              export CARGO_TARGET_X86_64_UNKNOWN_LINUX_GNU_LINKER=gcc
              ;;
            "x86_64-unknown-linux-musl")
              export CC_x86_64_unknown_linux_musl=musl-gcc
              export CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_LINKER=musl-gcc
              ;;
            "aarch64-unknown-linux-gnu")
              export CC_aarch64_unknown_linux_gnu=aarch64-linux-gnu-gcc
              export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_LINKER=aarch64-linux-gnu-gcc
              ;;
            "aarch64-unknown-linux-musl")
              export CC_aarch64_unknown_linux_musl=aarch64-linux-musl-gcc
              export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER=aarch64-linux-musl-gcc
              ;;
            "aarch64-apple-darwin")
              export CC_aarch64_apple_darwin=clang
              export CARGO_TARGET_AARCH64_APPLE_DARWIN_LINKER=clang
              ;;
            "x86_64-pc-windows-msvc")
              export CC_x86_64_pc_windows_msvc=cl
              export CARGO_TARGET_X86_64_PC_WINDOWS_MSVC_LINKER=link
              ;;
              curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
              unzip "$OSSUTIL_ZIP"
              mv "${OSSUTIL_DIR}/ossutil" /usr/local/bin/
              rm -rf "$OSSUTIL_DIR" "$OSSUTIL_ZIP"
              chmod +x /usr/local/bin/ossutil
              OSSUTIL_BIN=ossutil
              ;;
          esac

          echo "::group::Building GUI application"
          cd cli/rustfs-gui
          # Upload the package file directly to OSS
          echo "Uploading ${{ steps.package.outputs.package_file }} to OSS..."
          $OSSUTIL_BIN cp "${{ steps.package.outputs.package_file }}" oss://rustfs-artifacts/artifacts/rustfs/ --force

          # Building according to the target platform
          if [[ "$TARGET" == *"apple-darwin"* ]]; then
            echo "Building for macOS"
            dx bundle --platform macos --package-types "macos" --package-types "dmg" --release --profile ${PROFILE} --out-dir ../../${RELEASE_PATH}
          elif [[ "$TARGET" == *"windows-msvc"* ]]; then
            echo "Building for Windows"
            dx bundle --platform windows --package-types "msi" --release --profile ${PROFILE} --out-dir ../../${RELEASE_PATH}
          elif [[ "$TARGET" == *"linux"* ]]; then
            echo "Building for Linux"
            dx bundle --platform linux --package-types "deb" --package-types "rpm" --package-types "appimage" --release --profile ${PROFILE} --out-dir ../../${RELEASE_PATH}
            # Create latest.json (only for the first Linux build to avoid duplication)
            if [[ "${{ matrix.target }}" == "x86_64-unknown-linux-musl" ]]; then
              VERSION="${GITHUB_REF#refs/tags/v}"
              echo "{\"version\":\"${VERSION}\",\"release_date\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}" > latest.json
              $OSSUTIL_BIN cp latest.json oss://rustfs-version/latest.json --force
            fi

          cd ../..

          # Create component name
          GUI_ARTIFACT_NAME="rustfs-gui-${PROFILE}-${TARGET}"

          if [ "$GLIBC" != "default" ]; then
            GUI_ARTIFACT_NAME="${GUI_ARTIFACT_NAME}-glibc${GLIBC}"
          fi

          echo "::group::Packaging GUI application"
          # Select packaging method according to the operating system
          if [ "${{ runner.os }}" = "Windows" ]; then
            7z a ${GUI_ARTIFACT_NAME}.zip ${RELEASE_PATH}/*
          else
            zip -r ${GUI_ARTIFACT_NAME}.zip ${RELEASE_PATH}/*
          fi

          echo "gui_artifact_name=${GUI_ARTIFACT_NAME}" >> $GITHUB_OUTPUT
          echo "Created GUI artifact: ${GUI_ARTIFACT_NAME}.zip"
          ls -la ${GUI_ARTIFACT_NAME}.zip

      # Upload GUI components
      - uses: actions/upload-artifact@v4
        if: startsWith(github.ref, 'refs/tags/')
        with:
          name: ${{ steps.build_gui.outputs.gui_artifact_name }}
          path: ${{ steps.build_gui.outputs.gui_artifact_name }}.zip
          retention-days: 7

      # Upload GUI to Alibaba Cloud OSS
      - name: Upload GUI to Aliyun OSS
        if: startsWith(github.ref, 'refs/tags/')
        shell: bash
        env:
          OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
          OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
          OSS_REGION: cn-beijing
          OSS_ENDPOINT: https://oss-cn-beijing.aliyuncs.com
        run: |
          echo "::group::Uploading GUI files to OSS"
          # Upload the GUI artifact file to two different paths
          ossutil cp "${{ steps.build_gui.outputs.gui_artifact_name }}.zip" "oss://rustfs-artifacts/artifacts/rustfs/${{ steps.build_gui.outputs.gui_artifact_name }}.zip" --force
          ossutil cp "${{ steps.build_gui.outputs.gui_artifact_name }}.zip" "oss://rustfs-artifacts/artifacts/rustfs/${{ steps.build_gui.outputs.gui_artifact_name }}.latest.zip" --force
          echo "Successfully uploaded GUI artifacts to OSS"

  merge:
  # Release management
  release:
    name: GitHub Release
    needs: [build-check, build-rustfs]
    if: always() && needs.build-check.outputs.build_type == 'release'
    runs-on: ubuntu-latest
    needs: [build-rustfs]
    # Only execute merge operation when tag is pushed
    if: startsWith(github.ref, 'refs/tags/')
    permissions:
      contents: write
    steps:
      - uses: actions/upload-artifact/merge@v4
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          name: rustfs-packages
          pattern: "rustfs-*"
          delete-merged: true
          fetch-depth: 0

      - name: Download all artifacts
        uses: actions/download-artifact@v4
        with:
          path: ./release-artifacts

      - name: Prepare release assets
        id: release_prep
        run: |
          VERSION="${GITHUB_REF#refs/tags/}"
          VERSION_CLEAN="${VERSION#v}"

          echo "version=${VERSION}" >> $GITHUB_OUTPUT
          echo "version_clean=${VERSION_CLEAN}" >> $GITHUB_OUTPUT

          # Organize artifacts
          mkdir -p ./release-files

          # Copy all artifacts (.zip files)
          find ./release-artifacts -name "*.zip" -exec cp {} ./release-files/ \;

          # Generate checksums for all files
          cd ./release-files
          if ls *.zip >/dev/null 2>&1; then
            sha256sum *.zip >> SHA256SUMS
            sha512sum *.zip >> SHA512SUMS
          fi
          cd ..

          # Display what we're releasing
          echo "=== Release Files ==="
          ls -la ./release-files/

      - name: Create GitHub Release
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          VERSION="${{ steps.release_prep.outputs.version }}"
          VERSION_CLEAN="${{ steps.release_prep.outputs.version_clean }}"

          # Check if release already exists
          if gh release view "$VERSION" >/dev/null 2>&1; then
            echo "Release $VERSION already exists, skipping creation"
          else
            # Get release notes from tag message
            RELEASE_NOTES=$(git tag -l --format='%(contents)' "${VERSION}")
            if [[ -z "$RELEASE_NOTES" || "$RELEASE_NOTES" =~ ^[[:space:]]*$ ]]; then
              RELEASE_NOTES="Release ${VERSION_CLEAN}"
            fi

            # Determine if this is a prerelease
            PRERELEASE_FLAG=""
            if [[ "$VERSION" == *"alpha"* ]] || [[ "$VERSION" == *"beta"* ]] || [[ "$VERSION" == *"rc"* ]]; then
              PRERELEASE_FLAG="--prerelease"
            fi

            # Create the release only if it doesn't exist
            gh release create "$VERSION" \
              --title "RustFS $VERSION_CLEAN" \
              --notes "$RELEASE_NOTES" \
              $PRERELEASE_FLAG
          fi

      - name: Upload release assets
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          VERSION="${{ steps.release_prep.outputs.version }}"

          cd ./release-files

          # Upload all binary files
          for file in *.zip; do
            if [[ -f "$file" ]]; then
              echo "Uploading $file..."
              gh release upload "$VERSION" "$file" --clobber
            fi
          done

          # Upload checksum files
          if [[ -f "SHA256SUMS" ]]; then
            echo "Uploading SHA256SUMS..."
            gh release upload "$VERSION" "SHA256SUMS" --clobber
          fi

          if [[ -f "SHA512SUMS" ]]; then
            echo "Uploading SHA512SUMS..."
            gh release upload "$VERSION" "SHA512SUMS" --clobber
          fi
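
Consumers of a release can verify downloaded assets against the uploaded checksum files; a minimal sketch, run in a directory containing the downloaded archives:

```bash
# Verify downloaded release archives against the published checksums.
sha256sum -c SHA256SUMS
sha512sum -c SHA512SUMS
```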

      - name: Update release notes
        env:
          GH_TOKEN: ${{ github.token }}
        run: |
          VERSION="${{ steps.release_prep.outputs.version }}"
          VERSION_CLEAN="${{ steps.release_prep.outputs.version_clean }}"

          # Check if release already has custom notes (not auto-generated)
          EXISTING_NOTES=$(gh release view "$VERSION" --json body --jq '.body' 2>/dev/null || echo "")

          # Only update if release notes are empty or auto-generated
          if [[ -z "$EXISTING_NOTES" ]] || [[ "$EXISTING_NOTES" == *"Release ${VERSION_CLEAN}"* ]]; then
            echo "Updating release notes for $VERSION"

            # Get original release notes from tag
            ORIGINAL_NOTES=$(git tag -l --format='%(contents)' "${VERSION}")
            if [[ -z "$ORIGINAL_NOTES" || "$ORIGINAL_NOTES" =~ ^[[:space:]]*$ ]]; then
              ORIGINAL_NOTES="Release ${VERSION_CLEAN}"
            fi

            # Use external template file and substitute variables
            sed -e "s/\${VERSION}/$VERSION/g" \
                -e "s/\${VERSION_CLEAN}/$VERSION_CLEAN/g" \
                -e "s/\${ORIGINAL_NOTES}/$(echo "$ORIGINAL_NOTES" | sed 's/[[\.*^$()+?{|]/\\&/g')/g" \
                .github/workflows/release-notes-template.md > enhanced_notes.md

            # Update the release with enhanced notes
            gh release edit "$VERSION" --notes-file enhanced_notes.md
          else
            echo "Release $VERSION already has custom notes, skipping update to preserve manual edits"
          fi

.github/workflows/ci.yml (80 changes)

@@ -12,12 +12,11 @@

# See the License for the specific language governing permissions and
# limitations under the License.

name: CI
name: Continuous Integration

on:
  push:
    branches:
      - main
    branches: [main]
    paths-ignore:
      - "**.md"
      - "**.txt"

@@ -35,10 +34,9 @@ on:

      - ".github/workflows/build.yml"
      - ".github/workflows/docker.yml"
      - ".github/workflows/audit.yml"
      - ".github/workflows/samply.yml"
      - ".github/workflows/performance.yml"
  pull_request:
    branches:
      - main
    branches: [main]
    paths-ignore:
      - "**.md"
      - "**.txt"

@@ -56,13 +54,18 @@ on:

      - ".github/workflows/build.yml"
      - ".github/workflows/docker.yml"
      - ".github/workflows/audit.yml"
      - ".github/workflows/samply.yml"
      - ".github/workflows/performance.yml"
  schedule:
    - cron: "0 0 * * 0" # at midnight of each sunday
    - cron: "0 0 * * 0" # Weekly on Sunday at midnight UTC
  workflow_dispatch:

env:
  CARGO_TERM_COLOR: always
  RUST_BACKTRACE: 1

jobs:
  skip-check:
    name: Skip Duplicate Actions
    permissions:
      actions: write
      contents: read

@@ -70,59 +73,84 @@ jobs:

    outputs:
      should_skip: ${{ steps.skip_check.outputs.should_skip }}
    steps:
      - id: skip_check
      - name: Skip duplicate actions
        id: skip_check
        uses: fkirc/skip-duplicate-actions@v5
        with:
          concurrent_skipping: "same_content_newer"
          cancel_others: true
          paths_ignore: '["*.md"]'
          paths_ignore: '["*.md", "docs/**", "deploy/**"]'
          # Never skip release events and tag pushes
          do_not_skip: '["release", "push"]'

  develop:
  test-and-lint:
    name: Test and Lint
    needs: skip-check
    if: needs.skip-check.outputs.should_skip != 'true'
    runs-on: ubuntu-latest
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
      - uses: ./.github/actions/setup
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Test
        run: cargo test --all --exclude e2e_test
      - name: Setup Rust environment
        uses: ./.github/actions/setup
        with:
          rust-version: stable
          cache-shared-key: ci-test-${{ hashFiles('**/Cargo.lock') }}
          github-token: ${{ secrets.GITHUB_TOKEN }}
          cache-save-if: ${{ github.ref == 'refs/heads/main' }}

      - name: Format
      - name: Run tests
        run: |
          cargo nextest run --all --exclude e2e_test
          cargo test --all --doc

      - name: Check code formatting
        run: cargo fmt --all --check

      - name: Lint
      - name: Run clippy lints
        run: cargo clippy --all-targets --all-features -- -D warnings

  s3s-e2e:
    name: E2E (s3s-e2e)
  e2e-tests:
    name: End-to-End Tests
    needs: skip-check
    if: needs.skip-check.outputs.should_skip != 'true'
    runs-on: ubuntu-latest
    timeout-minutes: 30
    steps:
      - uses: actions/checkout@v4.2.2
      - uses: ./.github/actions/setup
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Install s3s-e2e
      - name: Setup Rust environment
        uses: ./.github/actions/setup
        with:
          rust-version: stable
          cache-shared-key: ci-e2e-${{ hashFiles('**/Cargo.lock') }}
          cache-save-if: ${{ github.ref == 'refs/heads/main' }}
          github-token: ${{ secrets.GITHUB_TOKEN }}

      - name: Install s3s-e2e test tool
        uses: taiki-e/cache-cargo-install-action@v2
        with:
          tool: s3s-e2e
          git: https://github.com/Nugine/s3s.git
          rev: b7714bfaa17ddfa9b23ea01774a1e7bbdbfc2ca3

      - name: Build debug
      - name: Build debug binary
        run: |
          touch rustfs/build.rs
          cargo build -p rustfs --bins

      - name: Run s3s-e2e
      - name: Run end-to-end tests
        run: |
          s3s-e2e --version
          ./scripts/e2e-run.sh ./target/debug/rustfs /tmp/rustfs

      - uses: actions/upload-artifact@v4
      - name: Upload test logs
        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: s3s-e2e.logs
          name: e2e-test-logs-${{ github.run_number }}
          path: /tmp/rustfs.log
          retention-days: 3
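
The end-to-end job can be reproduced locally with the same script the workflow calls; the paths are taken from the job above:

```bash
# Local equivalent of the e2e job: build the debug binary, then run
# the e2e script against it.
touch rustfs/build.rs
cargo build -p rustfs --bins
./scripts/e2e-run.sh ./target/debug/rustfs /tmp/rustfs
```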
294
.github/workflows/docker.yml
vendored
@@ -12,155 +12,112 @@
# See the License for the specific language governing permissions and
# limitations under the License.

name: Build and Push Docker Images
name: Docker Images

on:
push:
tags:
- "v*"
branches:
- main
tags: ["*"]
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
pull_request:
branches:
- main
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
workflow_dispatch:
inputs:
push_to_registry:
description: "Push images to registry"
push_images:
description: "Push images to registries"
required: false
default: true
type: boolean

env:
REGISTRY_IMAGE_DOCKERHUB: rustfs/rustfs
REGISTRY_IMAGE_GHCR: ghcr.io/${{ github.repository }}
CARGO_TERM_COLOR: always
REGISTRY_DOCKERHUB: rustfs/rustfs
REGISTRY_GHCR: ghcr.io/${{ github.repository }}

jobs:
# Skip duplicate job runs
skip-check:
permissions:
actions: write
contents: read
# Check if we should build
build-check:
name: Build Check
runs-on: ubuntu-latest
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
should_build: ${{ steps.check.outputs.should_build }}
should_push: ${{ steps.check.outputs.should_push }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
concurrent_skipping: "same_content_newer"
cancel_others: true
paths_ignore: '["*.md", "docs/**"]'

# Build RustFS binary for different platforms
build-binary:
needs: skip-check
# Only execute in the following cases: 1) tag push 2) commit message contains --build 3) workflow_dispatch 4) PR
if: needs.skip-check.outputs.should_skip != 'true' && (startsWith(github.ref, 'refs/tags/') || contains(github.event.head_commit.message, '--build') || github.event_name == 'workflow_dispatch' || github.event_name == 'pull_request')
strategy:
matrix:
include:
- target: x86_64-unknown-linux-musl
os: ubuntu-latest
arch: amd64
use_cross: false
- target: aarch64-unknown-linux-gnu
os: ubuntu-latest
arch: arm64
use_cross: true
runs-on: ${{ matrix.os }}
timeout-minutes: 120
steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
target: ${{ matrix.target }}
components: rustfmt, clippy

- name: Install cross-compilation dependencies (native build)
if: matrix.use_cross == false
- name: Check build conditions
id: check
run: |
sudo apt-get update
sudo apt-get install -y musl-tools
should_build=false
should_push=false

- name: Install cross tool (cross compilation)
if: matrix.use_cross == true
uses: taiki-e/install-action@v2
with:
tool: cross
# Always build on workflow_dispatch or when changes detected
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ github.event_name }}" == "push" ]] || \
[[ "${{ github.event_name }}" == "pull_request" ]]; then
should_build=true
fi

- name: Install protoc
uses: arduino/setup-protoc@v3
with:
version: "31.1"
repo-token: ${{ secrets.GITHUB_TOKEN }}
# Push only on main branch, tags, or manual trigger
if [[ "${{ github.ref }}" == "refs/heads/main" ]] || \
[[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]] || \
[[ "${{ github.event.inputs.push_images }}" == "true" ]]; then
should_push=true
fi

- name: Install flatc
uses: Nugine/setup-flatc@v1
with:
version: "25.2.10"
echo "should_build=$should_build" >> $GITHUB_OUTPUT
echo "should_push=$should_push" >> $GITHUB_OUTPUT
echo "Build: $should_build, Push: $should_push"

- name: Cache cargo dependencies
uses: actions/cache@v3
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-${{ matrix.target }}-
${{ runner.os }}-cargo-

- name: Generate protobuf code
run: cargo run --bin gproto

- name: Build RustFS binary (native)
if: matrix.use_cross == false
run: |
cargo build --release --target ${{ matrix.target }} --bin rustfs

- name: Build RustFS binary (cross)
if: matrix.use_cross == true
run: |
cross build --release --target ${{ matrix.target }} --bin rustfs

- name: Upload binary artifact
uses: actions/upload-artifact@v4
with:
name: rustfs-${{ matrix.arch }}
path: target/${{ matrix.target }}/release/rustfs
retention-days: 1

# Build and push multi-arch Docker images
build-images:
needs: [skip-check, build-binary]
if: needs.skip-check.outputs.should_skip != 'true'
# Build multi-arch Docker images
build-docker:
name: Build Docker Images
needs: build-check
if: needs.build-check.outputs.should_build == 'true'
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
image-type: [production, ubuntu, rockylinux, devenv]
variant:
- name: production
dockerfile: Dockerfile
platforms: linux/amd64,linux/arm64
- name: ubuntu
dockerfile: .docker/Dockerfile.ubuntu22.04
platforms: linux/amd64,linux/arm64
- name: alpine
dockerfile: .docker/Dockerfile.alpine
platforms: linux/amd64,linux/arm64
steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Download binary artifacts
uses: actions/download-artifact@v4
with:
path: ./artifacts

- name: Setup binary files
run: |
mkdir -p target/x86_64-unknown-linux-musl/release
mkdir -p target/aarch64-unknown-linux-gnu/release
cp artifacts/rustfs-amd64/rustfs target/x86_64-unknown-linux-musl/release/
cp artifacts/rustfs-arm64/rustfs target/aarch64-unknown-linux-gnu/release/
chmod +x target/*/release/rustfs

- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3

@@ -168,75 +125,86 @@ jobs:
uses: docker/setup-qemu-action@v3

- name: Login to Docker Hub
if: github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
if: needs.build-check.outputs.should_push == 'true' && secrets.DOCKERHUB_USERNAME != ''
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
if: needs.build-check.outputs.should_push == 'true'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}

- name: Set Dockerfile and context
id: dockerfile
run: |
case "${{ matrix.image-type }}" in
production)
echo "dockerfile=Dockerfile" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=" >> $GITHUB_OUTPUT
;;
ubuntu)
echo "dockerfile=.docker/Dockerfile.ubuntu22.04" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=-ubuntu22.04" >> $GITHUB_OUTPUT
;;
rockylinux)
echo "dockerfile=.docker/Dockerfile.rockylinux9.3" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=-rockylinux9.3" >> $GITHUB_OUTPUT
;;
devenv)
echo "dockerfile=.docker/Dockerfile.devenv" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=-devenv" >> $GITHUB_OUTPUT
;;
esac

- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.REGISTRY_IMAGE_DOCKERHUB }}
${{ env.REGISTRY_IMAGE_GHCR }}
${{ env.REGISTRY_DOCKERHUB }}
${{ env.REGISTRY_GHCR }}
tags: |
type=ref,event=branch,suffix=${{ steps.dockerfile.outputs.suffix }}
type=ref,event=pr,suffix=${{ steps.dockerfile.outputs.suffix }}
type=semver,pattern={{version}},suffix=${{ steps.dockerfile.outputs.suffix }}
type=semver,pattern={{major}}.{{minor}},suffix=${{ steps.dockerfile.outputs.suffix }}
type=semver,pattern={{major}},suffix=${{ steps.dockerfile.outputs.suffix }}
type=raw,value=latest,suffix=${{ steps.dockerfile.outputs.suffix }},enable={{is_default_branch}}
type=ref,event=branch,suffix=-${{ matrix.variant.name }}
type=ref,event=pr,suffix=-${{ matrix.variant.name }}
type=semver,pattern={{version}},suffix=-${{ matrix.variant.name }}
type=semver,pattern={{major}}.{{minor}},suffix=-${{ matrix.variant.name }}
type=raw,value=latest,suffix=-${{ matrix.variant.name }},enable={{is_default_branch}}
flavor: |
latest=false

- name: Build and push multi-arch Docker image
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: ${{ steps.dockerfile.outputs.context }}
file: ${{ steps.dockerfile.outputs.dockerfile }}
platforms: linux/amd64,linux/arm64
push: ${{ (github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))) || github.event.inputs.push_to_registry == 'true' }}
context: .
file: ${{ matrix.variant.dockerfile }}
platforms: ${{ matrix.variant.platforms }}
push: ${{ needs.build-check.outputs.should_push == 'true' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha,scope=${{ matrix.image-type }}
cache-to: type=gha,mode=max,scope=${{ matrix.image-type }}
cache-from: type=gha,scope=docker-${{ matrix.variant.name }}
cache-to: type=gha,mode=max,scope=docker-${{ matrix.variant.name }}
build-args: |
BUILDTIME=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.created'] }}
VERSION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.version'] }}
REVISION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}

# Create manifest for main production image
create-manifest:
name: Create Manifest
needs: [build-check, build-docker]
if: needs.build-check.outputs.should_push == 'true' && startsWith(github.ref, 'refs/tags/')
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
if: secrets.DOCKERHUB_USERNAME != ''
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}

- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}

- name: Create and push manifest
run: |
VERSION=${GITHUB_REF#refs/tags/}

# Create main image tag (without variant suffix)
if [[ -n "${{ secrets.DOCKERHUB_USERNAME }}" ]]; then
docker buildx imagetools create \
-t ${{ env.REGISTRY_DOCKERHUB }}:${VERSION} \
-t ${{ env.REGISTRY_DOCKERHUB }}:latest \
${{ env.REGISTRY_DOCKERHUB }}:${VERSION}-production
fi

docker buildx imagetools create \
-t ${{ env.REGISTRY_GHCR }}:${VERSION} \
-t ${{ env.REGISTRY_GHCR }}:latest \
${{ env.REGISTRY_GHCR }}:${VERSION}-production
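The reworked workflow gates everything on the `build-check` job's `should_build`/`should_push` outputs, and manual dispatch now takes a `push_images` boolean. A sketch of exercising it — the local image tag is a placeholder, and the multi-arch build assumes a Buildx builder with QEMU emulation is already configured:

```bash
# Trigger the workflow manually without pushing to the registries
gh workflow run docker.yml -f push_images=false

# Replicate one matrix variant locally (build only, no push)
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -f .docker/Dockerfile.ubuntu22.04 \
  -t rustfs/rustfs:dev-ubuntu \
  .
```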
18
.github/workflows/issue-translator.yml
vendored
Normal file
@@ -0,0 +1,18 @@
name: 'issue-translator'
on:
issue_comment:
types: [created]
issues:
types: [opened]

jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: usthe/issues-translate-action@v2.7
with:
IS_MODIFY_TITLE: false
# not require, default false, . Decide whether to modify the issue title
# if true, the robot account @Issues-translate-bot must have modification permissions, invite @Issues-translate-bot to your project or use your custom bot.
CUSTOM_BOT_NOTE: Bot detected the issue body's language is not English, translate it automatically.
# not require. Customize the translation robot prefix message.
140
.github/workflows/performance.yml
vendored
Normal file
@@ -0,0 +1,140 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name: Performance Testing

on:
push:
branches: [main]
paths:
- '**/*.rs'
- '**/Cargo.toml'
- '**/Cargo.lock'
- '.github/workflows/performance.yml'
workflow_dispatch:
inputs:
profile_duration:
description: "Profiling duration in seconds"
required: false
default: "120"
type: string

env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1

jobs:
performance-profile:
name: Performance Profiling
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: nightly
cache-shared-key: perf-${{ hashFiles('**/Cargo.lock') }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
github-token: ${{ secrets.GITHUB_TOKEN }}

- name: Install additional nightly components
run: rustup component add llvm-tools-preview

- name: Install samply profiler
uses: taiki-e/cache-cargo-install-action@v2
with:
tool: samply

- name: Configure kernel for profiling
run: echo '1' | sudo tee /proc/sys/kernel/perf_event_paranoid

- name: Prepare test environment
run: |
# Create test volumes
for i in {0..4}; do
mkdir -p ./target/volume/test$i
done

# Set environment variables
echo "RUSTFS_VOLUMES=./target/volume/test{0...4}" >> $GITHUB_ENV
echo "RUST_LOG=rustfs=info,ecstore=info,s3s=info,iam=info,rustfs-obs=info" >> $GITHUB_ENV

- name: Download static files
run: |
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
-o tempfile.zip --retry 3 --retry-delay 5
unzip -o tempfile.zip -d ./rustfs/static
rm tempfile.zip

- name: Build with profiling optimizations
run: |
RUSTFLAGS="-C force-frame-pointers=yes -C debug-assertions=off" \
cargo +nightly build --profile profiling -p rustfs --bins

- name: Run performance profiling
id: profiling
run: |
DURATION="${{ github.event.inputs.profile_duration || '120' }}"
echo "Running profiling for ${DURATION} seconds..."

timeout "${DURATION}s" samply record \
--output samply-profile.json \
./target/profiling/rustfs ${RUSTFS_VOLUMES} || true

if [ -f "samply-profile.json" ]; then
echo "profile_generated=true" >> $GITHUB_OUTPUT
echo "Profile generated successfully"
else
echo "profile_generated=false" >> $GITHUB_OUTPUT
echo "::warning::Profile data not generated"
fi

- name: Upload profile data
if: steps.profiling.outputs.profile_generated == 'true'
uses: actions/upload-artifact@v4
with:
name: performance-profile-${{ github.run_number }}
path: samply-profile.json
retention-days: 30

benchmark:
name: Benchmark Tests
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- name: Checkout repository
uses: actions/checkout@v4

- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: stable
cache-shared-key: bench-${{ hashFiles('**/Cargo.lock') }}
github-token: ${{ secrets.GITHUB_TOKEN }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}

- name: Run benchmarks
run: |
cargo bench --package ecstore --bench comparison_benchmark -- --output-format json | \
tee benchmark-results.json

- name: Upload benchmark results
uses: actions/upload-artifact@v4
with:
name: benchmark-results-${{ github.run_number }}
path: benchmark-results.json
retention-days: 7
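The profiling steps above are reproducible on a developer machine; a condensed sketch of the same sequence, assuming a Linux host with samply installed and the workspace's `profiling` cargo profile available:

```bash
# Relax the kernel's perf restrictions so samply can sample (Linux only)
echo '1' | sudo tee /proc/sys/kernel/perf_event_paranoid

# Build with frame pointers so the profiler gets usable stacks
RUSTFLAGS="-C force-frame-pointers=yes" \
  cargo +nightly build --profile profiling -p rustfs --bins

# Create test volumes; note the shell's {0..4} vs RustFS's own {0...4} range syntax,
# which is quoted so it reaches the binary untouched
mkdir -p ./target/volume/test{0..4}
timeout 120s samply record --output samply-profile.json \
  ./target/profiling/rustfs './target/volume/test{0...4}' || true
```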
78
.github/workflows/release-notes-template.md
vendored
Normal file
@@ -0,0 +1,78 @@
## RustFS ${VERSION_CLEAN}

${ORIGINAL_NOTES}

---

### 🚀 Quick Download

**Linux (Static Binaries - No Dependencies):**

```bash
# x86_64 (Intel/AMD)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-x86_64-unknown-linux-musl.zip
unzip rustfs-x86_64-unknown-linux-musl.zip
sudo mv rustfs /usr/local/bin/

# ARM64 (Graviton, Apple Silicon VMs)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-aarch64-unknown-linux-musl.zip
unzip rustfs-aarch64-unknown-linux-musl.zip
sudo mv rustfs /usr/local/bin/
```

**macOS:**

```bash
# Apple Silicon (M1/M2/M3)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-aarch64-apple-darwin.zip
unzip rustfs-aarch64-apple-darwin.zip
sudo mv rustfs /usr/local/bin/

# Intel
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-x86_64-apple-darwin.zip
unzip rustfs-x86_64-apple-darwin.zip
sudo mv rustfs /usr/local/bin/
```

### 📁 Available Downloads

| Platform | Architecture | File | Description |
|----------|-------------|------|-------------|
| Linux | x86_64 | `rustfs-x86_64-unknown-linux-musl.zip` | Static binary, no dependencies |
| Linux | ARM64 | `rustfs-aarch64-unknown-linux-musl.zip` | Static binary, no dependencies |
| macOS | Apple Silicon | `rustfs-aarch64-apple-darwin.zip` | Native binary, ZIP archive |
| macOS | Intel | `rustfs-x86_64-apple-darwin.zip` | Native binary, ZIP archive |

### 🔐 Verification

Download checksums and verify your download:

```bash
# Download checksums
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/SHA256SUMS

# Verify (Linux)
sha256sum -c SHA256SUMS --ignore-missing

# Verify (macOS)
shasum -a 256 -c SHA256SUMS --ignore-missing
```

### 🛠️ System Requirements

- **Linux**: Any distribution with glibc 2.17+ (CentOS 7+, Ubuntu 16.04+)
- **macOS**: 10.15+ (Catalina or later)
- **Windows**: Windows 10 version 1809 or later

### 📚 Documentation

- [Installation Guide](https://github.com/rustfs/rustfs#installation)
- [Quick Start](https://github.com/rustfs/rustfs#quick-start)
- [Configuration](https://github.com/rustfs/rustfs/blob/main/docs/)
- [API Documentation](https://docs.rs/rustfs)

### 🆘 Support

- 🐛 [Report Issues](https://github.com/rustfs/rustfs/issues)
- 💬 [Community Discussions](https://github.com/rustfs/rustfs/discussions)
- 📖 [Documentation](https://github.com/rustfs/rustfs/tree/main/docs)
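The template's `${VERSION}`, `${VERSION_CLEAN}`, and `${ORIGINAL_NOTES}` placeholders are shell-style, so the release job presumably expands them before publishing. The consuming step isn't part of this diff; `envsubst` is one plausible way to render it:

```bash
# Hypothetical rendering step; variable values are examples, not taken from the diff
export VERSION="1.0.0-alpha.1"                     # the pushed tag - example value
export VERSION_CLEAN="1.0.0-alpha.1"               # tag without a leading "v" - example value
export ORIGINAL_NOTES="$(cat generated-notes.md)"  # assumed auto-generated notes file

envsubst < .github/workflows/release-notes-template.md > release-notes.md
```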
82
.github/workflows/samply.yml
vendored
@@ -1,82 +0,0 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

name: Profile with Samply
on:
push:
branches: [ main ]
workflow_dispatch:
jobs:
profile:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4.2.2

- uses: dtolnay/rust-toolchain@nightly
with:
components: llvm-tools-preview

- uses: actions/cache@v4
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}

- name: Install samply
uses: taiki-e/cache-cargo-install-action@v2
with:
tool: samply

- name: Configure kernel for profiling
run: echo '1' | sudo tee /proc/sys/kernel/perf_event_paranoid

- name: Create test volumes
run: |
for i in {0..4}; do
mkdir -p ./target/volume/test$i
done

- name: Set environment variables
run: |
echo "RUSTFS_VOLUMES=./target/volume/test{0...4}" >> $GITHUB_ENV
echo "RUST_LOG=rustfs=info,ecstore=info,s3s=info,iam=info,rustfs-obs=info" >> $GITHUB_ENV

- name: Download static files
run: |
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" -o tempfile.zip && unzip -o tempfile.zip -d ./rustfs/static && rm tempfile.zip

- name: Build with profiling
run: |
RUSTFLAGS="-C force-frame-pointers=yes" cargo +nightly build --profile profiling -p rustfs --bins

- name: Run samply with timeout
id: samply_record
run: |
timeout 120s samply record --output samply.json ./target/profiling/rustfs ${RUSTFS_VOLUMES}
if [ -f "samply.json" ]; then
echo "profile_generated=true" >> $GITHUB_OUTPUT
else
echo "profile_generated=false" >> $GITHUB_OUTPUT
echo "::error::Failed to generate profile data"
fi

- name: Upload profile data
if: steps.samply_record.outputs.profile_generated == 'true'
uses: actions/upload-artifact@v4
with:
name: samply-profile-${{ github.run_number }}
path: samply.json
retention-days: 7
39
CLA.md
Normal file
@@ -0,0 +1,39 @@
RustFS Individual Contributor License Agreement

Thank you for your interest in contributing documentation and related software code to a project hosted or managed by RustFS. In order to clarify the intellectual property license granted with Contributions from any person or entity, RustFS must have a Contributor License Agreement (“CLA”) on file that has been signed by each Contributor, indicating agreement to the license terms below. This version of the Contributor License Agreement allows an individual to submit Contributions to the applicable project. If you are making a submission on behalf of a legal entity, then you should sign the separate Corporate Contributor License Agreement.

You accept and agree to the following terms and conditions for Your present and future Contributions submitted to RustFS. You hereby irrevocably assign and transfer to RustFS all right, title, and interest in and to Your Contributions, including all copyrights and other intellectual property rights therein.

Definitions

“You” (or “Your”) shall mean the copyright owner or legal entity authorized by the copyright owner that is making this Agreement with RustFS. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are under common control with that entity are considered to be a single Contributor. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

“Contribution” shall mean any original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to RustFS for inclusion in, or documentation of, any of the products or projects owned or managed by RustFS (the “Work”), including without limitation any Work described in Schedule A. For the purposes of this definition, “submitted” means any form of electronic or written communication sent to RustFS or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, RustFS for the purpose of discussing and improving the Work.

Assignment of Copyright

Subject to the terms and conditions of this Agreement, You hereby irrevocably assign and transfer to RustFS all right, title, and interest in and to Your Contributions, including all copyrights and other intellectual property rights therein, for the entire term of such rights, including all renewals and extensions. You agree to execute all documents and take all actions as may be reasonably necessary to vest in RustFS the ownership of Your Contributions and to assist RustFS in perfecting, maintaining, and enforcing its rights in Your Contributions.

Grant of Patent License

Subject to the terms and conditions of this Agreement, You hereby grant to RustFS and to recipients of documentation and software distributed by RustFS a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) was submitted. If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed.

You represent that you are legally entitled to grant the above assignment and license.

You represent that each of Your Contributions is Your original creation (see section 7 for submissions on behalf of others). You represent that Your Contribution submissions include complete details of any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which you are personally aware and which are associated with any part of Your Contributions.

You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, You provide Your Contributions on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.

Should You wish to submit work that is not Your original creation, You may submit it to RustFS separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as “Submitted on behalf of a third-party: [named here]”.

You agree to notify RustFS of any facts or circumstances of which you become aware that would make these representations inaccurate in any respect.

Modification of CLA

RustFS reserves the right to update or modify this CLA in the future. Any updates or modifications to this CLA shall apply only to Contributions made after the effective date of the revised CLA. Contributions made prior to the update shall remain governed by the version of the CLA that was in effect at the time of submission. It is not necessary for all Contributors to re-sign the CLA when the CLA is updated or modified.

Governing Law and Dispute Resolution

This Agreement will be governed by and construed in accordance with the laws of the People’s Republic of China excluding that body of laws known as conflict of laws. The parties expressly agree that the United Nations Convention on Contracts for the International Sale of Goods will not apply. Any legal action or proceeding arising under this Agreement will be brought exclusively in the courts located in Beijing, China, and the parties hereby irrevocably consent to the personal jurisdiction and venue therein.

For your reading convenience, this Agreement is written in parallel English and Chinese sections. To the extent there is a conflict between the English and Chinese sections, the English sections shall govern.
128
CODE_OF_CONDUCT.md
Normal file
@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct

## Our Pledge

We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.

We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.

## Our Standards

Examples of behavior that contributes to a positive environment for our
community include:

* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community

Examples of unacceptable behavior include:

* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting

## Enforcement Responsibilities

Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.

Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.

## Scope

This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
hello@rustfs.com.
All complaints will be reviewed and investigated promptly and fairly.

All community leaders are obligated to respect the privacy and security of the
reporter of any incident.

## Enforcement Guidelines

Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:

### 1. Correction

**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.

**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.

### 2. Warning

**Community Impact**: A violation through a single incident or series
of actions.

**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.

### 3. Temporary Ban

**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.

**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.

### 4. Permanent Ban

**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.

**Consequence**: A permanent ban from any sort of public interaction within
the community.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.
@@ -11,21 +11,25 @@
Before every commit, you **MUST**:

1. **Format your code**:

```bash
cargo fmt --all
```

2. **Verify formatting**:

```bash
cargo fmt --all --check
```

3. **Pass clippy checks**:

```bash
cargo clippy --all-targets --all-features -- -D warnings
```

4. **Ensure compilation**:

```bash
cargo check --all-targets
```
@@ -136,6 +140,7 @@ Install the `rust-analyzer` extension and add to your `settings.json`:
#### Other IDEs

Configure your IDE to:

- Use the project's `rustfmt.toml` configuration
- Format on save
- Run clippy checks
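The four checks above lend themselves to a local git hook so they run automatically before each commit; a minimal sketch (save as `.git/hooks/pre-commit` and `chmod +x` it):

```bash
#!/usr/bin/env bash
# Local pre-commit hook running the project's required checks; aborts the commit on the first failure
set -euo pipefail

cargo fmt --all
cargo fmt --all --check
cargo clippy --all-targets --all-features -- -D warnings
cargo check --all-targets
```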
196
Cargo.lock
generated
@@ -472,9 +472,9 @@ dependencies = [

[[package]]
name = "async-channel"
version = "2.3.1"
version = "2.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89b47800b0be77592da0afd425cc03468052844aff33b84e33cc696f64e77b6a"
checksum = "16c74e56284d2188cabb6ad99603d1ace887a5d7e7b695d01b728155ed9ed427"
dependencies = [
"concurrent-queue",
"event-listener-strategy",
@@ -733,9 +733,9 @@ dependencies = [

[[package]]
name = "aws-sdk-s3"
version = "1.95.0"
version = "1.96.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a316e3c4c38837084dfbf87c0fc6ea016b3dc3e1f867d9d7f5eddfe47e5cae37"
checksum = "6e25d24de44b34dcdd5182ac4e4c6f07bcec2661c505acef94c0d293b65505fe"
dependencies = [
"aws-credential-types",
"aws-runtime",
@@ -1171,7 +1171,7 @@ dependencies = [
"bitflags 2.9.1",
"cexpr",
"clang-sys",
"itertools 0.12.1",
"itertools 0.11.0",
"lazy_static",
"lazycell",
"log",
@@ -2058,7 +2058,6 @@ dependencies = [
"ciborium",
"clap",
"criterion-plot",
"futures",
"is-terminal",
"itertools 0.10.5",
"num-traits",
@@ -2071,7 +2070,6 @@ dependencies = [
"serde_derive",
"serde_json",
"tinytemplate",
"tokio",
"walkdir",
]

@@ -3471,7 +3469,7 @@ checksum = "92773504d58c093f6de2459af4af33faa518c13451eb8f2b5698ed3d36e7c813"

[[package]]
name = "e2e_test"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"bytes",
"flatbuffers 25.2.10",
@@ -4948,6 +4946,17 @@ version = "3.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8bb03732005da905c88227371639bf1ad885cc712789c011c31c5fb3ab3ccf02"

[[package]]
name = "io-uring"
version = "0.7.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b86e202f00093dcba4275d4636b93ef9dd75d025ae560d2521b45ea28ab49013"
dependencies = [
"bitflags 2.9.1",
"cfg-if",
"libc",
]

[[package]]
name = "ipnet"
version = "2.11.0"
@@ -5014,15 +5023,6 @@ dependencies = [
"either",
]

[[package]]
name = "itertools"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba291022dbbd398a455acf126c1e341954079855bc60dfdda641363bd6922569"
dependencies = [
"either",
]

[[package]]
name = "itertools"
version = "0.13.0"
@@ -5332,7 +5332,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "07033963ba89ebaf1584d767badaa2e8fcec21aedea6b8c0346d487d49c28667"
dependencies = [
"cfg-if",
"windows-targets 0.53.0",
"windows-targets 0.52.6",
]

[[package]]
@@ -5625,9 +5625,9 @@ checksum = "490cc448043f947bae3cbee9c203358d62dbee0db12107a74be5c30ccfd09771"

[[package]]
name = "memchr"
version = "2.7.4"
version = "2.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3"
checksum = "32a282da65faaf38286cf3be983213fcf1d2e2a58700e808f83f4ea9a4804bc0"

[[package]]
name = "memoffset"
@@ -7830,7 +7830,7 @@ dependencies = [

[[package]]
name = "rustfs"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-trait",
"atoi",
@@ -7857,6 +7857,7 @@ dependencies = [
"pin-project-lite",
"reqwest",
"rust-embed",
"rustfs-ahm",
"rustfs-appauth",
"rustfs-common",
"rustfs-config",
@@ -7897,9 +7898,37 @@ dependencies = [
"zip",
]

[[package]]
name = "rustfs-ahm"
version = "0.0.3"
dependencies = [
"anyhow",
"async-trait",
"bytes",
"futures",
"lazy_static",
"rmp-serde",
"rustfs-common",
"rustfs-ecstore",
"rustfs-filemeta",
"rustfs-lock",
"rustfs-madmin",
"rustfs-utils",
"serde",
"serde_json",
"thiserror 2.0.12",
"time",
"tokio",
"tokio-test",
"tokio-util",
"tracing",
"url",
"uuid",
]

[[package]]
name = "rustfs-appauth"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"base64-simd",
"rsa",
@@ -7909,7 +7938,7 @@ dependencies = [

[[package]]
name = "rustfs-common"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"lazy_static",
"tokio",
@@ -7918,7 +7947,7 @@ dependencies = [

[[package]]
name = "rustfs-config"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"const-str",
"serde",
@@ -7927,7 +7956,7 @@ dependencies = [

[[package]]
name = "rustfs-crypto"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"aes-gcm",
"argon2",
@@ -7945,7 +7974,7 @@ dependencies = [

[[package]]
name = "rustfs-ecstore"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-channel",
"async-trait",
@@ -8020,7 +8049,7 @@ dependencies = [

[[package]]
name = "rustfs-filemeta"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"byteorder",
"bytes",
@@ -8041,7 +8070,7 @@ dependencies = [

[[package]]
name = "rustfs-gui"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"chrono",
"dioxus",
@@ -8062,7 +8091,7 @@ dependencies = [

[[package]]
name = "rustfs-iam"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"arc-swap",
"async-trait",
@@ -8086,7 +8115,7 @@ dependencies = [

[[package]]
name = "rustfs-lock"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-trait",
"lazy_static",
@@ -8103,7 +8132,7 @@ dependencies = [

[[package]]
name = "rustfs-madmin"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"chrono",
"humantime",
@@ -8115,7 +8144,7 @@ dependencies = [

[[package]]
name = "rustfs-notify"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-trait",
"axum",
@@ -8144,7 +8173,7 @@ dependencies = [

[[package]]
name = "rustfs-obs"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-trait",
"chrono",
@@ -8177,7 +8206,7 @@ dependencies = [

[[package]]
name = "rustfs-policy"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"base64-simd",
"ipnetwork",
@@ -8196,7 +8225,7 @@ dependencies = [

[[package]]
name = "rustfs-protos"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"flatbuffers 25.2.10",
"prost",
@@ -8207,12 +8236,11 @@ dependencies = [

[[package]]
name = "rustfs-rio"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"aes-gcm",
"bytes",
"crc32fast",
"criterion",
"futures",
"http 1.3.1",
"md-5",
@@ -8256,7 +8284,7 @@ dependencies = [

[[package]]
name = "rustfs-s3select-api"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-trait",
"bytes",
@@ -8280,7 +8308,7 @@ dependencies = [

[[package]]
name = "rustfs-s3select-query"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-recursion",
"async-trait",
@@ -8298,21 +8326,25 @@ dependencies = [

[[package]]
name = "rustfs-signer"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"bytes",
"http 1.3.1",
"hyper 1.6.0",
"lazy_static",
"rand 0.9.1",
"rustfs-utils",
"s3s",
"serde",
"serde_urlencoded",
"tempfile",
"time",
"tracing",
]

[[package]]
name = "rustfs-utils"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"base64-simd",
"blake3",
@@ -8356,7 +8388,7 @@ dependencies = [

[[package]]
name = "rustfs-workers"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"tokio",
"tracing",
@@ -8364,7 +8396,7 @@ dependencies = [

[[package]]
name = "rustfs-zip"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-compression",
"tokio",
@@ -8552,8 +8584,9 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"

[[package]]
name = "s3s"
version = "0.12.0-dev"
source = "git+https://github.com/Nugine/s3s.git?rev=4733cdfb27b2713e832967232cbff413bb768c10#4733cdfb27b2713e832967232cbff413bb768c10"
version = "0.12.0-minio-preview.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b630a6b9051328a0c185cacf723180ccd7936d08f1fda0b932a60b1b9cd860d"
dependencies = [
"arrayvec",
"async-trait",
@@ -9834,17 +9867,19 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"

[[package]]
name = "tokio"
version = "1.45.1"
version = "1.46.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75ef51a33ef1da925cea3e4eb122833cb377c61439ca401b770f54902b806779"
checksum = "0cc3a2344dafbe23a245241fe8b09735b521110d30fcefbbd5feb1797ca35d17"
dependencies = [
"backtrace",
"bytes",
"io-uring",
"libc",
"mio",
"parking_lot",
"pin-project-lite",
"signal-hook-registry",
"slab",
"socket2",
"tokio-macros",
"tracing",
@@ -10085,6 +10120,7 @@ dependencies = [
"futures-util",
"http 1.3.1",
"http-body 1.0.1",
"http-body-util",
"iri-string",
"pin-project-lite",
"tokio",
@@ -11113,29 +11149,13 @@ dependencies = [
"windows_aarch64_gnullvm 0.52.6",
"windows_aarch64_msvc 0.52.6",
"windows_i686_gnu 0.52.6",
"windows_i686_gnullvm 0.52.6",
"windows_i686_gnullvm",
"windows_i686_msvc 0.52.6",
"windows_x86_64_gnu 0.52.6",
"windows_x86_64_gnullvm 0.52.6",
"windows_x86_64_msvc 0.52.6",
]

[[package]]
name = "windows-targets"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1e4c7e8ceaaf9cb7d7507c974735728ab453b67ef8f18febdd7c11fe59dca8b"
dependencies = [
"windows_aarch64_gnullvm 0.53.0",
"windows_aarch64_msvc 0.53.0",
"windows_i686_gnu 0.53.0",
"windows_i686_gnullvm 0.53.0",
"windows_i686_msvc 0.53.0",
"windows_x86_64_gnu 0.53.0",
"windows_x86_64_gnullvm 0.53.0",
"windows_x86_64_msvc 0.53.0",
]

[[package]]
name = "windows-threading"
version = "0.1.0"
@@ -11172,12 +11192,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3"

[[package]]
name = "windows_aarch64_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86b8d5f90ddd19cb4a147a5fa63ca848db3df085e25fee3cc10b39b6eebae764"

[[package]]
name = "windows_aarch64_msvc"
version = "0.42.2"
@@ -11196,12 +11210,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469"

[[package]]
name = "windows_aarch64_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7651a1f62a11b8cbd5e0d42526e55f2c99886c77e007179efff86c2b137e66c"

[[package]]
name = "windows_i686_gnu"
version = "0.42.2"
@@ -11220,24 +11228,12 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b"

[[package]]
name = "windows_i686_gnu"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1dc67659d35f387f5f6c479dc4e28f1d4bb90ddd1a5d3da2e5d97b42d6272c3"

[[package]]
name = "windows_i686_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66"

[[package]]
name = "windows_i686_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ce6ccbdedbf6d6354471319e781c0dfef054c81fbc7cf83f338a4296c0cae11"

[[package]]
name = "windows_i686_msvc"
version = "0.42.2"
@@ -11256,12 +11252,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66"

[[package]]
name = "windows_i686_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "581fee95406bb13382d2f65cd4a908ca7b1e4c2f1917f143ba16efe98a589b5d"

[[package]]
name = "windows_x86_64_gnu"
version = "0.42.2"
@@ -11280,12 +11270,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78"

[[package]]
name = "windows_x86_64_gnu"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2e55b5ac9ea33f2fc1716d1742db15574fd6fc8dadc51caab1c16a3d3b4190ba"

[[package]]
name = "windows_x86_64_gnullvm"
version = "0.42.2"
@@ -11304,12 +11288,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d"

[[package]]
name = "windows_x86_64_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0a6e035dd0599267ce1ee132e51c27dd29437f63325753051e71dd9e42406c57"

[[package]]
name = "windows_x86_64_msvc"
version = "0.42.2"
@@ -11328,12 +11306,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"

[[package]]
name = "windows_x86_64_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "271414315aff87387382ec3d271b52d7ae78726f5d44ac98b4f4030c91880486"

[[package]]
name = "winnow"
version = "0.5.40"
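Lockfile churn like the above typically comes from targeted `cargo update` invocations rather than hand edits; the tokio bump, for instance, could have been produced with something like this (a sketch, not taken from the commit history):

```bash
# Bump a single package to an exact version; other lock entries move only as needed
cargo update -p tokio --precise 1.46.1

# Inspect what actually changed before committing
git diff --stat Cargo.lock
```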
73
Cargo.toml
@@ -36,6 +36,7 @@ members = [
"crates/utils", # Utility functions and helpers
"crates/workers", # Worker thread pools and task scheduling
"crates/zip", # ZIP file handling and compression
"crates/ahm",
]
resolver = "2"

@@ -44,7 +45,11 @@ edition = "2024"
license = "Apache-2.0"
repository = "https://github.com/rustfs/rustfs"
rust-version = "1.85"
version = "0.0.1"
version = "0.0.5"
homepage = "https://rustfs.com"
description = "RustFS is a high-performance distributed object storage software built using Rust, one of the most popular languages worldwide. "
keywords = ["RustFS", "Minio", "object-storage", "filesystem", "s3"]
categories = ["web-programming", "development-tools", "filesystem", "network-programming"]

[workspace.lints.rust]
unsafe_code = "deny"
@@ -52,38 +57,44 @@ unsafe_code = "deny"
[workspace.lints.clippy]
all = "warn"

[patch.crates-io]
rustfs-utils = { path = "crates/utils" }
rustfs-filemeta = { path = "crates/filemeta" }
rustfs-rio = { path = "crates/rio" }

[workspace.dependencies]
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.1" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.1" }
rustfs-common = { path = "crates/common", version = "0.0.1" }
rustfs-crypto = { path = "crates/crypto", version = "0.0.1" }
rustfs-ecstore = { path = "crates/ecstore", version = "0.0.1" }
rustfs-iam = { path = "crates/iam", version = "0.0.1" }
rustfs-lock = { path = "crates/lock", version = "0.0.1" }
rustfs-madmin = { path = "crates/madmin", version = "0.0.1" }
rustfs-policy = { path = "crates/policy", version = "0.0.1" }
rustfs-protos = { path = "crates/protos", version = "0.0.1" }
rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.1" }
rustfs = { path = "./rustfs", version = "0.0.1" }
rustfs-zip = { path = "./crates/zip", version = "0.0.1" }
rustfs-config = { path = "./crates/config", version = "0.0.1" }
rustfs-obs = { path = "crates/obs", version = "0.0.1" }
rustfs-notify = { path = "crates/notify", version = "0.0.1" }
rustfs-utils = { path = "crates/utils", version = "0.0.1" }
rustfs-rio = { path = "crates/rio", version = "0.0.1" }
rustfs-filemeta = { path = "crates/filemeta", version = "0.0.1" }
rustfs-signer = { path = "crates/signer", version = "0.0.1" }
rustfs-workers = { path = "crates/workers", version = "0.0.1" }
rustfs-ahm = { path = "crates/ahm", version = "0.0.3" }
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.5" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.5" }
rustfs-common = { path = "crates/common", version = "0.0.5" }
rustfs-crypto = { path = "crates/crypto", version = "0.0.5" }
rustfs-ecstore = { path = "crates/ecstore", version = "0.0.5" }
rustfs-iam = { path = "crates/iam", version = "0.0.5" }
rustfs-lock = { path = "crates/lock", version = "0.0.5" }
rustfs-madmin = { path = "crates/madmin", version = "0.0.5" }
rustfs-policy = { path = "crates/policy", version = "0.0.5" }
rustfs-protos = { path = "crates/protos", version = "0.0.5" }
rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.5" }
rustfs = { path = "./rustfs", version = "0.0.5" }
rustfs-zip = { path = "./crates/zip", version = "0.0.5" }
rustfs-config = { path = "./crates/config", version = "0.0.5" }
rustfs-obs = { path = "crates/obs", version = "0.0.5" }
rustfs-notify = { path = "crates/notify", version = "0.0.5" }
rustfs-utils = { path = "crates/utils", version = "0.0.5" }
rustfs-rio = { path = "crates/rio", version = "0.0.5" }
rustfs-filemeta = { path = "crates/filemeta", version = "0.0.5" }
rustfs-signer = { path = "crates/signer", version = "0.0.5" }
rustfs-workers = { path = "crates/workers", version = "0.0.5" }
aes-gcm = { version = "0.10.3", features = ["std"] }
arc-swap = "1.7.1"
argon2 = { version = "0.5.3", features = ["std"] }
atoi = "2.0.0"
async-channel = "2.3.1"
async-channel = "2.4.0"
async-recursion = "1.1.1"
async-trait = "0.1.88"
async-compression = { version = "0.4.0" }
atomic_enum = "0.3.0"
aws-sdk-s3 = "1.95.0"
aws-sdk-s3 = "1.96.0"
axum = "0.8.4"
axum-extra = "0.10.1"
axum-server = { version = "0.7.2", features = ["tls-rustls"] }
@@ -107,7 +118,7 @@ dioxus = { version = "0.6.3", features = ["router"] }
dirs = "6.0.0"
enumset = "1.1.6"
flatbuffers = "25.2.10"
flate2 = "1.1.1"
flate2 = "1.1.2"
flexi_logger = { version = "0.31.2", features = ["trc", "dont_minimize_extra_stacks"] }
form_urlencoded = "1.2.1"
futures = "0.3.31"
@@ -124,7 +135,7 @@ hyper-util = { version = "0.1.14", features = [
"server-auto",
"server-graceful",
] }
hyper-rustls = "0.27.5"
hyper-rustls = "0.27.7"
http = "1.3.1"
http-body = "1.0.1"
humantime = "2.2.0"
@@ -171,7 +182,6 @@ pbkdf2 = "0.12.2"
percent-encoding = "2.3.1"
pin-project-lite = "0.2.16"
prost = "0.13.5"
prost-build = "0.13.5"
quick-xml = "0.37.5"
rand = "0.9.1"
rdkafka = { version = "0.37.0", features = ["tokio"] }
@@ -195,12 +205,12 @@ rmp-serde = "1.3.0"
rsa = "0.9.8"
rumqttc = { version = "0.24" }
rust-embed = { version = "8.7.2" }
rust-i18n = { version = "3.1.4" }
rust-i18n = { version = "3.1.5" }
rustfs-rsc = "2025.506.1"
rustls = { version = "0.23.28" }
rustls-pki-types = "1.12.0"
rustls-pemfile = "2.2.0"
s3s = { git = "https://github.com/Nugine/s3s.git", rev = "4733cdfb27b2713e832967232cbff413bb768c10" }
s3s = { version = "0.12.0-minio-preview.1" }
shadow-rs = { version = "1.2.0", default-features = false }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.140", features = ["raw_value"] }
@@ -225,7 +235,7 @@ time = { version = "0.3.41", features = [
"macros",
"serde",
] }
tokio = { version = "1.45.1", features = ["fs", "rt-multi-thread"] }
tokio = { version = "1.46.1", features = ["fs", "rt-multi-thread"] }
tokio-rustls = { version = "0.26.2", default-features = false }
tokio-stream = { version = "0.1.17" }
tokio-tar = "0.3.1"
@@ -251,8 +261,9 @@ uuid = { version = "1.17.0", features = [
wildmatch = { version = "2.4.0", features = ["serde"] }
winapi = { version = "0.3.9" }
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
zip = "2.2.0"
zip = "2.4.2"
zstd = "0.13.3"
anyhow = "1.0.86"

[profile.wasm-dev]
inherits = "dev"
44
Dockerfile
@@ -12,36 +12,38 @@
# See the License for the specific language governing permissions and
# limitations under the License.

FROM alpine:latest
FROM alpine:3.18 AS builder

# Install runtime dependencies
RUN apk add --no-cache \
RUN apk add -U --no-cache \
    ca-certificates \
    tzdata \
    && rm -rf /var/cache/apk/*
    curl \
    bash \
    unzip

# Create rustfs user and group
RUN addgroup -g 1000 rustfs && \
    adduser -D -s /bin/sh -u 1000 -G rustfs rustfs

# Create data directories
RUN mkdir -p /data/rustfs && \
    chown -R rustfs:rustfs /data
RUN curl -Lo /tmp/rustfs.zip https://dl.rustfs.com/artifacts/rustfs/rustfs-x86_64-unknown-linux-musl.zip && \
    unzip -o /tmp/rustfs.zip -d /tmp && \
    mv /tmp/rustfs /rustfs && \
    chmod +x /rustfs && \
    rm -rf /tmp/*

# Copy binary based on target architecture
COPY --chown=rustfs:rustfs \
    target/*/release/rustfs \
    /usr/local/bin/rustfs
FROM alpine:3.18

RUN chmod +x /usr/local/bin/rustfs
RUN apk add -U --no-cache \
    ca-certificates \
    bash

# Switch to non-root user
USER rustfs
COPY --from=builder /rustfs /usr/local/bin/rustfs

# Expose ports
EXPOSE 9000 9001
ENV RUSTFS_ACCESS_KEY=rustfsadmin \
    RUSTFS_SECRET_KEY=rustfsadmin \
    RUSTFS_ADDRESS=":9000" \
    RUSTFS_CONSOLE_ENABLE=true \
    RUST_LOG=warn

EXPOSE 9000

RUN mkdir -p /data
VOLUME /data

# Set default command
CMD ["rustfs", "/data"]

3
Makefile
@@ -31,7 +31,8 @@ check:
.PHONY: test
test:
	@echo "🧪 Running tests..."
	cargo test --all --exclude e2e_test
	cargo nextest run --all --exclude e2e_test
	cargo test --all --doc

.PHONY: pre-commit
pre-commit: fmt clippy check test

37
README.md
@@ -1,14 +1,14 @@
[](https://rustfs.com)
[](https://rustfs.com)

<p align="center">RustFS is high-performance distributed object storage software built using Rust</p>

<p align="center">
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<img alt="Github Contributors" src="https://img.shields.io/github/contributors/rustfs/rustfs"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/rustfs/rustfs"/>
<img alt="Discord" src="https://img.shields.io/discord/1107178041848909847?label=discord"/>
</p>

<p align="center">
@@ -19,11 +19,22 @@
</p>

<p align="center">
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a>
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a> |
<!-- Keep these links. Translations will automatically update with the README. -->
<a href="https://readme-i18n.com/rustfs/rustfs?lang=de">Deutsch</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=es">Español</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=fr">français</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ja">日本語</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ko">한국어</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=pt">Português</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ru">Русский</a>
</p>

RustFS is high-performance distributed object storage software built with Rust, one of the most popular programming languages worldwide. Like MinIO, it offers simplicity, S3 compatibility, an open-source codebase, and support for data lakes, AI, and big data. Compared with other storage systems, it also carries a friendlier open-source license, being released under the Apache license. With Rust as its foundation, RustFS delivers faster speed and safer distributed features for high-performance object storage.

> ⚠️ **RustFS is under rapid development. Do NOT use it in production environments!**

## Features

- **High Performance**: Built with Rust, ensuring speed and efficiency.
@@ -63,14 +74,20 @@ Stress test server parameters

To get started with RustFS, follow these steps:

1. **Install RustFS**: Download the latest release from our [GitHub Releases](https://github.com/rustfs/rustfs/releases).
2. **Run RustFS**: Use the provided binary to start the server.
1. **One-click installation script (Option 1)**

   ```bash
   ./rustfs /data
   curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
   ```

3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console.
2. **Docker Quick Start (Option 2)**

   ```bash
   podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
   ```

3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console; the default username and password are `rustfsadmin`.
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance (a minimal SDK sketch follows below).
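
Since RustFS speaks the S3 protocol, any standard S3 SDK can drive it. The following is a minimal sketch (illustrative, not part of this changeset) using the `aws-sdk-s3` crate pinned in the workspace above, assuming a local instance on port 9000 with the default `rustfsadmin` credentials; the `demo` bucket and `hello.txt` key are arbitrary:

```rust
use aws_sdk_s3::config::{BehaviorVersion, Credentials, Region};
use aws_sdk_s3::primitives::ByteStream;
use aws_sdk_s3::{Client, Config};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Static credentials matching the container defaults (RUSTFS_ACCESS_KEY / RUSTFS_SECRET_KEY).
    let creds = Credentials::new("rustfsadmin", "rustfsadmin", None, None, "static");
    let config = Config::builder()
        .behavior_version(BehaviorVersion::latest())
        .region(Region::new("us-east-1"))
        .endpoint_url("http://localhost:9000") // the RustFS S3 endpoint
        .credentials_provider(creds)
        .force_path_style(true) // no virtual-hosted bucket DNS on localhost
        .build();
    let client = Client::from_conf(config);

    // Create a bucket, then upload a small object into it.
    client.create_bucket().bucket("demo").send().await?;
    client
        .put_object()
        .bucket("demo")
        .key("hello.txt")
        .body(ByteStream::from_static(b"hello rustfs"))
        .send()
        .await?;
    Ok(())
}
```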
22
README_ZH.md
@@ -1,14 +1,12 @@
[](https://rustfs.com)
[](https://rustfs.com)

<p align="center">RustFS is high-performance distributed object storage software built using Rust</p>

<p align="center">
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<img alt="Github Contributors" src="https://img.shields.io/github/contributors/rustfs/rustfs"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/rustfs/rustfs"/>
<img alt="Discord" src="https://img.shields.io/discord/1107178041848909847?label=discord"/>
</p>

<p align="center">
@@ -63,14 +61,20 @@ RustFS is high-performance distributed object storage built with Rust (one of the world's most popular programming languages)

To get started with RustFS, follow these steps:

1. **Install RustFS**: Download the latest release from our [GitHub Releases](https://github.com/rustfs/rustfs/releases).
2. **Run RustFS**: Use the provided binary to start the server.
1. **One-click installation script (Option 1)**

   ```bash
   ./rustfs /data
   curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
   ```

3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console.
2. **Docker Quick Start (Option 2)**

   ```bash
   podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
   ```

3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console; the default username and password are `rustfsadmin`.
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.

18
SECURITY.md
Normal file
@@ -0,0 +1,18 @@
# Security Policy

## Supported Versions

Use this section to tell people about which versions of your project are
currently being supported with security updates.

| Version | Supported          |
| ------- | ------------------ |
| 1.x.x   | :white_check_mark: |

## Reporting a Vulnerability

Use this section to tell people how to report a vulnerability.

Tell them where to go, how often they can expect to get an update on a
reported vulnerability, what to expect if the vulnerability is accepted or
declined, etc.
68
TODO.md
@@ -1,68 +0,0 @@
# TODO LIST

## Basic storage

- [x] EC read/write quorum checks (Read/WriteQuorum)
- [ ] Optimize background concurrent execution: interruptible, pass by reference?
- [x] Store small files in the metafile (inline data)
- [x] Complete bucketmeta
- [x] Object locking
- [x] Hash while reading/writing; implement nested readers
- [x] Remote RPC
- [x] Error-type classification: how to detect error types in the program and unify error handling
- [x] Optimize xlmeta with a custom msg data structure
- [ ] Optimize io.reader (see GetObjectNInfo) for easier io copy; async writes, rebalancing
- [ ] Code cleanup: use generics?
- [ ] Abstract out metafile storage

## Basic features

- [ ] Bucket operations
  - [x] CreateBucket
  - [x] ListBuckets
  - [ ] ListObjects (list files under a bucket)
    - [x] Basic implementation
    - [ ] Optimize concurrent reads
  - [ ] Delete
  - [x] HeadBucket
- [ ] Object operations
  - [x] PutObject
  - [x] Large-file upload
    - [x] CreateMultipartUpload
    - [x] PubObjectPart
    - [x] CompleteMultipartUpload
    - [x] AbortMultipartUpload
  - [x] GetObject
  - [x] DeleteObjects
  - [ ] Versioning
  - [ ] Object locking
  - [ ] CopyObject
  - [ ] HeadObject
  - [ ] Object presigning (get, put, head, post)

## Extended features

- [ ] User management
- [ ] Policy management
- [ ] AK/SK provisioning
- [ ] Data scanner statistics and object healing
- [ ] Bucket quotas
- [ ] Read-only buckets
- [ ] Bucket replication
- [ ] Bucket event notifications
- [ ] Public/private buckets
- [ ] Object lifecycle management
- [ ] Prometheus integration
- [ ] Log collection and log forwarding
- [ ] Object compression
- [ ] STS
- [ ] Tiering (Alibaba Cloud, Tencent Cloud, remote S3 targets)

## Performance optimization

- [ ] bitrot impl AsyncRead/AsyncWrite
- [ ] Concurrent erasure reads/writes
- [x] Improve deletion logic: concurrent processing, move to the trash first
- [ ] Empty the trash when space runs low
- [ ] Stream list_object results through a reader

@@ -37,7 +37,9 @@ copyright = "Copyright 2025 rustfs.com"
icon = [
    "assets/icons/icon.icns",
    "assets/icons/icon.ico"
    "assets/icons/icon.ico",
    "assets/icons/icon.png",
    "assets/icons/rustfs-icon.png",
]
#[bundle.macos]
#provider_short_name = "RustFs"

BIN cli/rustfs-gui/assets/icon.png (new file, 23 KiB)
BIN cli/rustfs-gui/assets/icons/icon.png (new file, 23 KiB)
BIN cli/rustfs-gui/assets/icons/icon_128x128.png (new file, 4.5 KiB)
BIN cli/rustfs-gui/assets/icons/icon_128x128@2x.png (new file, 9.9 KiB)
BIN cli/rustfs-gui/assets/icons/icon_16x16.png (new file, 498 B)
BIN cli/rustfs-gui/assets/icons/icon_16x16@2x.png (new file, 969 B)
BIN cli/rustfs-gui/assets/icons/icon_256x256.png (new file, 9.9 KiB)
BIN cli/rustfs-gui/assets/icons/icon_256x256@2x.png (new file, 23 KiB)
BIN cli/rustfs-gui/assets/icons/icon_32x32.png (new file, 969 B)
BIN cli/rustfs-gui/assets/icons/icon_32x32@2x.png (new file, 2.0 KiB)
BIN cli/rustfs-gui/assets/icons/icon_512x512.png (new file, 23 KiB)
BIN cli/rustfs-gui/assets/icons/icon_512x512@2x.png (new file, 47 KiB)
BIN cli/rustfs-gui/assets/icons/rustfs-icon.png (new file, 23 KiB)
BIN cli/rustfs-gui/assets/rustfs-icon.png (new file, 23 KiB)
@@ -1,20 +1,15 @@
The logo SVG is reflowed so that each `<path>` element and its `fill` attribute sit on a single line; the path data itself is unchanged, so the resulting file is shown once:

<svg width="1558" height="260" viewBox="0 0 1558 260" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_0_3)">
<path d="M1288.5 112.905H1159.75V58.4404H1262L1270 0L1074 0V260H1159.75V162.997H1296.95L1288.5 112.905Z" fill="#0196D0"/>
<path d="M1058.62 58.4404V0H789V58.4404H881.133V260H966.885V58.4404H1058.62Z" fill="#0196D0"/>
<path d="M521 179.102V0L454.973 15V161C454.973 181.124 452.084 193.146 443.5 202C434.916 211.257 419.318 214.5 400.5 214.5C381.022 214.5 366.744 210.854 357.5 202C348.916 193.548 346.357 175.721 346.357 156V0L280 15V175.48C280 208.08 290.234 229.412 309.712 241.486C329.19 253.56 358.903 260 400.5 260C440.447 260 470.159 253.56 490.297 241.486C510.766 229.412 521 208.483 521 179.102Z" fill="#0196D0"/>
<path d="M172.84 84.2813C172.84 97.7982 168.249 107.737 158.41 113.303C149.883 118.471 137.092 121.254 120.693 122.049V162.997C129.876 163.792 138.076 166.177 144.307 176.514L184.647 260H265L225.316 180.489C213.181 155.046 201.374 149.48 178.744 143.517C212.197 138.349 241.386 118.471 241.386 73.1499C241.386 53.2722 233.843 30.2141 218.756 17.8899C203.998 5.56575 183.991 0 159.394 0H120.693V48.5015H127.58C142.23 48.5015 153.6 51.4169 161.689 57.2477C169.233 62.8135 172.84 71.5596 172.84 84.2813ZM120.693 122.049C119.163 122.049 117.741 122.049 116.43 122.049H68.5457V48.5015H120.693V0H0V260H70.5137V162.997H110.526C113.806 162.997 117.741 162.997 120.693 162.997V122.049Z" fill="#0196D0"/>
<path d="M774 179.297C774 160.829 766.671 144.669 752.013 131.972C738.127 119.66 712.025 110.169 673.708 103.5C662.136 101.191 651.722 99.6523 643.235 97.3437C586.532 84.6467 594.632 52.7118 650.564 52.7118C680.651 52.7118 709.582 61.946 738.127 66.9478C742.37 67.7174 743.913 68.1021 744.298 68.1021L750.47 12.697C720.383 3.46282 684.895 0 654.036 0C616.619 0 587.689 6.54088 567.245 19.2379C546.801 31.9349 536 57.7137 536 82.3382C536 103.5 543.715 119.66 559.916 131.972C575.731 143.515 604.276 152.749 645.55 160.059C658.279 162.368 668.694 163.907 676.794 166.215C685.023 168.524 691.066 170.704 694.924 172.756C702.253 176.604 706.11 182.375 706.11 188.531C706.11 196.611 701.481 202.767 692.224 207C664.836 220.081 587.689 212.001 556.83 198.15L543.715 247.784C547.186 248.169 552.972 249.323 559.916 250.477C616.619 259.327 690.681 270.869 741.212 238.935C762.814 225.468 774 206.23 774 179.297Z" fill="#0196D0"/>
<path d="M1558 179.568C1558 160.383 1550.42 144.268 1535.67 131.99C1521.32 119.968 1494.34 110.631 1454.74 103.981C1442.38 101.679 1432.01 99.3764 1422.84 97.8416C1422.44 97.8416 1422.04 97.8416 1422.04 97.4579V112.422L1361.04 75.2038L1422.04 38.3692V52.9496C1424.7 52.9496 1427.49 52.9496 1430.41 52.9496C1461.51 52.9496 1491.42 62.5419 1521.32 67.5299C1525.31 67.9136 1526.9 67.9136 1527.3 67.9136L1533.68 12.6619C1502.98 3.83692 1465.9 0 1434 0C1395.33 0 1365.43 6.52277 1345.09 19.5683C1323.16 32.6139 1312 57.9376 1312 82.8776C1312 103.981 1320.37 120.096 1336.72 131.607C1353.46 143.885 1382.97 153.093 1425.23 160.383C1434 161.535 1441.18 162.686 1447.56 164.22L1448.36 150.791L1507.36 190.312L1445.57 224.844L1445.96 212.949C1409.68 215.635 1357.45 209.112 1333.53 197.985L1320.37 247.482C1323.56 248.249 1329.54 248.633 1336.72 250.551C1395.33 259.376 1471.88 270.887 1524.11 238.657C1546.84 225.611 1558 205.659 1558 179.568Z" fill="#0196D0"/>
</g>
<defs>
<clipPath id="clip0_0_3">
<rect width="1558" height="260" fill="white"/>
</clipPath>
</defs>
</svg>

Before: 3.5 KiB, After: 3.4 KiB
35
crates/ahm/Cargo.toml
Normal file
@@ -0,0 +1,35 @@
[package]
name = "rustfs-ahm"
version = "0.0.3"
edition = "2021"
authors = ["RustFS Team"]
license = "Apache-2.0"
description = "RustFS AHM (Automatic Health Management) Scanner"

[dependencies]
rustfs-ecstore = { workspace = true }
rustfs-common = { workspace = true }
rustfs-filemeta = { workspace = true }
rustfs-madmin = { workspace = true }
rustfs-utils = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tokio-util = { workspace = true }
tracing = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
thiserror = { workspace = true }
bytes = { workspace = true }
time = { workspace = true, features = ["serde"] }
uuid = { workspace = true, features = ["v4", "serde"] }
anyhow = { workspace = true }
async-trait = { workspace = true }
futures = { workspace = true }
url = { workspace = true }
rustfs-lock = { workspace = true }

lazy_static = { workspace = true }

[dev-dependencies]
rmp-serde = { workspace = true }
tokio-test = "0.4"
serde_json = "1.0"
45
crates/ahm/src/error.rs
Normal file
@@ -0,0 +1,45 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use thiserror::Error;

#[derive(Debug, Error)]
pub enum Error {
    #[error("I/O error: {0}")]
    Io(#[from] std::io::Error),

    #[error("Storage error: {0}")]
    Storage(#[from] rustfs_ecstore::error::Error),

    #[error("Configuration error: {0}")]
    Config(String),

    #[error("Scanner error: {0}")]
    Scanner(String),

    #[error("Metrics error: {0}")]
    Metrics(String),

    #[error(transparent)]
    Other(#[from] anyhow::Error),
}

pub type Result<T, E = Error> = std::result::Result<T, E>;

// Implement conversion from ahm::Error to std::io::Error for use in main.rs
impl From<Error> for std::io::Error {
    fn from(err: Error) -> Self {
        std::io::Error::other(err)
    }
}
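
A small sketch (not part of the diff) of how these conversions chain in practice: `?` lifts I/O and ecstore errors into `Error` through the `#[from]` impls, and the `From<Error> for std::io::Error` impl above lets an AHM result flow back out through an `io::Result` call site. The `load_meta` and `run` helpers below are hypothetical:

```rust
use rustfs_ahm::error::{Error, Result};

// Hypothetical helper: a std::io::Error raised by fs::read is converted
// into Error::Io automatically by the #[from] attribute.
fn load_meta(path: &std::path::Path) -> Result<Vec<u8>> {
    let bytes = std::fs::read(path)?; // io::Error -> Error::Io
    if bytes.is_empty() {
        return Err(Error::Scanner("empty metadata file".to_string()));
    }
    Ok(bytes)
}

// The reverse direction, for io::Result-based call sites such as main():
fn run() -> std::io::Result<()> {
    load_meta(std::path::Path::new("/data/.usage.json"))?; // Error -> io::Error
    Ok(())
}
```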
54
crates/ahm/src/lib.rs
Normal file
@@ -0,0 +1,54 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::sync::OnceLock;
use tokio_util::sync::CancellationToken;

pub mod error;
pub mod scanner;

pub use error::{Error, Result};
pub use scanner::{
    load_data_usage_from_backend, store_data_usage_in_backend, BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo, Scanner,
    ScannerMetrics,
};

// Global cancellation token for AHM services (scanner and other background tasks)
static GLOBAL_AHM_SERVICES_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();

/// Initialize the global AHM services cancellation token
pub fn init_ahm_services_cancel_token(cancel_token: CancellationToken) -> Result<()> {
    GLOBAL_AHM_SERVICES_CANCEL_TOKEN
        .set(cancel_token)
        .map_err(|_| Error::Config("AHM services cancel token already initialized".to_string()))
}

/// Get the global AHM services cancellation token
pub fn get_ahm_services_cancel_token() -> Option<&'static CancellationToken> {
    GLOBAL_AHM_SERVICES_CANCEL_TOKEN.get()
}

/// Create and initialize the global AHM services cancellation token
pub fn create_ahm_services_cancel_token() -> CancellationToken {
    let cancel_token = CancellationToken::new();
    init_ahm_services_cancel_token(cancel_token.clone()).expect("AHM services cancel token already initialized");
    cancel_token
}

/// Shutdown all AHM services gracefully
pub fn shutdown_ahm_services() {
    if let Some(cancel_token) = GLOBAL_AHM_SERVICES_CANCEL_TOKEN.get() {
        cancel_token.cancel();
    }
}
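
A minimal sketch of how a background service might consume this token; only `create_ahm_services_cancel_token` and `shutdown_ahm_services` come from the crate above, while `run_scan_cycle` and the 60-second cadence are hypothetical. Clones of a tokio-util `CancellationToken` share cancellation state, so cancelling the globally stored clone stops the loop:

```rust
use std::time::Duration;

use rustfs_ahm::{create_ahm_services_cancel_token, shutdown_ahm_services};

async fn run_scan_cycle() {
    // hypothetical scanner work
}

#[tokio::main]
async fn main() {
    // The returned token is a clone of the one stored globally, so cancelling
    // either side stops the worker below.
    let cancel = create_ahm_services_cancel_token();

    let worker = tokio::spawn(async move {
        loop {
            tokio::select! {
                _ = cancel.cancelled() => break, // shutdown_ahm_services() fired
                _ = tokio::time::sleep(Duration::from_secs(60)) => run_scan_cycle().await,
            }
        }
    });

    // ... later, on shutdown:
    shutdown_ahm_services();
    let _ = worker.await;
}
```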
1247
crates/ahm/src/scanner/data_scanner.rs
Normal file
671
crates/ahm/src/scanner/data_usage.rs
Normal file
@@ -0,0 +1,671 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::{collections::HashMap, sync::Arc, time::SystemTime};

use rustfs_ecstore::{bucket::metadata_sys::get_replication_config, config::com::read_config, store::ECStore};
use rustfs_utils::path::SLASH_SEPARATOR;
use serde::{Deserialize, Serialize};
use tracing::{error, info, warn};

use crate::error::{Error, Result};

// Data usage storage constants
pub const DATA_USAGE_ROOT: &str = SLASH_SEPARATOR;
const DATA_USAGE_OBJ_NAME: &str = ".usage.json";
const DATA_USAGE_BLOOM_NAME: &str = ".bloomcycle.bin";
pub const DATA_USAGE_CACHE_NAME: &str = ".usage-cache.bin";

// Data usage storage paths
lazy_static::lazy_static! {
    pub static ref DATA_USAGE_BUCKET: String = format!("{}{}{}",
        rustfs_ecstore::disk::RUSTFS_META_BUCKET,
        SLASH_SEPARATOR,
        rustfs_ecstore::disk::BUCKET_META_PREFIX
    );
    pub static ref DATA_USAGE_OBJ_NAME_PATH: String = format!("{}{}{}",
        rustfs_ecstore::disk::BUCKET_META_PREFIX,
        SLASH_SEPARATOR,
        DATA_USAGE_OBJ_NAME
    );
    pub static ref DATA_USAGE_BLOOM_NAME_PATH: String = format!("{}{}{}",
        rustfs_ecstore::disk::BUCKET_META_PREFIX,
        SLASH_SEPARATOR,
        DATA_USAGE_BLOOM_NAME
    );
}

/// Bucket target usage info provides replication statistics
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct BucketTargetUsageInfo {
    pub replication_pending_size: u64,
    pub replication_failed_size: u64,
    pub replicated_size: u64,
    pub replica_size: u64,
    pub replication_pending_count: u64,
    pub replication_failed_count: u64,
    pub replicated_count: u64,
}

/// Bucket usage info provides bucket-level statistics
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct BucketUsageInfo {
    pub size: u64,
    // The following five fields suffixed with V1 are here for backward compatibility
    // Total size for objects that have not yet been replicated
    pub replication_pending_size_v1: u64,
    // Total size for objects that have witnessed one or more failures and will be retried
    pub replication_failed_size_v1: u64,
    // Total size for objects that have been replicated to the destination
    pub replicated_size_v1: u64,
    // Total number of objects pending replication
    pub replication_pending_count_v1: u64,
    // Total number of objects that failed replication
    pub replication_failed_count_v1: u64,

    pub objects_count: u64,
    pub object_size_histogram: HashMap<String, u64>,
    pub object_versions_histogram: HashMap<String, u64>,
    pub versions_count: u64,
    pub delete_markers_count: u64,
    pub replica_size: u64,
    pub replica_count: u64,
    pub replication_info: HashMap<String, BucketTargetUsageInfo>,
}

/// DataUsageInfo represents data usage stats of the underlying storage
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct DataUsageInfo {
    /// Total capacity
    pub total_capacity: u64,
    /// Total used capacity
    pub total_used_capacity: u64,
    /// Total free capacity
    pub total_free_capacity: u64,

    /// LastUpdate is the timestamp of when the data usage info was last updated
    pub last_update: Option<SystemTime>,

    /// Objects total count across all buckets
    pub objects_total_count: u64,
    /// Versions total count across all buckets
    pub versions_total_count: u64,
    /// Delete markers total count across all buckets
    pub delete_markers_total_count: u64,
    /// Objects total size across all buckets
    pub objects_total_size: u64,
    /// Replication info across all buckets
    pub replication_info: HashMap<String, BucketTargetUsageInfo>,

    /// Total number of buckets in this cluster
    pub buckets_count: u64,
    /// Buckets usage info provides the following information across all buckets
    pub buckets_usage: HashMap<String, BucketUsageInfo>,
    /// Deprecated, kept here for backward compatibility reasons
    pub bucket_sizes: HashMap<String, u64>,
}

/// Size summary for a single object or group of objects
#[derive(Debug, Default, Clone)]
pub struct SizeSummary {
    /// Total size
    pub total_size: usize,
    /// Number of versions
    pub versions: usize,
    /// Number of delete markers
    pub delete_markers: usize,
    /// Replicated size
    pub replicated_size: usize,
    /// Replicated count
    pub replicated_count: usize,
    /// Pending size
    pub pending_size: usize,
    /// Failed size
    pub failed_size: usize,
    /// Replica size
    pub replica_size: usize,
    /// Replica count
    pub replica_count: usize,
    /// Pending count
    pub pending_count: usize,
    /// Failed count
    pub failed_count: usize,
    /// Replication target stats
    pub repl_target_stats: HashMap<String, ReplTargetSizeSummary>,
}

/// Replication target size summary
#[derive(Debug, Default, Clone)]
pub struct ReplTargetSizeSummary {
    /// Replicated size
    pub replicated_size: usize,
    /// Replicated count
    pub replicated_count: usize,
    /// Pending size
    pub pending_size: usize,
    /// Failed size
    pub failed_size: usize,
    /// Pending count
    pub pending_count: usize,
    /// Failed count
    pub failed_count: usize,
}

impl DataUsageInfo {
    /// Create a new DataUsageInfo
    pub fn new() -> Self {
        Self::default()
    }

    /// Add object metadata to data usage statistics
    pub fn add_object(&mut self, object_path: &str, meta_object: &rustfs_filemeta::MetaObject) {
        // This method is kept for backward compatibility
        // For accurate version counting, use add_object_from_file_meta instead
        let bucket_name = match self.extract_bucket_from_path(object_path) {
            Ok(name) => name,
            Err(_) => return,
        };

        // Update bucket statistics
        if let Some(bucket_usage) = self.buckets_usage.get_mut(&bucket_name) {
            bucket_usage.size += meta_object.size as u64;
            bucket_usage.objects_count += 1;
            bucket_usage.versions_count += 1; // Simplified: assume 1 version per object

            // Update size histogram
            let total_size = meta_object.size as u64;
            let size_ranges = [
                ("0-1KB", 0, 1024),
                ("1KB-1MB", 1024, 1024 * 1024),
                ("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
                ("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
                ("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
                ("1GB+", 1024 * 1024 * 1024, u64::MAX),
            ];

            for (range_name, min_size, max_size) in size_ranges {
                if total_size >= min_size && total_size < max_size {
                    *bucket_usage.object_size_histogram.entry(range_name.to_string()).or_insert(0) += 1;
                    break;
                }
            }

            // Update version histogram (simplified - count as single version)
            *bucket_usage
                .object_versions_histogram
                .entry("SINGLE_VERSION".to_string())
                .or_insert(0) += 1;
        } else {
            // Create new bucket usage
            let mut bucket_usage = BucketUsageInfo {
                size: meta_object.size as u64,
                objects_count: 1,
                versions_count: 1,
                ..Default::default()
            };
            bucket_usage.object_size_histogram.insert("0-1KB".to_string(), 1);
            bucket_usage.object_versions_histogram.insert("SINGLE_VERSION".to_string(), 1);
            self.buckets_usage.insert(bucket_name, bucket_usage);
        }

        // Update global statistics
        self.objects_total_size += meta_object.size as u64;
        self.objects_total_count += 1;
        self.versions_total_count += 1;
    }

    /// Add object from FileMeta for accurate version counting
    pub fn add_object_from_file_meta(&mut self, object_path: &str, file_meta: &rustfs_filemeta::FileMeta) {
        let bucket_name = match self.extract_bucket_from_path(object_path) {
            Ok(name) => name,
            Err(_) => return,
        };

        // Calculate accurate statistics from all versions
        let mut total_size = 0u64;
        let mut versions_count = 0u64;
        let mut delete_markers_count = 0u64;
        let mut latest_object_size = 0u64;

        // Process all versions to get accurate counts
        for version in &file_meta.versions {
            match rustfs_filemeta::FileMetaVersion::try_from(version.clone()) {
                Ok(ver) => {
                    if let Some(obj) = ver.object {
                        total_size += obj.size as u64;
                        versions_count += 1;
                        latest_object_size = obj.size as u64; // Keep track of latest object size
                    } else if ver.delete_marker.is_some() {
                        delete_markers_count += 1;
                    }
                }
                Err(_) => {
                    // Skip invalid versions
                    continue;
                }
            }
        }

        // Update bucket statistics
        if let Some(bucket_usage) = self.buckets_usage.get_mut(&bucket_name) {
            bucket_usage.size += total_size;
            bucket_usage.objects_count += 1;
            bucket_usage.versions_count += versions_count;
            bucket_usage.delete_markers_count += delete_markers_count;

            // Update size histogram based on latest object size
            let size_ranges = [
                ("0-1KB", 0, 1024),
                ("1KB-1MB", 1024, 1024 * 1024),
                ("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
                ("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
                ("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
                ("1GB+", 1024 * 1024 * 1024, u64::MAX),
            ];

            for (range_name, min_size, max_size) in size_ranges {
                if latest_object_size >= min_size && latest_object_size < max_size {
                    *bucket_usage.object_size_histogram.entry(range_name.to_string()).or_insert(0) += 1;
                    break;
                }
            }

            // Update version histogram based on actual version count
            let version_ranges = [
                ("1", 1, 1),
                ("2-5", 2, 5),
                ("6-10", 6, 10),
                ("11-50", 11, 50),
                ("51-100", 51, 100),
                ("100+", 101, usize::MAX),
            ];

            for (range_name, min_versions, max_versions) in version_ranges {
                if versions_count as usize >= min_versions && versions_count as usize <= max_versions {
                    *bucket_usage
                        .object_versions_histogram
                        .entry(range_name.to_string())
                        .or_insert(0) += 1;
                    break;
                }
            }
        } else {
            // Create new bucket usage
            let mut bucket_usage = BucketUsageInfo {
                size: total_size,
                objects_count: 1,
                versions_count,
                delete_markers_count,
                ..Default::default()
            };

            // Set size histogram
            let size_ranges = [
                ("0-1KB", 0, 1024),
                ("1KB-1MB", 1024, 1024 * 1024),
                ("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
                ("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
                ("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
                ("1GB+", 1024 * 1024 * 1024, u64::MAX),
            ];

            for (range_name, min_size, max_size) in size_ranges {
                if latest_object_size >= min_size && latest_object_size < max_size {
                    bucket_usage.object_size_histogram.insert(range_name.to_string(), 1);
                    break;
                }
            }

            // Set version histogram
            let version_ranges = [
                ("1", 1, 1),
                ("2-5", 2, 5),
                ("6-10", 6, 10),
                ("11-50", 11, 50),
                ("51-100", 51, 100),
                ("100+", 101, usize::MAX),
            ];

            for (range_name, min_versions, max_versions) in version_ranges {
                if versions_count as usize >= min_versions && versions_count as usize <= max_versions {
                    bucket_usage.object_versions_histogram.insert(range_name.to_string(), 1);
                    break;
                }
            }

            self.buckets_usage.insert(bucket_name, bucket_usage);
            // Update buckets count when adding new bucket
            self.buckets_count = self.buckets_usage.len() as u64;
        }

        // Update global statistics
        self.objects_total_size += total_size;
        self.objects_total_count += 1;
        self.versions_total_count += versions_count;
        self.delete_markers_total_count += delete_markers_count;
    }

    /// Extract bucket name from object path
    fn extract_bucket_from_path(&self, object_path: &str) -> Result<String> {
        let parts: Vec<&str> = object_path.split('/').collect();
        if parts.is_empty() {
            return Err(Error::Scanner("Invalid object path: empty".to_string()));
        }
        Ok(parts[0].to_string())
    }

    /// Update capacity information
    pub fn update_capacity(&mut self, total: u64, used: u64, free: u64) {
        self.total_capacity = total;
        self.total_used_capacity = used;
        self.total_free_capacity = free;
        self.last_update = Some(SystemTime::now());
    }

    /// Add bucket usage info
    pub fn add_bucket_usage(&mut self, bucket: String, usage: BucketUsageInfo) {
        self.buckets_usage.insert(bucket.clone(), usage);
        self.buckets_count = self.buckets_usage.len() as u64;
        self.last_update = Some(SystemTime::now());
    }

    /// Get bucket usage info
    pub fn get_bucket_usage(&self, bucket: &str) -> Option<&BucketUsageInfo> {
        self.buckets_usage.get(bucket)
    }

    /// Calculate total statistics from all buckets
    pub fn calculate_totals(&mut self) {
        self.objects_total_count = 0;
        self.versions_total_count = 0;
        self.delete_markers_total_count = 0;
        self.objects_total_size = 0;

        for usage in self.buckets_usage.values() {
            self.objects_total_count += usage.objects_count;
            self.versions_total_count += usage.versions_count;
            self.delete_markers_total_count += usage.delete_markers_count;
            self.objects_total_size += usage.size;
        }
    }

    /// Merge another DataUsageInfo into this one
    pub fn merge(&mut self, other: &DataUsageInfo) {
        // Merge bucket usage
        for (bucket, usage) in &other.buckets_usage {
            if let Some(existing) = self.buckets_usage.get_mut(bucket) {
                existing.merge(usage);
            } else {
                self.buckets_usage.insert(bucket.clone(), usage.clone());
            }
        }

        // Recalculate totals
        self.calculate_totals();

        // Ensure buckets_count stays consistent with buckets_usage
        self.buckets_count = self.buckets_usage.len() as u64;

        // Update last update time
        if let Some(other_update) = other.last_update {
            if self.last_update.is_none() || other_update > self.last_update.unwrap() {
                self.last_update = Some(other_update);
            }
        }
    }
}

impl BucketUsageInfo {
    /// Create a new BucketUsageInfo
    pub fn new() -> Self {
        Self::default()
    }

    /// Add size summary to this bucket usage
    pub fn add_size_summary(&mut self, summary: &SizeSummary) {
        self.size += summary.total_size as u64;
        self.versions_count += summary.versions as u64;
        self.delete_markers_count += summary.delete_markers as u64;
        self.replica_size += summary.replica_size as u64;
        self.replica_count += summary.replica_count as u64;
    }

    /// Merge another BucketUsageInfo into this one
    pub fn merge(&mut self, other: &BucketUsageInfo) {
        self.size += other.size;
        self.objects_count += other.objects_count;
        self.versions_count += other.versions_count;
        self.delete_markers_count += other.delete_markers_count;
        self.replica_size += other.replica_size;
        self.replica_count += other.replica_count;

        // Merge histograms
        for (key, value) in &other.object_size_histogram {
            *self.object_size_histogram.entry(key.clone()).or_insert(0) += value;
        }

        for (key, value) in &other.object_versions_histogram {
            *self.object_versions_histogram.entry(key.clone()).or_insert(0) += value;
        }

        // Merge replication info
        for (target, info) in &other.replication_info {
            let entry = self.replication_info.entry(target.clone()).or_default();
            entry.replicated_size += info.replicated_size;
            entry.replica_size += info.replica_size;
            entry.replication_pending_size += info.replication_pending_size;
            entry.replication_failed_size += info.replication_failed_size;
            entry.replication_pending_count += info.replication_pending_count;
            entry.replication_failed_count += info.replication_failed_count;
            entry.replicated_count += info.replicated_count;
        }

        // Merge backward compatibility fields
        self.replication_pending_size_v1 += other.replication_pending_size_v1;
        self.replication_failed_size_v1 += other.replication_failed_size_v1;
        self.replicated_size_v1 += other.replicated_size_v1;
        self.replication_pending_count_v1 += other.replication_pending_count_v1;
        self.replication_failed_count_v1 += other.replication_failed_count_v1;
    }
}

impl SizeSummary {
    /// Create a new SizeSummary
    pub fn new() -> Self {
        Self::default()
    }

    /// Add another SizeSummary to this one
    pub fn add(&mut self, other: &SizeSummary) {
        self.total_size += other.total_size;
        self.versions += other.versions;
        self.delete_markers += other.delete_markers;
        self.replicated_size += other.replicated_size;
        self.replicated_count += other.replicated_count;
        self.pending_size += other.pending_size;
        self.failed_size += other.failed_size;
        self.replica_size += other.replica_size;
        self.replica_count += other.replica_count;
        self.pending_count += other.pending_count;
        self.failed_count += other.failed_count;

        // Merge replication target stats
        for (target, stats) in &other.repl_target_stats {
            let entry = self.repl_target_stats.entry(target.clone()).or_default();
            entry.replicated_size += stats.replicated_size;
            entry.replicated_count += stats.replicated_count;
            entry.pending_size += stats.pending_size;
            entry.failed_size += stats.failed_size;
            entry.pending_count += stats.pending_count;
            entry.failed_count += stats.failed_count;
        }
    }
}

/// Store data usage info to backend storage
pub async fn store_data_usage_in_backend(data_usage_info: DataUsageInfo, store: Arc<ECStore>) -> Result<()> {
    let data =
        serde_json::to_vec(&data_usage_info).map_err(|e| Error::Config(format!("Failed to serialize data usage info: {e}")))?;

    // Save to backend using the same mechanism as the original code
    rustfs_ecstore::config::com::save_config(store, &DATA_USAGE_OBJ_NAME_PATH, data)
        .await
        .map_err(Error::Storage)?;

    Ok(())
}

/// Load data usage info from backend storage
pub async fn load_data_usage_from_backend(store: Arc<ECStore>) -> Result<DataUsageInfo> {
    let buf = match read_config(store, &DATA_USAGE_OBJ_NAME_PATH).await {
        Ok(data) => data,
        Err(e) => {
            error!("Failed to read data usage info from backend: {}", e);
            if e == rustfs_ecstore::error::Error::ConfigNotFound {
                return Ok(DataUsageInfo::default());
            }
            return Err(Error::Storage(e));
        }
    };

    let mut data_usage_info: DataUsageInfo =
        serde_json::from_slice(&buf).map_err(|e| Error::Config(format!("Failed to deserialize data usage info: {e}")))?;

    warn!("Loaded data usage info from backend {:?}", &data_usage_info);

    // Handle backward compatibility like the original code
    if data_usage_info.buckets_usage.is_empty() {
        data_usage_info.buckets_usage = data_usage_info
            .bucket_sizes
            .iter()
            .map(|(bucket, &size)| {
                (
                    bucket.clone(),
                    BucketUsageInfo {
                        size,
                        ..Default::default()
                    },
                )
            })
            .collect();
    }

    if data_usage_info.bucket_sizes.is_empty() {
        data_usage_info.bucket_sizes = data_usage_info
            .buckets_usage
            .iter()
            .map(|(bucket, bui)| (bucket.clone(), bui.size))
            .collect();
    }

    for (bucket, bui) in &data_usage_info.buckets_usage {
        if bui.replicated_size_v1 > 0
            || bui.replication_failed_count_v1 > 0
            || bui.replication_failed_size_v1 > 0
            || bui.replication_pending_count_v1 > 0
        {
            if let Ok((cfg, _)) = get_replication_config(bucket).await {
                if !cfg.role.is_empty() {
                    data_usage_info.replication_info.insert(
                        cfg.role.clone(),
                        BucketTargetUsageInfo {
                            replication_failed_size: bui.replication_failed_size_v1,
                            replication_failed_count: bui.replication_failed_count_v1,
                            replicated_size: bui.replicated_size_v1,
                            replication_pending_count: bui.replication_pending_count_v1,
                            replication_pending_size: bui.replication_pending_size_v1,
                            ..Default::default()
                        },
                    );
                }
            }
        }
    }

    Ok(data_usage_info)
}

/// Example function showing how to use AHM data usage functionality
/// This demonstrates the integration pattern for DataUsageInfoHandler
pub async fn example_data_usage_integration() -> Result<()> {
    // Get the global storage instance
    let Some(store) = rustfs_ecstore::new_object_layer_fn() else {
        return Err(Error::Config("Storage not initialized".to_string()));
    };

    // Load data usage from backend (this replaces the original load_data_usage_from_backend)
    let data_usage = load_data_usage_from_backend(store).await?;

    info!(
        "Loaded data usage info: {} buckets, {} total objects",
        data_usage.buckets_count, data_usage.objects_total_count
    );

    // Example: Store updated data usage back to backend
    // This would typically be called by the scanner after collecting new statistics
    // store_data_usage_in_backend(data_usage, store).await?;

    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_data_usage_info_creation() {
        let mut info = DataUsageInfo::new();
        info.update_capacity(1000, 500, 500);

        assert_eq!(info.total_capacity, 1000);
        assert_eq!(info.total_used_capacity, 500);
        assert_eq!(info.total_free_capacity, 500);
        assert!(info.last_update.is_some());
    }

    #[test]
    fn test_bucket_usage_info_merge() {
        let mut usage1 = BucketUsageInfo::new();
        usage1.size = 100;
        usage1.objects_count = 10;
        usage1.versions_count = 5;

        let mut usage2 = BucketUsageInfo::new();
        usage2.size = 200;
        usage2.objects_count = 20;
        usage2.versions_count = 10;

        usage1.merge(&usage2);

        assert_eq!(usage1.size, 300);
        assert_eq!(usage1.objects_count, 30);
        assert_eq!(usage1.versions_count, 15);
    }

    #[test]
    fn test_size_summary_add() {
        let mut summary1 = SizeSummary::new();
        summary1.total_size = 100;
        summary1.versions = 5;

        let mut summary2 = SizeSummary::new();
        summary2.total_size = 200;
        summary2.versions = 10;

        summary1.add(&summary2);

        assert_eq!(summary1.total_size, 300);
        assert_eq!(summary1.versions, 15);
    }
}
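
To make the aggregation flow concrete, here is a short sketch (not part of the diff) that builds two per-node snapshots and merges them, using only APIs defined in the file above; the bucket name and numbers are arbitrary:

```rust
use rustfs_ahm::{BucketUsageInfo, DataUsageInfo};

fn main() {
    // Snapshot from node A.
    let mut a = DataUsageInfo::new();
    a.add_bucket_usage(
        "photos".to_string(),
        BucketUsageInfo { size: 4096, objects_count: 2, versions_count: 2, ..Default::default() },
    );

    // Snapshot from node B, overlapping on the same bucket.
    let mut b = DataUsageInfo::new();
    b.add_bucket_usage(
        "photos".to_string(),
        BucketUsageInfo { size: 1024, objects_count: 1, versions_count: 1, ..Default::default() },
    );

    // merge() folds per-bucket stats together and recalculates the totals.
    a.merge(&b);
    assert_eq!(a.buckets_count, 1);
    assert_eq!(a.objects_total_count, 3);
    assert_eq!(a.get_bucket_usage("photos").unwrap().size, 5120);
}
```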
277
crates/ahm/src/scanner/histogram.rs
Normal file
@@ -0,0 +1,277 @@
|
||||
// Copyright 2024 RustFS Team
|
||||
//
|
||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
||||
// you may not use this file except in compliance with the License.
|
||||
// You may obtain a copy of the License at
|
||||
//
|
||||
// http://www.apache.org/licenses/LICENSE-2.0
|
||||
//
|
||||
// Unless required by applicable law or agreed to in writing, software
|
||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
// See the License for the specific language governing permissions and
|
||||
// limitations under the License.
|
||||
|
||||
use std::collections::HashMap;
|
||||
|
||||
/// Size interval for object size histogram
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct SizeInterval {
|
||||
pub start: u64,
|
||||
pub end: u64,
|
||||
pub name: &'static str,
|
||||
}
|
||||
|
||||
/// Version interval for object versions histogram
|
||||
#[derive(Debug, Clone)]
|
||||
pub struct VersionInterval {
|
||||
pub start: u64,
|
||||
pub end: u64,
|
||||
pub name: &'static str,
|
||||
}
|
||||
|
||||
/// Object size histogram intervals
|
||||
pub const OBJECTS_HISTOGRAM_INTERVALS: &[SizeInterval] = &[
|
||||
SizeInterval {
|
||||
start: 0,
|
||||
end: 1024 - 1,
|
||||
name: "LESS_THAN_1_KiB",
|
||||
},
|
||||
SizeInterval {
|
||||
start: 1024,
|
||||
end: 1024 * 1024 - 1,
|
||||
name: "1_KiB_TO_1_MiB",
|
||||
},
|
||||
SizeInterval {
|
||||
start: 1024 * 1024,
|
||||
end: 10 * 1024 * 1024 - 1,
|
||||
name: "1_MiB_TO_10_MiB",
|
||||
},
|
||||
SizeInterval {
|
||||
start: 10 * 1024 * 1024,
|
||||
        end: 64 * 1024 * 1024 - 1,
        name: "10_MiB_TO_64_MiB",
    },
    SizeInterval {
        start: 64 * 1024 * 1024,
        end: 128 * 1024 * 1024 - 1,
        name: "64_MiB_TO_128_MiB",
    },
    SizeInterval {
        start: 128 * 1024 * 1024,
        end: 512 * 1024 * 1024 - 1,
        name: "128_MiB_TO_512_MiB",
    },
    SizeInterval {
        start: 512 * 1024 * 1024,
        end: u64::MAX,
        name: "MORE_THAN_512_MiB",
    },
];

/// Object version count histogram intervals
pub const OBJECTS_VERSION_COUNT_INTERVALS: &[VersionInterval] = &[
    VersionInterval {
        start: 1,
        end: 1,
        name: "1_VERSION",
    },
    VersionInterval {
        start: 2,
        end: 10,
        name: "2_TO_10_VERSIONS",
    },
    VersionInterval {
        start: 11,
        end: 100,
        name: "11_TO_100_VERSIONS",
    },
    VersionInterval {
        start: 101,
        end: 1000,
        name: "101_TO_1000_VERSIONS",
    },
    VersionInterval {
        start: 1001,
        end: u64::MAX,
        name: "MORE_THAN_1000_VERSIONS",
    },
];

/// Size histogram for object size distribution
#[derive(Debug, Clone, Default)]
pub struct SizeHistogram {
    counts: Vec<u64>,
}

/// Versions histogram for object version count distribution
#[derive(Debug, Clone, Default)]
pub struct VersionsHistogram {
    counts: Vec<u64>,
}

impl SizeHistogram {
    /// Create a new size histogram
    pub fn new() -> Self {
        Self {
            counts: vec![0; OBJECTS_HISTOGRAM_INTERVALS.len()],
        }
    }

    /// Add a size to the histogram
    pub fn add(&mut self, size: u64) {
        for (idx, interval) in OBJECTS_HISTOGRAM_INTERVALS.iter().enumerate() {
            if size >= interval.start && size <= interval.end {
                self.counts[idx] += 1;
                break;
            }
        }
    }

    /// Get the histogram as a map
    pub fn to_map(&self) -> HashMap<String, u64> {
        let mut result = HashMap::new();
        for (idx, count) in self.counts.iter().enumerate() {
            let interval = &OBJECTS_HISTOGRAM_INTERVALS[idx];
            result.insert(interval.name.to_string(), *count);
        }
        result
    }

    /// Merge another histogram into this one
    pub fn merge(&mut self, other: &SizeHistogram) {
        for (idx, count) in other.counts.iter().enumerate() {
            self.counts[idx] += count;
        }
    }

    /// Get total count
    pub fn total_count(&self) -> u64 {
        self.counts.iter().sum()
    }

    /// Reset the histogram
    pub fn reset(&mut self) {
        for count in &mut self.counts {
            *count = 0;
        }
    }
}

impl VersionsHistogram {
    /// Create a new versions histogram
    pub fn new() -> Self {
        Self {
            counts: vec![0; OBJECTS_VERSION_COUNT_INTERVALS.len()],
        }
    }

    /// Add a version count to the histogram
    pub fn add(&mut self, versions: u64) {
        for (idx, interval) in OBJECTS_VERSION_COUNT_INTERVALS.iter().enumerate() {
            if versions >= interval.start && versions <= interval.end {
                self.counts[idx] += 1;
                break;
            }
        }
    }

    /// Get the histogram as a map
    pub fn to_map(&self) -> HashMap<String, u64> {
        let mut result = HashMap::new();
        for (idx, count) in self.counts.iter().enumerate() {
            let interval = &OBJECTS_VERSION_COUNT_INTERVALS[idx];
            result.insert(interval.name.to_string(), *count);
        }
        result
    }

    /// Merge another histogram into this one
    pub fn merge(&mut self, other: &VersionsHistogram) {
        for (idx, count) in other.counts.iter().enumerate() {
            self.counts[idx] += count;
        }
    }

    /// Get total count
    pub fn total_count(&self) -> u64 {
        self.counts.iter().sum()
    }

    /// Reset the histogram
    pub fn reset(&mut self) {
        for count in &mut self.counts {
            *count = 0;
        }
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_size_histogram() {
        let mut histogram = SizeHistogram::new();

        // Add some sizes
        histogram.add(512); // LESS_THAN_1_KiB
        histogram.add(1024); // 1_KiB_TO_1_MiB
        histogram.add(1024 * 1024); // 1_MiB_TO_10_MiB
        histogram.add(5 * 1024 * 1024); // 1_MiB_TO_10_MiB

        let map = histogram.to_map();

        assert_eq!(map.get("LESS_THAN_1_KiB"), Some(&1));
        assert_eq!(map.get("1_KiB_TO_1_MiB"), Some(&1));
        assert_eq!(map.get("1_MiB_TO_10_MiB"), Some(&2));
        assert_eq!(map.get("10_MiB_TO_64_MiB"), Some(&0));
    }

    #[test]
    fn test_versions_histogram() {
        let mut histogram = VersionsHistogram::new();

        // Add some version counts
        histogram.add(1); // 1_VERSION
        histogram.add(5); // 2_TO_10_VERSIONS
        histogram.add(50); // 11_TO_100_VERSIONS
        histogram.add(500); // 101_TO_1000_VERSIONS

        let map = histogram.to_map();

        assert_eq!(map.get("1_VERSION"), Some(&1));
        assert_eq!(map.get("2_TO_10_VERSIONS"), Some(&1));
        assert_eq!(map.get("11_TO_100_VERSIONS"), Some(&1));
        assert_eq!(map.get("101_TO_1000_VERSIONS"), Some(&1));
    }

    #[test]
    fn test_histogram_merge() {
        let mut histogram1 = SizeHistogram::new();
        histogram1.add(1024);
        histogram1.add(1024 * 1024);

        let mut histogram2 = SizeHistogram::new();
        histogram2.add(1024);
        histogram2.add(5 * 1024 * 1024);

        histogram1.merge(&histogram2);

        let map = histogram1.to_map();
        assert_eq!(map.get("1_KiB_TO_1_MiB"), Some(&2)); // 1 from histogram1 + 1 from histogram2
        assert_eq!(map.get("1_MiB_TO_10_MiB"), Some(&2)); // 1 from histogram1 + 1 from histogram2
    }

    #[test]
    fn test_histogram_reset() {
        let mut histogram = SizeHistogram::new();
        histogram.add(1024);
        histogram.add(1024 * 1024);

        assert_eq!(histogram.total_count(), 2);

        histogram.reset();
        assert_eq!(histogram.total_count(), 0);
    }
}
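For orientation, a minimal sketch of how the two histograms above compose during a scan. The `record_object` helper and the per-bucket/cluster split are hypothetical; only the `new`, `add`, `merge`, and `to_map` calls come from the code above.

```rust
// Hypothetical helper: fold one object's size and version count
// into the scan's running histograms.
fn record_object(
    sizes: &mut SizeHistogram,
    versions: &mut VersionsHistogram,
    object_size: u64,
    version_count: u64,
) {
    sizes.add(object_size);
    versions.add(version_count);
}

fn example() {
    let mut bucket_sizes = SizeHistogram::new();
    let mut bucket_versions = VersionsHistogram::new();
    record_object(&mut bucket_sizes, &mut bucket_versions, 4 * 1024 * 1024, 3);

    // Cluster-wide totals accumulate per-bucket histograms via merge().
    let mut cluster_sizes = SizeHistogram::new();
    cluster_sizes.merge(&bucket_sizes);
    assert_eq!(cluster_sizes.to_map().get("1_MiB_TO_10_MiB"), Some(&1));
}
```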
284
crates/ahm/src/scanner/metrics.rs
Normal file
@@ -0,0 +1,284 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::{
    collections::HashMap,
    sync::atomic::{AtomicU64, Ordering},
    time::{Duration, SystemTime},
};

use serde::{Deserialize, Serialize};
use tracing::info;

/// Scanner metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ScannerMetrics {
    /// Total objects scanned since server start
    pub objects_scanned: u64,
    /// Total object versions scanned since server start
    pub versions_scanned: u64,
    /// Total directories scanned since server start
    pub directories_scanned: u64,
    /// Total bucket scans started since server start
    pub bucket_scans_started: u64,
    /// Total bucket scans finished since server start
    pub bucket_scans_finished: u64,
    /// Total objects with health issues found
    pub objects_with_issues: u64,
    /// Total heal tasks queued
    pub heal_tasks_queued: u64,
    /// Total heal tasks completed
    pub heal_tasks_completed: u64,
    /// Total heal tasks failed
    pub heal_tasks_failed: u64,
    /// Last scan activity time
    pub last_activity: Option<SystemTime>,
    /// Current scan cycle
    pub current_cycle: u64,
    /// Total scan cycles completed
    pub total_cycles: u64,
    /// Current scan duration
    pub current_scan_duration: Option<Duration>,
    /// Average scan duration
    pub avg_scan_duration: Duration,
    /// Objects scanned per second
    pub objects_per_second: f64,
    /// Buckets scanned per second
    pub buckets_per_second: f64,
    /// Storage metrics by bucket
    pub bucket_metrics: HashMap<String, BucketMetrics>,
    /// Disk metrics
    pub disk_metrics: HashMap<String, DiskMetrics>,
}

/// Bucket-specific metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct BucketMetrics {
    /// Bucket name
    pub bucket: String,
    /// Total objects in bucket
    pub total_objects: u64,
    /// Total size of objects in bucket (bytes)
    pub total_size: u64,
    /// Objects with health issues
    pub objects_with_issues: u64,
    /// Last scan time
    pub last_scan_time: Option<SystemTime>,
    /// Scan duration
    pub scan_duration: Option<Duration>,
    /// Heal tasks queued for this bucket
    pub heal_tasks_queued: u64,
    /// Heal tasks completed for this bucket
    pub heal_tasks_completed: u64,
    /// Heal tasks failed for this bucket
    pub heal_tasks_failed: u64,
}

/// Disk-specific metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct DiskMetrics {
    /// Disk path
    pub disk_path: String,
    /// Total disk space (bytes)
    pub total_space: u64,
    /// Used disk space (bytes)
    pub used_space: u64,
    /// Free disk space (bytes)
    pub free_space: u64,
    /// Objects scanned on this disk
    pub objects_scanned: u64,
    /// Objects with issues on this disk
    pub objects_with_issues: u64,
    /// Last scan time
    pub last_scan_time: Option<SystemTime>,
    /// Whether disk is online
    pub is_online: bool,
    /// Whether disk is being scanned
    pub is_scanning: bool,
}

/// Thread-safe metrics collector
pub struct MetricsCollector {
    /// Atomic counters for real-time metrics
    objects_scanned: AtomicU64,
    versions_scanned: AtomicU64,
    directories_scanned: AtomicU64,
    bucket_scans_started: AtomicU64,
    bucket_scans_finished: AtomicU64,
    objects_with_issues: AtomicU64,
    heal_tasks_queued: AtomicU64,
    heal_tasks_completed: AtomicU64,
    heal_tasks_failed: AtomicU64,
    current_cycle: AtomicU64,
    total_cycles: AtomicU64,
}

impl MetricsCollector {
    /// Create a new metrics collector
    pub fn new() -> Self {
        Self {
            objects_scanned: AtomicU64::new(0),
            versions_scanned: AtomicU64::new(0),
            directories_scanned: AtomicU64::new(0),
            bucket_scans_started: AtomicU64::new(0),
            bucket_scans_finished: AtomicU64::new(0),
            objects_with_issues: AtomicU64::new(0),
            heal_tasks_queued: AtomicU64::new(0),
            heal_tasks_completed: AtomicU64::new(0),
            heal_tasks_failed: AtomicU64::new(0),
            current_cycle: AtomicU64::new(0),
            total_cycles: AtomicU64::new(0),
        }
    }

    /// Increment objects scanned count
    pub fn increment_objects_scanned(&self, count: u64) {
        self.objects_scanned.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment versions scanned count
    pub fn increment_versions_scanned(&self, count: u64) {
        self.versions_scanned.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment directories scanned count
    pub fn increment_directories_scanned(&self, count: u64) {
        self.directories_scanned.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment bucket scans started count
    pub fn increment_bucket_scans_started(&self, count: u64) {
        self.bucket_scans_started.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment bucket scans finished count
    pub fn increment_bucket_scans_finished(&self, count: u64) {
        self.bucket_scans_finished.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment objects with issues count
    pub fn increment_objects_with_issues(&self, count: u64) {
        self.objects_with_issues.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment heal tasks queued count
    pub fn increment_heal_tasks_queued(&self, count: u64) {
        self.heal_tasks_queued.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment heal tasks completed count
    pub fn increment_heal_tasks_completed(&self, count: u64) {
        self.heal_tasks_completed.fetch_add(count, Ordering::Relaxed);
    }

    /// Increment heal tasks failed count
    pub fn increment_heal_tasks_failed(&self, count: u64) {
        self.heal_tasks_failed.fetch_add(count, Ordering::Relaxed);
    }

    /// Set current cycle
    pub fn set_current_cycle(&self, cycle: u64) {
        self.current_cycle.store(cycle, Ordering::Relaxed);
    }

    /// Increment total cycles
    pub fn increment_total_cycles(&self) {
        self.total_cycles.fetch_add(1, Ordering::Relaxed);
    }

    /// Get current metrics snapshot
    pub fn get_metrics(&self) -> ScannerMetrics {
        ScannerMetrics {
            objects_scanned: self.objects_scanned.load(Ordering::Relaxed),
            versions_scanned: self.versions_scanned.load(Ordering::Relaxed),
            directories_scanned: self.directories_scanned.load(Ordering::Relaxed),
            bucket_scans_started: self.bucket_scans_started.load(Ordering::Relaxed),
            bucket_scans_finished: self.bucket_scans_finished.load(Ordering::Relaxed),
            objects_with_issues: self.objects_with_issues.load(Ordering::Relaxed),
            heal_tasks_queued: self.heal_tasks_queued.load(Ordering::Relaxed),
            heal_tasks_completed: self.heal_tasks_completed.load(Ordering::Relaxed),
            heal_tasks_failed: self.heal_tasks_failed.load(Ordering::Relaxed),
            last_activity: Some(SystemTime::now()),
            current_cycle: self.current_cycle.load(Ordering::Relaxed),
            total_cycles: self.total_cycles.load(Ordering::Relaxed),
            current_scan_duration: None, // Will be set by scanner
            avg_scan_duration: Duration::ZERO, // Will be calculated
            objects_per_second: 0.0, // Will be calculated
            buckets_per_second: 0.0, // Will be calculated
            bucket_metrics: HashMap::new(), // Will be populated by scanner
            disk_metrics: HashMap::new(), // Will be populated by scanner
        }
    }

    /// Reset all metrics
    pub fn reset(&self) {
        self.objects_scanned.store(0, Ordering::Relaxed);
        self.versions_scanned.store(0, Ordering::Relaxed);
        self.directories_scanned.store(0, Ordering::Relaxed);
        self.bucket_scans_started.store(0, Ordering::Relaxed);
        self.bucket_scans_finished.store(0, Ordering::Relaxed);
        self.objects_with_issues.store(0, Ordering::Relaxed);
        self.heal_tasks_queued.store(0, Ordering::Relaxed);
        self.heal_tasks_completed.store(0, Ordering::Relaxed);
        self.heal_tasks_failed.store(0, Ordering::Relaxed);
        self.current_cycle.store(0, Ordering::Relaxed);
        self.total_cycles.store(0, Ordering::Relaxed);

        info!("Scanner metrics reset");
    }
}

impl Default for MetricsCollector {
    fn default() -> Self {
        Self::new()
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_metrics_collector_creation() {
        let collector = MetricsCollector::new();
        let metrics = collector.get_metrics();
        assert_eq!(metrics.objects_scanned, 0);
        assert_eq!(metrics.versions_scanned, 0);
    }

    #[test]
    fn test_metrics_increment() {
        let collector = MetricsCollector::new();

        collector.increment_objects_scanned(10);
        collector.increment_versions_scanned(5);
        collector.increment_objects_with_issues(2);

        let metrics = collector.get_metrics();
        assert_eq!(metrics.objects_scanned, 10);
        assert_eq!(metrics.versions_scanned, 5);
        assert_eq!(metrics.objects_with_issues, 2);
    }

    #[test]
    fn test_metrics_reset() {
        let collector = MetricsCollector::new();

        collector.increment_objects_scanned(10);
        collector.reset();

        let metrics = collector.get_metrics();
        assert_eq!(metrics.objects_scanned, 0);
    }
}
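A minimal sketch of how the collector above might be shared across scanner tasks. The `Arc` wrapper and the thread layout are assumptions; every method call exists in the `MetricsCollector` impl above.

```rust
use std::sync::Arc;

fn example() {
    let collector = Arc::new(MetricsCollector::new());

    // Each worker holds a clone of the Arc and updates counters
    // lock-free through the atomic increment methods.
    let worker = Arc::clone(&collector);
    let handle = std::thread::spawn(move || {
        worker.increment_objects_scanned(100);
        worker.increment_heal_tasks_queued(2);
    });
    handle.join().unwrap();

    // A reporting task snapshots the counters into a ScannerMetrics value.
    let snapshot = collector.get_metrics();
    assert_eq!(snapshot.objects_scanned, 100);
    assert_eq!(snapshot.heal_tasks_queued, 2);
}
```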
25
crates/ahm/src/scanner/mod.rs
Normal file
@@ -0,0 +1,25 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

pub mod data_scanner;
pub mod data_usage;
pub mod histogram;
pub mod metrics;

// Re-export main types for convenience
pub use data_scanner::Scanner;
pub use data_usage::{
    load_data_usage_from_backend, store_data_usage_in_backend, BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo,
};
pub use metrics::ScannerMetrics;
@@ -19,6 +19,10 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Application authentication and authorization for RustFS, providing secure access control and user management."
keywords = ["authentication", "authorization", "security", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "authentication"]

[dependencies]
base64-simd = { workspace = true }
37
crates/appauth/README.md
Normal file
@@ -0,0 +1,37 @@
[](https://rustfs.com)

# RustFS AppAuth - Application Authentication

<p align="center">
  <strong>Application-level authentication and authorization module for RustFS distributed object storage</strong>
</p>

<p align="center">
  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
  <a href="https://docs.rustfs.com/en/">📖 Documentation</a>
  · <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
  · <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>

---

## 📖 Overview

**RustFS AppAuth** provides application-level authentication and authorization capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

## ✨ Features

- JWT-based authentication with secure token management
- RBAC (Role-Based Access Control) for fine-grained permissions
- Multi-tenant application isolation and management
- OAuth 2.0 and OpenID Connect integration
- API key management and rotation
- Session management with configurable expiration

## 📚 Documentation

For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -23,14 +23,14 @@ use std::io::{Error, Result};
#[derive(Serialize, Deserialize, Debug, Default, Clone)]
pub struct Token {
    pub name: String, // Application ID
    pub expired: u64, // Expiry time (UNIX timestamp)
    pub name: String, // Application ID
    pub expired: u64, // Expiry time (UNIX timestamp)
}

// Generate a Token with the public key
// [token] Token object
// [key] Public key string
// Returns the base64-encoded encrypted string
/// Generate a Token with the public key
/// [token] Token object
/// [key] Public key string
/// Returns the base64-encoded encrypted string
pub fn gencode(token: &Token, key: &str) -> Result<String> {
    let data = serde_json::to_vec(token)?;
    let public_key = RsaPublicKey::from_public_key_pem(key).map_err(Error::other)?;
@@ -38,10 +38,10 @@ pub fn gencode(token: &Token, key: &str) -> Result<String> {
    Ok(base64_simd::URL_SAFE_NO_PAD.encode_to_string(&encrypted_data))
}

// Parse a Token with the private key
// [token] Base64-encoded encrypted string
// [key] Private key string
// Returns the Token object
/// Parse a Token with the private key
/// [token] Base64-encoded encrypted string
/// [key] Private key string
/// Returns the Token object
pub fn parse(token: &str, key: &str) -> Result<Token> {
    let encrypted_data = base64_simd::URL_SAFE_NO_PAD
        .decode_to_vec(token.as_bytes())
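A round-trip sketch of the two functions above. The PEM strings are placeholders and the expiry value is illustrative; the `gencode`/`parse` signatures come directly from the diff.

```rust
fn example(public_pem: &str, private_pem: &str) -> std::io::Result<()> {
    let token = Token {
        name: "my-app".to_string(),
        expired: 1_735_689_600, // some future UNIX timestamp
    };

    // Encrypt with the public key, then recover with the private key.
    let encoded = gencode(&token, public_pem)?;
    let decoded = parse(&encoded, private_pem)?;
    assert_eq!(decoded.name, token.name);
    Ok(())
}
```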
@@ -19,6 +19,10 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Common utilities and data structures for RustFS, providing shared functionality across the project."
keywords = ["common", "utilities", "data-structures", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "data-structures"]

[lints]
workspace = true
37
crates/common/README.md
Normal file
@@ -0,0 +1,37 @@
[](https://rustfs.com)

# RustFS Common - Shared Components

<p align="center">
  <strong>Shared components and common utilities module for RustFS distributed object storage</strong>
</p>

<p align="center">
  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
  <a href="https://docs.rustfs.com/en/">📖 Documentation</a>
  · <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
  · <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>

---

## 📖 Overview

**RustFS Common** provides shared components and common utilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

## ✨ Features

- Shared data structures and type definitions
- Common error handling and result types
- Utility functions used across modules
- Configuration structures and validation
- Logging and tracing infrastructure
- Cross-platform compatibility helpers

## 📚 Documentation

For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -19,6 +19,10 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Configuration management for RustFS, providing a centralized way to manage application settings and features."
keywords = ["configuration", "settings", "management", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "config"]

[dependencies]
const-str = { workspace = true, optional = true }
37
crates/config/README.md
Normal file
@@ -0,0 +1,37 @@
[](https://rustfs.com)

# RustFS Config - Configuration Management

<p align="center">
  <strong>Configuration management and validation module for RustFS distributed object storage</strong>
</p>

<p align="center">
  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
  <a href="https://docs.rustfs.com/en/">📖 Documentation</a>
  · <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
  · <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>

---

## 📖 Overview

**RustFS Config** provides configuration management and validation capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

## ✨ Features

- Multi-format configuration support (TOML, YAML, JSON, ENV)
- Environment variable integration and override
- Configuration validation and type safety
- Hot-reload capabilities for dynamic updates
- Default value management and fallbacks
- Secure credential handling and encryption

## 📚 Documentation

For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Cryptography and security features for RustFS, providing encryption, hashing, and secure authentication mechanisms."
keywords = ["cryptography", "encryption", "hashing", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "cryptography"]
documentation = "https://docs.rs/rustfs-crypto/latest/rustfs_crypto/"

[lints]
workspace = true
37
crates/crypto/README.md
Normal file
@@ -0,0 +1,37 @@
[](https://rustfs.com)

# RustFS Crypto - Cryptographic Operations

<p align="center">
  <strong>High-performance cryptographic operations module for RustFS distributed object storage</strong>
</p>

<p align="center">
  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
  <a href="https://docs.rustfs.com/en/">📖 Documentation</a>
  · <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
  · <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>

---

## 📖 Overview

**RustFS Crypto** provides high-performance cryptographic operations for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

## ✨ Features

- AES-GCM encryption with hardware acceleration
- RSA and ECDSA digital signature support
- Secure hash functions (SHA-256, BLAKE3)
- Key derivation and management utilities
- Stream ciphers for large data encryption
- Hardware security module integration

## 📚 Documentation

For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -19,6 +19,12 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Erasure coding storage backend for RustFS, providing efficient data storage and retrieval with redundancy."
keywords = ["erasure-coding", "storage", "rustfs", "Minio", "solomon"]
categories = ["web-programming", "development-tools", "filesystem"]
documentation = "https://docs.rs/rustfs-ecstore/latest/rustfs_ecstore/"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lints]
64
crates/ecstore/README.md
Normal file
@@ -0,0 +1,64 @@
[](https://rustfs.com)

# RustFS ECStore - Erasure Coding Storage

<p align="center">
  <strong>High-performance erasure coding storage engine for RustFS distributed object storage</strong>
</p>

<p align="center">
  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
  <a href="https://docs.rustfs.com/en/">📖 Documentation</a>
  · <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
  · <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>

---

## 📖 Overview

**RustFS ECStore** provides erasure coding storage capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

## ✨ Features

- Reed-Solomon erasure coding implementation
- Configurable redundancy levels (N+K schemes)
- Automatic data healing and reconstruction
- Multi-drive support with intelligent placement
- Parallel encoding/decoding for performance
- Efficient disk space utilization

## 📚 Documentation

For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

## 📄 License

This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

```
Copyright 2024 RustFS Team

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```

---

<p align="center">
  <strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
  All other trademarks are the property of their respective owners.
</p>

<p align="center">
  Made with ❤️ by the RustFS Storage Team
</p>
@@ -1,103 +1,19 @@
# ECStore - Erasure Coding Storage

ECStore provides erasure coding functionality for the RustFS project, using high-performance Reed-Solomon SIMD
implementation for optimal performance.
ECStore provides erasure coding functionality for the RustFS project, using high-performance Reed-Solomon SIMD implementation for optimal performance.

## Reed-Solomon Implementation
## Features

### SIMD Backend (Only)
- **Reed-Solomon Implementation**: High-performance SIMD-optimized erasure coding
- **Cross-Platform Compatibility**: Support for x86_64, aarch64, and other architectures
- **Performance Optimized**: SIMD instructions for maximum throughput
- **Thread Safety**: Safe concurrent access with caching optimizations
- **Scalable**: Excellent performance for high-throughput scenarios

- **Performance**: Uses SIMD optimization for high-performance encoding/decoding
- **Compatibility**: Works with any shard size through SIMD implementation
- **Reliability**: High-performance SIMD implementation for large data processing
- **Use case**: Optimized for maximum performance in large data processing scenarios
## Documentation

### Usage Example
For complete documentation, examples, and usage information, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

```rust
use rustfs_ecstore::erasure_coding::Erasure;
## License

// Create erasure coding instance
// 4 data shards, 2 parity shards, 1KB block size
let erasure = Erasure::new(4, 2, 1024);

// Encode data
let data = b"hello world from rustfs erasure coding";
let shards = erasure.encode_data(data)?;

// Simulate loss of one shard
let mut shards_opt: Vec<Option<Vec<u8>>> = shards
    .iter()
    .map(|b| Some(b.to_vec()))
    .collect();
shards_opt[2] = None; // Lose shard 2

// Reconstruct missing data
erasure.decode_data(&mut shards_opt)?;

// Recover original data
let mut recovered = Vec::new();
for shard in shards_opt.iter().take(4) { // Only data shards
    recovered.extend_from_slice(shard.as_ref().unwrap());
}
recovered.truncate(data.len());
assert_eq!(&recovered, data);
```
## Performance Considerations

### SIMD Implementation Benefits

- **High Throughput**: Optimized for large block sizes (>= 1KB recommended)
- **CPU Optimization**: Leverages modern CPU SIMD instructions
- **Scalability**: Excellent performance for high-throughput scenarios

### Implementation Details

#### `reed-solomon-simd`

- **Instance Caching**: Encoder/decoder instances are cached and reused for optimal performance
- **Thread Safety**: Thread-safe with RwLock-based caching
- **SIMD Optimization**: Leverages CPU SIMD instructions for maximum performance
- **Reset Capability**: Cached instances are reset for different parameters, avoiding unnecessary allocations

### Performance Tips

1. **Batch Operations**: When possible, batch multiple small operations into larger blocks
2. **Block Size Optimization**: Use block sizes that are multiples of 64 bytes for optimal SIMD performance
3. **Memory Allocation**: Pre-allocate buffers when processing multiple blocks (see the sketch after this list)
4. **Cache Warming**: Initial operations may be slower due to cache setup; subsequent operations benefit from caching
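A minimal sketch of those tips in practice, assuming only the `Erasure` API from the usage example above. The 64 KiB block size, the `std::io::Result` error type, and the exact shard buffer type are illustrative assumptions, not a prescribed configuration.

```rust
use rustfs_ecstore::erasure_coding::Erasure;

// A sketch, not a benchmark: reuse one instance and pre-allocate output.
fn encode_stream(blocks: &[Vec<u8>]) -> std::io::Result<Vec<Vec<Vec<u8>>>> {
    // One instance reused across blocks keeps the cached encoder warm.
    let erasure = Erasure::new(4, 2, 64 * 1024); // block size is a multiple of 64 bytes

    // Pre-allocate the output vector instead of growing it in the hot loop.
    let mut encoded = Vec::with_capacity(blocks.len());
    for block in blocks {
        let shards = erasure.encode_data(block)?;
        encoded.push(shards.iter().map(|s| s.to_vec()).collect());
    }
    Ok(encoded)
}
```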
## Cross-Platform Compatibility

The SIMD implementation supports:

- x86_64 with advanced SIMD instructions (AVX2, SSE)
- aarch64 (ARM64) with NEON SIMD optimizations
- Other architectures with fallback implementations

The implementation automatically selects the best available SIMD instructions for the target platform, providing optimal performance across different architectures.

## Testing and Benchmarking

Run performance benchmarks:

```bash
# Run erasure coding benchmarks
cargo bench --bench erasure_benchmark

# Run comparison benchmarks
cargo bench --bench comparison_benchmark

# Generate benchmark reports
./run_benchmarks.sh
```

## Error Handling

All operations return `Result` types with comprehensive error information:

- Encoding errors: Invalid parameters, insufficient memory
- Decoding errors: Too many missing shards, corrupted data
- Configuration errors: Invalid shard counts, unsupported parameters
This project is licensed under the Apache License, Version 2.0.
@@ -1,4 +1,3 @@
#![allow(unused_imports)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,6 +11,7 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
@@ -41,7 +41,7 @@ const ERR_LIFECYCLE_DUPLICATE_ID: &str = "Rule ID must be unique. Found same ID
const _ERR_XML_NOT_WELL_FORMED: &str =
    "The XML you provided was not well-formed or did not validate against our published schema";
const ERR_LIFECYCLE_BUCKET_LOCKED: &str =
    "ExpiredObjectAllVersions element and DelMarkerExpiration action cannot be used on an object locked bucket";
    "ExpiredObjectAllVersions element and DelMarkerExpiration action cannot be used on a retention bucket";

#[derive(Debug, Clone, PartialEq, Eq)]
pub enum IlmAction {
@@ -102,30 +102,30 @@ impl RuleValidate for LifecycleRule {
    }

    fn validate_status(&self) -> Result<()> {
        if self.Status.len() == 0 {
            return errEmptyRuleStatus;
        if self.status.len() == 0 {
            return ErrEmptyRuleStatus;
        }

        if self.Status != Enabled && self.Status != Disabled {
            return errInvalidRuleStatus;
        if self.status != Enabled && self.status != Disabled {
            return ErrInvalidRuleStatus;
        }
        Ok(())
    }

    fn validate_expiration(&self) -> Result<()> {
        self.Expiration.Validate();
        self.expiration.validate();
    }

    fn validate_noncurrent_expiration(&self) -> Result<()> {
        self.NoncurrentVersionExpiration.Validate()
        self.noncurrent_version_expiration.validate()
    }

    fn validate_prefix_and_filter(&self) -> Result<()> {
        if !self.Prefix.set && self.Filter.IsEmpty() || self.Prefix.set && !self.Filter.IsEmpty() {
            return errXMLNotWellFormed;
        if !self.prefix.set && self.Filter.isempty() || self.prefix.set && !self.filter.isempty() {
            return ErrXMLNotWellFormed;
        }
        if !self.Prefix.set {
            return self.Filter.Validate();
        if !self.prefix.set {
            return self.filter.validate();
        }
        Ok(())
    }
@@ -267,7 +267,7 @@ impl Lifecycle for BucketLifecycleConfiguration {
        r.validate()?;
        if let Some(expiration) = r.expiration.as_ref() {
            if let Some(expired_object_delete_marker) = expiration.expired_object_delete_marker {
                if lr_retention && (!expired_object_delete_marker) {
                if lr_retention && (expired_object_delete_marker) {
                    return Err(std::io::Error::other(ERR_LIFECYCLE_BUCKET_LOCKED));
                }
            }
@@ -20,12 +20,12 @@
#![allow(clippy::all)]

use lazy_static::lazy_static;
use rustfs_utils::HashAlgorithm;
use std::collections::HashMap;

use crate::client::{api_put_object::PutObjectOptions, api_s3_datatypes::ObjectPart};
use crate::{disk::DiskAPI, store_api::GetObjectReader};
use rustfs_utils::crypto::{base64_decode, base64_encode};
use rustfs_utils::hasher::{Hasher, Sha256};
use s3s::header::{
    X_AMZ_CHECKSUM_ALGORITHM, X_AMZ_CHECKSUM_CRC32, X_AMZ_CHECKSUM_CRC32C, X_AMZ_CHECKSUM_SHA1, X_AMZ_CHECKSUM_SHA256,
};
@@ -133,7 +133,7 @@ impl ChecksumMode {
        }
    }

    pub fn hasher(&self) -> Result<Box<dyn Hasher>, std::io::Error> {
    pub fn hasher(&self) -> Result<HashAlgorithm, std::io::Error> {
        match /*C_ChecksumMask & **/self {
            /*ChecksumMode::ChecksumCRC32 => {
                return Ok(Box::new(crc32fast::Hasher::new()));
@@ -145,7 +145,7 @@ impl ChecksumMode {
                return Ok(Box::new(sha1::new()));
            }*/
            ChecksumMode::ChecksumSHA256 => {
                return Ok(Box::new(Sha256::new()));
                return Ok(HashAlgorithm::SHA256);
            }
            /*ChecksumMode::ChecksumCRC64NVME => {
                return Ok(Box::new(crc64nvme.New());
@@ -170,8 +170,8 @@ impl ChecksumMode {
            return Ok("".to_string());
        }
        let mut h = self.hasher()?;
        h.write(b);
        Ok(base64_encode(h.sum().as_bytes()))
        let hash = h.hash_encode(b);
        Ok(base64_encode(hash.as_ref()))
    }

    pub fn to_string(&self) -> String {
@@ -201,15 +201,15 @@ impl ChecksumMode {
        }
    }

    pub fn check_sum_reader(&self, r: GetObjectReader) -> Result<Checksum, std::io::Error> {
        let mut h = self.hasher()?;
        Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
    }
    // pub fn check_sum_reader(&self, r: GetObjectReader) -> Result<Checksum, std::io::Error> {
    //     let mut h = self.hasher()?;
    //     Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
    // }

    pub fn check_sum_bytes(&self, b: &[u8]) -> Result<Checksum, std::io::Error> {
        let mut h = self.hasher()?;
        Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
    }
    // pub fn check_sum_bytes(&self, b: &[u8]) -> Result<Checksum, std::io::Error> {
    //     let mut h = self.hasher()?;
    //     Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
    // }

    pub fn composite_checksum(&self, p: &mut [ObjectPart]) -> Result<Checksum, std::io::Error> {
        if !self.can_composite() {
@@ -227,10 +227,10 @@ impl ChecksumMode {
        let c = self.base();
        let crc_bytes = Vec::<u8>::with_capacity(p.len() * self.raw_byte_len() as usize);
        let mut h = self.hasher()?;
        h.write(&crc_bytes);
        let hash = h.hash_encode(crc_bytes.as_ref());
        Ok(Checksum {
            checksum_type: self.clone(),
            r: h.sum().as_bytes().to_vec(),
            r: hash.as_ref().to_vec(),
            computed: false,
        })
    }
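For context, a minimal sketch of the checksum flow after this change, where `hasher()` yields a `HashAlgorithm` and `hash_encode` replaces the old write/sum pair. The `ChecksumMode::ChecksumSHA256` value and the `base64_encode` helper are taken from the diff above; the function itself is illustrative.

```rust
fn sha256_checksum_base64(payload: &[u8]) -> Result<String, std::io::Error> {
    let mode = ChecksumMode::ChecksumSHA256;
    // hasher() now returns a rustfs_utils::HashAlgorithm instead of a boxed Hasher.
    let h = mode.hasher()?;
    // hash_encode() consumes the input and produces the digest in one call.
    let digest = h.hash_encode(payload);
    Ok(base64_encode(digest.as_ref()))
}
```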
@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
184
crates/ecstore/src/client/api_get_object_acl.rs
Normal file
@@ -0,0 +1,184 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]

use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use s3s::dto::Owner;
use std::collections::HashMap;
use std::io::Cursor;
use tokio::io::BufReader;

use crate::client::{
    api_error_response::{err_invalid_argument, http_resp_to_error_response},
    api_get_options::GetObjectOptions,
    transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;

#[derive(Clone, Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Grantee {
    pub id: String,
    pub display_name: String,
    pub uri: String,
}

#[derive(Clone, Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Grant {
    pub grantee: Grantee,
    pub permission: String,
}

#[derive(Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct AccessControlList {
    pub grant: Vec<Grant>,
    pub permission: String,
}

#[derive(Debug, Default, serde::Deserialize)]
pub struct AccessControlPolicy {
    #[serde(skip)]
    owner: Owner,
    pub access_control_list: AccessControlList,
}

impl TransitionClient {
    pub async fn get_object_acl(&self, bucket_name: &str, object_name: &str) -> Result<ObjectInfo, std::io::Error> {
        let mut url_values = HashMap::new();
        url_values.insert("acl".to_string(), "".to_string());
        let mut resp = self
            .execute_method(
                http::Method::GET,
                &mut RequestMetadata {
                    bucket_name: bucket_name.to_string(),
                    object_name: object_name.to_string(),
                    query_values: url_values,
                    custom_header: HeaderMap::new(),
                    content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
                    content_body: ReaderImpl::Body(Bytes::new()),
                    content_length: 0,
                    content_md5_base64: "".to_string(),
                    stream_sha256: false,
                    trailer: HeaderMap::new(),
                    pre_sign_url: Default::default(),
                    add_crc: Default::default(),
                    extra_pre_sign_header: Default::default(),
                    bucket_location: Default::default(),
                    expires: Default::default(),
                },
            )
            .await?;

        if resp.status() != http::StatusCode::OK {
            let b = resp.body().bytes().expect("err").to_vec();
            return Err(std::io::Error::other(http_resp_to_error_response(resp, b, bucket_name, object_name)));
        }

        let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
        let mut res = match serde_xml_rs::from_str::<AccessControlPolicy>(&String::from_utf8(b).unwrap()) {
            Ok(result) => result,
            Err(err) => {
                return Err(std::io::Error::other(err.to_string()));
            }
        };

        let mut obj_info = self
            .stat_object(bucket_name, object_name, &GetObjectOptions::default())
            .await?;

        obj_info.owner.display_name = res.owner.display_name.clone();
        obj_info.owner.id = res.owner.id.clone();

        //obj_info.grant.extend(res.access_control_list.grant);

        let canned_acl = get_canned_acl(&res);
        if canned_acl != "" {
            obj_info
                .metadata
                .insert("X-Amz-Acl", HeaderValue::from_str(&canned_acl).unwrap());
            return Ok(obj_info);
        }

        let grant_acl = get_amz_grant_acl(&res);
        /*for (k, v) in grant_acl {
            obj_info.metadata.insert(HeaderName::from_bytes(k.as_bytes()).unwrap(), HeaderValue::from_str(&v.to_string()).unwrap());
        }*/

        Ok(obj_info)
    }
}

fn get_canned_acl(ac_policy: &AccessControlPolicy) -> String {
    let grants = ac_policy.access_control_list.grant.clone();

    if grants.len() == 1 {
        if grants[0].grantee.uri == "" && grants[0].permission == "FULL_CONTROL" {
            return "private".to_string();
        }
    } else if grants.len() == 2 {
        for g in grants {
            if g.grantee.uri == "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" && &g.permission == "READ" {
                return "authenticated-read".to_string();
            }
            if g.grantee.uri == "http://acs.amazonaws.com/groups/global/AllUsers" && &g.permission == "READ" {
                return "public-read".to_string();
            }
            if g.permission == "READ" && g.grantee.id == ac_policy.owner.id.clone().unwrap() {
                return "bucket-owner-read".to_string();
            }
        }
    } else if grants.len() == 3 {
        for g in grants {
            if g.grantee.uri == "http://acs.amazonaws.com/groups/global/AllUsers" && g.permission == "WRITE" {
                return "public-read-write".to_string();
            }
        }
    }
    "".to_string()
}

pub fn get_amz_grant_acl(ac_policy: &AccessControlPolicy) -> HashMap<String, Vec<String>> {
    let grants = ac_policy.access_control_list.grant.clone();
    let mut res = HashMap::<String, Vec<String>>::new();

    for g in grants {
        let mut id = "id=".to_string();
        id.push_str(&g.grantee.id);
        let permission: &str = &g.permission;
        match permission {
            "READ" => {
                res.entry("X-Amz-Grant-Read".to_string()).or_insert(vec![]).push(id);
            }
            "WRITE" => {
                res.entry("X-Amz-Grant-Write".to_string()).or_insert(vec![]).push(id);
            }
            "READ_ACP" => {
                res.entry("X-Amz-Grant-Read-Acp".to_string()).or_insert(vec![]).push(id);
            }
            "WRITE_ACP" => {
                res.entry("X-Amz-Grant-Write-Acp".to_string()).or_insert(vec![]).push(id);
            }
            "FULL_CONTROL" => {
                res.entry("X-Amz-Grant-Full-Control".to_string()).or_insert(vec![]).push(id);
            }
            _ => (),
        }
    }
    res
}
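A minimal call-site sketch for the new API above. The client construction is elided, and reading the canned ACL back out of `metadata` relies on the `X-Amz-Acl` insertion shown in `get_object_acl`; bucket and object names are placeholders.

```rust
async fn print_acl(client: &TransitionClient) -> Result<(), std::io::Error> {
    let obj_info = client.get_object_acl("my-bucket", "my-object").await?;

    // get_object_acl stores a recognized canned ACL in the object metadata.
    if let Some(acl) = obj_info.metadata.get("X-Amz-Acl") {
        println!("canned acl: {:?}", acl);
    }
    println!("owner: {:?}", obj_info.owner.display_name);
    Ok(())
}
```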
244
crates/ecstore/src/client/api_get_object_attributes.rs
Normal file
@@ -0,0 +1,244 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]

use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use std::collections::HashMap;
use std::io::Cursor;
use time::OffsetDateTime;
use tokio::io::BufReader;

use crate::client::constants::{GET_OBJECT_ATTRIBUTES_MAX_PARTS, GET_OBJECT_ATTRIBUTES_TAGS, ISO8601_DATEFORMAT};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
use s3s::header::{
    X_AMZ_DELETE_MARKER, X_AMZ_MAX_PARTS, X_AMZ_METADATA_DIRECTIVE, X_AMZ_OBJECT_ATTRIBUTES, X_AMZ_PART_NUMBER_MARKER,
    X_AMZ_REQUEST_CHARGED, X_AMZ_RESTORE, X_AMZ_VERSION_ID,
};
use s3s::{Body, dto::Owner};

use crate::client::{
    api_error_response::err_invalid_argument,
    api_get_object_acl::AccessControlPolicy,
    api_get_options::GetObjectOptions,
    transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};

pub struct ObjectAttributesOptions {
    pub max_parts: i64,
    pub version_id: String,
    pub part_number_marker: i64,
    //server_side_encryption: encrypt::ServerSide,
}

pub struct ObjectAttributes {
    pub version_id: String,
    pub last_modified: OffsetDateTime,
    pub object_attributes_response: ObjectAttributesResponse,
}

impl ObjectAttributes {
    fn new() -> Self {
        Self {
            version_id: "".to_string(),
            last_modified: OffsetDateTime::now_utc(),
            object_attributes_response: ObjectAttributesResponse::new(),
        }
    }
}

#[derive(Debug, Default, serde::Deserialize)]
pub struct Checksum {
    checksum_crc32: String,
    checksum_crc32c: String,
    checksum_sha1: String,
    checksum_sha256: String,
}

impl Checksum {
    fn new() -> Self {
        Self {
            checksum_crc32: "".to_string(),
            checksum_crc32c: "".to_string(),
            checksum_sha1: "".to_string(),
            checksum_sha256: "".to_string(),
        }
    }
}

#[derive(Debug, Default, serde::Deserialize)]
pub struct ObjectParts {
    pub parts_count: i64,
    pub part_number_marker: i64,
    pub next_part_number_marker: i64,
    pub max_parts: i64,
    is_truncated: bool,
    parts: Vec<ObjectAttributePart>,
}

impl ObjectParts {
    fn new() -> Self {
        Self {
            parts_count: 0,
            part_number_marker: 0,
            next_part_number_marker: 0,
            max_parts: 0,
            is_truncated: false,
            parts: Vec::new(),
        }
    }
}

#[derive(Debug, Default, serde::Deserialize)]
pub struct ObjectAttributesResponse {
    pub etag: String,
    pub storage_class: String,
    pub object_size: i64,
    pub checksum: Checksum,
    pub object_parts: ObjectParts,
}

impl ObjectAttributesResponse {
    fn new() -> Self {
        Self {
            etag: "".to_string(),
            storage_class: "".to_string(),
            object_size: 0,
            checksum: Checksum::new(),
            object_parts: ObjectParts::new(),
        }
    }
}

#[derive(Debug, Default, serde::Deserialize)]
struct ObjectAttributePart {
    checksum_crc32: String,
    checksum_crc32c: String,
    checksum_sha1: String,
    checksum_sha256: String,
    part_number: i64,
    size: i64,
}

impl ObjectAttributes {
    pub async fn parse_response(&mut self, resp: &mut http::Response<Body>) -> Result<(), std::io::Error> {
        let h = resp.headers();
        let mod_time = OffsetDateTime::parse(h.get("Last-Modified").unwrap().to_str().unwrap(), ISO8601_DATEFORMAT).unwrap(); //RFC7231Time
        self.last_modified = mod_time;
        self.version_id = h.get(X_AMZ_VERSION_ID).unwrap().to_str().unwrap().to_string();

        let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
        let mut response = match serde_xml_rs::from_str::<ObjectAttributesResponse>(&String::from_utf8(b).unwrap()) {
            Ok(result) => result,
            Err(err) => {
                return Err(std::io::Error::other(err.to_string()));
            }
        };
        self.object_attributes_response = response;

        Ok(())
    }
}

impl TransitionClient {
    pub async fn get_object_attributes(
        &self,
        bucket_name: &str,
        object_name: &str,
        opts: ObjectAttributesOptions,
    ) -> Result<ObjectAttributes, std::io::Error> {
        let mut url_values = HashMap::new();
        url_values.insert("attributes".to_string(), "".to_string());
        if opts.version_id != "" {
            url_values.insert("versionId".to_string(), opts.version_id);
        }

        let mut headers = HeaderMap::new();
        headers.insert(X_AMZ_OBJECT_ATTRIBUTES, HeaderValue::from_str(GET_OBJECT_ATTRIBUTES_TAGS).unwrap());

        if opts.part_number_marker > 0 {
            headers.insert(
                X_AMZ_PART_NUMBER_MARKER,
                HeaderValue::from_str(&opts.part_number_marker.to_string()).unwrap(),
            );
        }

        if opts.max_parts > 0 {
            headers.insert(X_AMZ_MAX_PARTS, HeaderValue::from_str(&opts.max_parts.to_string()).unwrap());
        } else {
            headers.insert(
                X_AMZ_MAX_PARTS,
                HeaderValue::from_str(&GET_OBJECT_ATTRIBUTES_MAX_PARTS.to_string()).unwrap(),
            );
        }

        /*if opts.server_side_encryption.is_some() {
            opts.server_side_encryption.Marshal(headers);
        }*/

        let mut resp = self
            .execute_method(
                http::Method::HEAD,
                &mut RequestMetadata {
                    bucket_name: bucket_name.to_string(),
                    object_name: object_name.to_string(),
                    query_values: url_values,
                    custom_header: headers,
                    content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
                    content_md5_base64: "".to_string(),
                    content_body: ReaderImpl::Body(Bytes::new()),
                    content_length: 0,
                    stream_sha256: false,
                    trailer: HeaderMap::new(),
                    pre_sign_url: Default::default(),
                    add_crc: Default::default(),
                    extra_pre_sign_header: Default::default(),
                    bucket_location: Default::default(),
                    expires: Default::default(),
                },
            )
            .await?;

        let h = resp.headers();
        let has_etag = h.get("ETag").unwrap().to_str().unwrap();
        if !has_etag.is_empty() {
            return Err(std::io::Error::other(
                "get_object_attributes is not supported by the current endpoint version",
            ));
        }

        if resp.status() != http::StatusCode::OK {
            let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
            let err_body = String::from_utf8(b).unwrap();
            let mut er = match serde_xml_rs::from_str::<AccessControlPolicy>(&err_body) {
                Ok(result) => result,
                Err(err) => {
                    return Err(std::io::Error::other(err.to_string()));
                }
            };

            return Err(std::io::Error::other(er.access_control_list.permission));
        }

        let mut oa = ObjectAttributes::new();
        oa.parse_response(&mut resp).await?;

        Ok(oa)
    }
}
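And a matching call-site sketch for `get_object_attributes`. The option values, bucket, and object names are illustrative; the struct fields and return type come from the file above.

```rust
async fn show_attributes(client: &TransitionClient) -> Result<(), std::io::Error> {
    let opts = ObjectAttributesOptions {
        max_parts: 100,
        version_id: "".to_string(), // empty: latest version
        part_number_marker: 0,
    };
    let attrs = client.get_object_attributes("my-bucket", "my-object", opts).await?;
    println!(
        "etag={} size={} parts={}",
        attrs.object_attributes_response.etag,
        attrs.object_attributes_response.object_size,
        attrs.object_attributes_response.object_parts.parts_count,
    );
    Ok(())
}
```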
147
crates/ecstore/src/client/api_get_object_file.rs
Normal file
@@ -0,0 +1,147 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]

use bytes::Bytes;
use http::HeaderMap;
use std::io::Cursor;
#[cfg(not(windows))]
use std::os::unix::fs::MetadataExt;
#[cfg(not(windows))]
use std::os::unix::fs::OpenOptionsExt;
#[cfg(not(windows))]
use std::os::unix::fs::PermissionsExt;
#[cfg(windows)]
use std::os::windows::fs::MetadataExt;
use tokio::io::BufReader;

use crate::client::{
    api_error_response::err_invalid_argument,
    api_get_options::GetObjectOptions,
    transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};

impl TransitionClient {
    pub async fn fget_object(
        &self,
        bucket_name: &str,
        object_name: &str,
        file_path: &str,
        opts: GetObjectOptions,
    ) -> Result<(), std::io::Error> {
        match std::fs::metadata(file_path) {
            Ok(file_path_stat) => {
                let ft = file_path_stat.file_type();
                if ft.is_dir() {
                    return Err(std::io::Error::other(err_invalid_argument("filename is a directory.")));
                }
            }
            Err(err) => {
                return Err(std::io::Error::other(err));
            }
        }

        let path = std::path::Path::new(file_path);
        if let Some(parent) = path.parent() {
            if !parent.as_os_str().is_empty() {
                // Create the enclosing directory itself (not just its final
                // component, as the draft did) and restrict it to the owner
                // (0700), matching the upstream behavior this port follows.
                match std::fs::create_dir_all(parent) {
                    Ok(_) => {
                        #[cfg(not(windows))]
                        if let Ok(dir_stat) = parent.metadata() {
                            let mut perms = dir_stat.permissions();
                            perms.set_mode(0o700);
                            let _ = std::fs::set_permissions(parent, perms);
                        }
                    }
                    Err(err) => {
                        return Err(std::io::Error::other(err));
                    }
                }
            }
        }

        let object_stat = match self.stat_object(bucket_name, object_name, &opts).await {
            Ok(object_stat) => object_stat,
            Err(err) => {
                return Err(std::io::Error::other(err));
            }
        };

        let mut file_part_path = file_path.to_string();
        file_part_path.push_str("" /*sum_sha256_hex(object_stat.etag.as_bytes())*/);
        file_part_path.push_str(".part.rustfs");

        // Open the temporary part file for appending, creating it if missing;
        // a bare open() fails when the part file does not exist yet.
        #[cfg(not(windows))]
        let file_part = match std::fs::OpenOptions::new()
            .create(true)
            .append(true)
            .mode(0o600)
            .open(file_part_path.clone())
        {
            Ok(file_part) => file_part,
            Err(err) => {
                return Err(std::io::Error::other(err));
            }
        };
        #[cfg(windows)]
        let file_part = match std::fs::OpenOptions::new().create(true).append(true).open(file_part_path.clone()) {
            Ok(file_part) => file_part,
            Err(err) => {
                return Err(std::io::Error::other(err));
            }
        };

        let mut close_and_remove = true;
        /*defer(|| {
            if close_and_remove {
                _ = file_part.close();
                let _ = std::fs::remove(file_part_path);
            }
        });*/

        let st = match file_part.metadata() {
            Ok(st) => st,
            Err(err) => {
                return Err(std::io::Error::other(err));
            }
        };

        let mut opts = opts;
        // Resume from whatever has already been downloaded. Metadata::len()
        // is portable, so the resume logic applies on unix as well instead of
        // being gated behind cfg(windows) as in the draft.
        if st.len() > 0 {
            opts.set_range(st.len() as i64, 0);
        }

        let object_reader = match self.get_object(bucket_name, object_name, &opts).await {
            Ok(object_reader) => object_reader,
            Err(err) => {
                return Err(std::io::Error::other(err));
            }
        };

        /*if let Err(err) = std::fs::copy(file_part, object_reader) {
            return Err(std::io::Error::other(err));
        }*/

        close_and_remove = false;
        /*if let Err(err) = file_part.close() {
            return Err(std::io::Error::other(err));
        }*/

        if let Err(err) = std::fs::rename(file_part_path, file_path) {
            return Err(std::io::Error::other(err));
        }

        Ok(())
    }
}
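Editor's note: a hedged sketch of how the new `fget_object` might be called. The bucket, key, and path are placeholders, and `GetObjectOptions::default()` is an assumption (the struct appears in a later hunk, but its `Default` impl is not shown in this diff):

```rust
// Hypothetical caller of fget_object; names and paths are placeholders.
async fn download(client: &TransitionClient) -> Result<(), std::io::Error> {
    client
        .fget_object("my-bucket", "reports/2024.csv", "/tmp/2024.csv", GetObjectOptions::default())
        .await
}
```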
@@ -29,9 +29,9 @@ use crate::client::api_error_response::err_invalid_argument;
 #[derive(Default)]
 #[allow(dead_code)]
 pub struct AdvancedGetOptions {
-    replication_deletemarker: bool,
-    is_replication_ready_for_deletemarker: bool,
-    replication_proxy_request: String,
+    pub replication_delete_marker: bool,
+    pub is_replication_ready_for_delete_marker: bool,
+    pub replication_proxy_request: String,
 }

 pub struct GetObjectOptions {
@@ -1,4 +1,3 @@
-#![allow(clippy::map_entry)]
 // Copyright 2024 RustFS Team
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
@@ -1,4 +1,3 @@
-#![allow(clippy::map_entry)]
 // Copyright 2024 RustFS Team
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
@@ -25,7 +24,6 @@ use std::{collections::HashMap, sync::Arc};
 use time::{Duration, OffsetDateTime, macros::format_description};
 use tracing::{error, info, warn};

-use rustfs_utils::hasher::Hasher;
 use s3s::dto::{ObjectLockLegalHoldStatus, ObjectLockRetentionMode, ReplicationStatus};
 use s3s::header::{
     X_AMZ_OBJECT_LOCK_LEGAL_HOLD, X_AMZ_OBJECT_LOCK_MODE, X_AMZ_OBJECT_LOCK_RETAIN_UNTIL_DATE, X_AMZ_REPLICATION_STATUS,
@@ -364,18 +362,14 @@ impl TransitionClient {
         if opts.send_content_md5 {
             let mut md5_hasher = self.md5_hasher.lock().unwrap();
             let hash = md5_hasher.as_mut().expect("err");
-            hash.write(&buf[..length]);
-            md5_base64 = base64_encode(hash.sum().as_bytes());
+            let hash = hash.hash_encode(&buf[..length]);
+            md5_base64 = base64_encode(hash.as_ref());
         } else {
-            let csum;
-            {
-                let mut crc = opts.auto_checksum.hasher()?;
-                crc.reset();
-                crc.write(&buf[..length]);
-                csum = crc.sum();
-            }
+            let mut crc = opts.auto_checksum.hasher()?;
+            let csum = crc.hash_encode(&buf[..length]);

             if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
-                custom_header.insert(header_name, base64_encode(csum.as_bytes()).parse().unwrap());
+                custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
             } else {
                 warn!("Invalid header name: {}", opts.auto_checksum.key());
             }
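Editor's note: the same mechanical rewrite recurs through the checksum hunks in this PR: the stateful `Hasher` protocol (`reset`/`write`/`sum`) collapses into a single `hash_encode` call on `rustfs_utils::HashAlgorithm`. A minimal sketch of the new shape, assuming (as the hunks above imply) that the value returned by `hash_encode` exposes the digest bytes via `as_ref()`:

```rust
use rustfs_utils::{HashAlgorithm, crypto::base64_encode};

// One-shot hashing as adopted by this PR: no mutable hasher state survives
// between buffers. The return type of hash_encode is assumed to implement
// AsRef<[u8]>, matching its usage in the hunks above.
fn checksum_base64(buf: &[u8]) -> String {
    let digest = HashAlgorithm::SHA256.hash_encode(buf);
    base64_encode(digest.as_ref())
}
```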
@@ -1,4 +1,3 @@
-#![allow(clippy::map_entry)]
 // Copyright 2024 RustFS Team
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
@@ -31,7 +30,6 @@ use tracing::{error, info};
 use url::form_urlencoded::Serializer;
 use uuid::Uuid;

-use rustfs_utils::hasher::Hasher;
 use s3s::header::{X_AMZ_EXPIRATION, X_AMZ_VERSION_ID};
 use s3s::{Body, dto::StreamingBlob};
 //use crate::disk::{Reader, BufferReader};
@@ -117,8 +115,8 @@ impl TransitionClient {
         let length = buf.len();

         for (k, v) in hash_algos.iter_mut() {
-            v.write(&buf[..length]);
-            hash_sums.insert(k.to_string(), Vec::try_from(v.sum().as_bytes()).unwrap());
+            let hash = v.hash_encode(&buf[..length]);
+            hash_sums.insert(k.to_string(), hash.as_ref().to_vec());
         }

         //let rd = newHook(bytes.NewReader(buf[..length]), opts.progress);
@@ -134,15 +132,11 @@ impl TransitionClient {
             sha256_hex = hex_simd::encode_to_string(hash_sums["sha256"].clone(), hex_simd::AsciiCase::Lower);
         //}
         if hash_sums.len() == 0 {
-            let csum;
-            {
-                let mut crc = opts.auto_checksum.hasher()?;
-                crc.reset();
-                crc.write(&buf[..length]);
-                csum = crc.sum();
-            }
+            let mut crc = opts.auto_checksum.hasher()?;
+            let csum = crc.hash_encode(&buf[..length]);

             if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
-                custom_header.insert(header_name, base64_encode(csum.as_bytes()).parse().expect("err"));
+                custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
             } else {
                 warn!("Invalid header name: {}", opts.auto_checksum.key());
             }
@@ -297,8 +291,6 @@ impl TransitionClient {
         };

         let resp = self.execute_method(http::Method::PUT, &mut req_metadata).await?;
-        //defer closeResponse(resp)
-        //if resp.is_none() {
         if resp.status() != StatusCode::OK {
             return Err(std::io::Error::other(http_resp_to_error_response(
                 resp,
@@ -366,13 +358,13 @@ impl TransitionClient {
                 bucket_name: bucket_name.to_string(),
                 object_name: object_name.to_string(),
                 query_values: url_values,
+                custom_header: headers,
                 content_body: ReaderImpl::Body(complete_multipart_upload_buffer),
                 content_length: 100, //complete_multipart_upload_bytes.len(),
                 content_sha256_hex: "".to_string(), //hex_simd::encode_to_string(complete_multipart_upload_bytes, hex_simd::AsciiCase::Lower),
-                custom_header: headers,
+                content_md5_base64: "".to_string(),
                 stream_sha256: Default::default(),
                 trailer: Default::default(),
-                content_md5_base64: "".to_string(),
                 pre_sign_url: Default::default(),
                 add_crc: Default::default(),
                 extra_pre_sign_header: Default::default(),
@@ -1,4 +1,3 @@
-#![allow(clippy::map_entry)]
 // Copyright 2024 RustFS Team
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
@@ -40,7 +39,7 @@ use crate::client::{
     constants::ISO8601_DATEFORMAT,
     transition_api::{ReaderImpl, RequestMetadata, TransitionClient, UploadInfo},
 };
-use rustfs_utils::hasher::Hasher;

 use rustfs_utils::{crypto::base64_encode, path::trim_etag};
 use s3s::header::{X_AMZ_EXPIRATION, X_AMZ_VERSION_ID};
@@ -153,21 +152,16 @@ impl TransitionClient {
         if opts.send_content_md5 {
             let mut md5_hasher = self.md5_hasher.lock().unwrap();
             let md5_hash = md5_hasher.as_mut().expect("err");
-            md5_hash.reset();
-            md5_hash.write(&buf[..length]);
-            md5_base64 = base64_encode(md5_hash.sum().as_bytes());
+            let hash = md5_hash.hash_encode(&buf[..length]);
+            md5_base64 = base64_encode(hash.as_ref());
         } else {
-            let csum;
-            {
-                let mut crc = opts.auto_checksum.hasher()?;
-                crc.reset();
-                crc.write(&buf[..length]);
-                csum = crc.sum();
-            }
-            if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key_capitalized().as_bytes()) {
-                custom_header.insert(header_name, HeaderValue::from_str(&base64_encode(csum.as_bytes())).expect("err"));
+            let mut crc = opts.auto_checksum.hasher()?;
+            let csum = crc.hash_encode(&buf[..length]);
+
+            if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
+                custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
             } else {
-                warn!("Invalid header name: {}", opts.auto_checksum.key_capitalized());
+                warn!("Invalid header name: {}", opts.auto_checksum.key());
             }
         }
@@ -308,17 +302,11 @@ impl TransitionClient {

         let mut custom_header = HeaderMap::new();
         if !opts.send_content_md5 {
-            let csum;
-            {
-                let mut crc = opts.auto_checksum.hasher()?;
-                crc.reset();
-                crc.write(&buf[..length]);
-                csum = crc.sum();
-            }
+            let mut crc = opts.auto_checksum.hasher()?;
+            let csum = crc.hash_encode(&buf[..length]);

             if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
-                if let Ok(header_value) = HeaderValue::from_str(&base64_encode(csum.as_bytes())) {
-                    custom_header.insert(header_name, header_value);
-                }
+                custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
             } else {
                 warn!("Invalid header name: {}", opts.auto_checksum.key());
             }
@@ -334,8 +322,8 @@ impl TransitionClient {
             if opts.send_content_md5 {
                 let mut md5_hasher = clone_self.md5_hasher.lock().unwrap();
                 let md5_hash = md5_hasher.as_mut().expect("err");
-                md5_hash.write(&buf[..length]);
-                md5_base64 = base64_encode(md5_hash.sum().as_bytes());
+                let hash = md5_hash.hash_encode(&buf[..length]);
+                md5_base64 = base64_encode(hash.as_ref());
             }

             //defer wg.Done()
@@ -1,4 +1,3 @@
-#![allow(clippy::map_entry)]
 // Copyright 2024 RustFS Team
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
@@ -21,6 +20,7 @@

 use bytes::Bytes;
 use http::{HeaderMap, HeaderValue, Method, StatusCode};
+use rustfs_utils::{HashAlgorithm, crypto::base64_encode};
 use s3s::S3ErrorCode;
 use s3s::dto::ReplicationStatus;
 use s3s::header::X_AMZ_BYPASS_GOVERNANCE_RETENTION;
@@ -38,7 +38,6 @@ use crate::{
     store_api::{GetObjectReader, ObjectInfo, StorageAPI},
 };
 use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
-use rustfs_utils::hasher::{sum_md5_base64, sum_sha256_hex};

 pub struct RemoveBucketOptions {
     _forced_elete: bool,
@@ -330,8 +329,8 @@ impl TransitionClient {
                 query_values: url_values.clone(),
                 content_body: ReaderImpl::Body(Bytes::from(remove_bytes.clone())),
                 content_length: remove_bytes.len() as i64,
-                content_md5_base64: sum_md5_base64(&remove_bytes),
-                content_sha256_hex: sum_sha256_hex(&remove_bytes),
+                content_md5_base64: base64_encode(HashAlgorithm::Md5.hash_encode(&remove_bytes).as_ref()),
+                content_sha256_hex: hex_simd::encode_to_string(HashAlgorithm::SHA256.hash_encode(&remove_bytes).as_ref(), hex_simd::AsciiCase::Lower),
                 custom_header: headers,
                 object_name: "".to_string(),
                 stream_sha256: false,
crates/ecstore/src/client/api_restore.rs (new file, 172 lines)
@@ -0,0 +1,172 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]

use bytes::Bytes;
use http::HeaderMap;
use std::collections::HashMap;
use std::io::Cursor;
use tokio::io::BufReader;

use crate::client::{
    api_error_response::{err_invalid_argument, http_resp_to_error_response},
    api_get_object_acl::AccessControlList,
    api_get_options::GetObjectOptions,
    transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};

const TIER_STANDARD: &str = "Standard";
const TIER_BULK: &str = "Bulk";
const TIER_EXPEDITED: &str = "Expedited";

#[derive(Debug, Default, serde::Serialize)]
pub struct GlacierJobParameters {
    pub tier: String,
}

#[derive(Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Encryption {
    pub encryption_type: String,
    pub kms_context: String,
    pub kms_key_id: String,
}

#[derive(Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct MetadataEntry {
    pub name: String,
    pub value: String,
}

#[derive(Debug, Default, serde::Serialize)]
pub struct S3 {
    pub access_control_list: AccessControlList,
    pub bucket_name: String,
    pub prefix: String,
    pub canned_acl: String,
    pub encryption: Encryption,
    pub storage_class: String,
    //tagging: Tags,
    pub user_metadata: MetadataEntry,
}

#[derive(Debug, Default, serde::Serialize)]
pub struct SelectParameters {
    pub expression_type: String,
    pub expression: String,
    //input_serialization: SelectObjectInputSerialization,
    //output_serialization: SelectObjectOutputSerialization,
}

#[derive(Debug, Default, serde::Serialize)]
pub struct OutputLocation(pub S3);

#[derive(Debug, Default, serde::Serialize)]
pub struct RestoreRequest {
    pub restore_type: String,
    pub tier: String,
    pub days: i64,
    pub glacier_job_parameters: GlacierJobParameters,
    pub description: String,
    pub select_parameters: SelectParameters,
    pub output_location: OutputLocation,
}

impl RestoreRequest {
    fn set_days(&mut self, v: i64) {
        self.days = v;
    }

    fn set_glacier_job_parameters(&mut self, v: GlacierJobParameters) {
        self.glacier_job_parameters = v;
    }

    fn set_type(&mut self, v: &str) {
        self.restore_type = v.to_string();
    }

    fn set_tier(&mut self, v: &str) {
        self.tier = v.to_string();
    }

    fn set_description(&mut self, v: &str) {
        self.description = v.to_string();
    }

    fn set_select_parameters(&mut self, v: SelectParameters) {
        self.select_parameters = v;
    }

    fn set_output_location(&mut self, v: OutputLocation) {
        self.output_location = v;
    }
}

impl TransitionClient {
    pub async fn restore_object(
        &self,
        bucket_name: &str,
        object_name: &str,
        version_id: &str,
        restore_req: &RestoreRequest,
    ) -> Result<(), std::io::Error> {
        let restore_request = match serde_xml_rs::to_string(restore_req) {
            Ok(buf) => buf,
            Err(e) => {
                return Err(std::io::Error::other(e));
            }
        };
        let restore_request_bytes = restore_request.as_bytes().to_vec();

        let mut url_values = HashMap::new();
        url_values.insert("restore".to_string(), "".to_string());
        if !version_id.is_empty() {
            url_values.insert("versionId".to_string(), version_id.to_string());
        }

        let restore_request_buffer = Bytes::from(restore_request_bytes.clone());
        let resp = self
            .execute_method(
                // S3 RestoreObject is a POST request; the draft used HEAD.
                http::Method::POST,
                &mut RequestMetadata {
                    bucket_name: bucket_name.to_string(),
                    object_name: object_name.to_string(),
                    query_values: url_values,
                    custom_header: HeaderMap::new(),
                    content_sha256_hex: "".to_string(), //sum_sha256_hex(&restore_request_bytes),
                    content_md5_base64: "".to_string(), //sum_md5_base64(&restore_request_bytes),
                    content_body: ReaderImpl::Body(restore_request_buffer),
                    content_length: restore_request_bytes.len() as i64,
                    stream_sha256: false,
                    trailer: HeaderMap::new(),
                    pre_sign_url: Default::default(),
                    add_crc: Default::default(),
                    extra_pre_sign_header: Default::default(),
                    bucket_location: Default::default(),
                    expires: Default::default(),
                },
            )
            .await?;

        let b = resp.body().bytes().expect("err").to_vec();
        if resp.status() != http::StatusCode::ACCEPTED && resp.status() != http::StatusCode::OK {
            return Err(std::io::Error::other(http_resp_to_error_response(resp, b, bucket_name, "")));
        }
        Ok(())
    }
}
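Editor's note: a hedged sketch of driving the new `restore_object`. The bucket and key are placeholders, and the one-day restore window is arbitrary; only the types defined in the file above are used:

```rust
// Hypothetical restore call built on the RestoreRequest types above.
async fn thaw(client: &TransitionClient) -> Result<(), std::io::Error> {
    let req = RestoreRequest {
        days: 1,
        tier: TIER_STANDARD.to_string(),
        glacier_job_parameters: GlacierJobParameters {
            tier: TIER_STANDARD.to_string(),
        },
        ..Default::default()
    };
    client.restore_object("my-bucket", "archive/object.bin", "", &req).await
}
```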
@@ -1,4 +1,3 @@
-#![allow(clippy::map_entry)]
 // Copyright 2024 RustFS Team
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
crates/ecstore/src/client/api_stat.rs (new file, 166 lines)
@@ -0,0 +1,166 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]

use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
use std::{collections::HashMap, str::FromStr};
use tokio::io::BufReader;
use uuid::Uuid;

use crate::client::{
    api_error_response::{ErrorResponse, err_invalid_argument, http_resp_to_error_response},
    api_get_options::GetObjectOptions,
    transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
use s3s::header::{X_AMZ_DELETE_MARKER, X_AMZ_VERSION_ID};

impl TransitionClient {
    pub async fn bucket_exists(&self, bucket_name: &str) -> Result<bool, std::io::Error> {
        let resp = self
            .execute_method(
                http::Method::HEAD,
                &mut RequestMetadata {
                    bucket_name: bucket_name.to_string(),
                    object_name: "".to_string(),
                    query_values: HashMap::new(),
                    custom_header: HeaderMap::new(),
                    content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
                    content_md5_base64: "".to_string(),
                    content_body: ReaderImpl::Body(Bytes::new()),
                    content_length: 0,
                    stream_sha256: false,
                    trailer: HeaderMap::new(),
                    pre_sign_url: Default::default(),
                    add_crc: Default::default(),
                    extra_pre_sign_header: Default::default(),
                    bucket_location: Default::default(),
                    expires: Default::default(),
                },
            )
            .await;

        if let Ok(resp) = resp {
            let b = resp.body().bytes().expect("err").to_vec();
            let _resperr = http_resp_to_error_response(resp, b, bucket_name, "");
            /*if to_error_response(resperr).code == "NoSuchBucket" {
                return Ok(false);
            }
            if resp.status_code() != http::StatusCode::OK {
                return Ok(false);
            }*/
        }
        Ok(true)
    }

    pub async fn stat_object(
        &self,
        bucket_name: &str,
        object_name: &str,
        opts: &GetObjectOptions,
    ) -> Result<ObjectInfo, std::io::Error> {
        let mut headers = opts.header();
        if opts.internal.replication_delete_marker {
            headers.insert("X-Source-DeleteMarker", HeaderValue::from_str("true").unwrap());
        }
        if opts.internal.is_replication_ready_for_delete_marker {
            headers.insert("X-Check-Replication-Ready", HeaderValue::from_str("true").unwrap());
        }

        let resp = self
            .execute_method(
                http::Method::HEAD,
                &mut RequestMetadata {
                    bucket_name: bucket_name.to_string(),
                    object_name: object_name.to_string(),
                    query_values: opts.to_query_values(),
                    custom_header: headers,
                    content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
                    content_md5_base64: "".to_string(),
                    content_body: ReaderImpl::Body(Bytes::new()),
                    content_length: 0,
                    stream_sha256: false,
                    trailer: HeaderMap::new(),
                    pre_sign_url: Default::default(),
                    add_crc: Default::default(),
                    extra_pre_sign_header: Default::default(),
                    bucket_location: Default::default(),
                    expires: Default::default(),
                },
            )
            .await;

        match resp {
            Ok(resp) => {
                let h = resp.headers();
                let delete_marker = if let Some(x_amz_delete_marker) = h.get(X_AMZ_DELETE_MARKER.as_str()) {
                    x_amz_delete_marker.to_str().unwrap() == "true"
                } else {
                    false
                };
                let replication_ready = if let Some(x_replication_ready) = h.get("X-Replication-Ready") {
                    x_replication_ready.to_str().unwrap() == "true"
                } else {
                    false
                };
                if resp.status() != http::StatusCode::OK && resp.status() != http::StatusCode::PARTIAL_CONTENT {
                    if resp.status() == http::StatusCode::METHOD_NOT_ALLOWED && !opts.version_id.is_empty() && delete_marker {
                        let err_resp = ErrorResponse {
                            status_code: resp.status(),
                            code: s3s::S3ErrorCode::MethodNotAllowed,
                            message: "The specified method is not allowed against this resource.".to_string(),
                            bucket_name: bucket_name.to_string(),
                            key: object_name.to_string(),
                            ..Default::default()
                        };
                        // Guard the version header lookup so a missing header becomes a
                        // parse error instead of a panic.
                        return Ok(ObjectInfo {
                            version_id: match Uuid::from_str(h.get(X_AMZ_VERSION_ID).and_then(|v| v.to_str().ok()).unwrap_or_default()) {
                                Ok(v) => v,
                                Err(e) => {
                                    return Err(std::io::Error::other(e));
                                }
                            },
                            is_delete_marker: delete_marker,
                            ..Default::default()
                        });
                        //err_resp
                    }
                    return Ok(ObjectInfo {
                        version_id: match Uuid::from_str(h.get(X_AMZ_VERSION_ID).and_then(|v| v.to_str().ok()).unwrap_or_default()) {
                            Ok(v) => v,
                            Err(e) => {
                                return Err(std::io::Error::other(e));
                            }
                        },
                        is_delete_marker: delete_marker,
                        replication_ready,
                        ..Default::default()
                    });
                    //http_resp_to_error_response(resp, bucket_name, object_name)
                }

                Ok(to_object_info(bucket_name, object_name, h).unwrap())
            }
            Err(err) => Err(std::io::Error::other(err)),
        }
    }
}
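Editor's note: a hedged probe built on the two new helpers above; the names are placeholders and `GetObjectOptions::default()` is an assumption not shown in this diff:

```rust
// Hypothetical caller of bucket_exists/stat_object.
async fn probe(client: &TransitionClient) -> Result<(), std::io::Error> {
    if client.bucket_exists("my-bucket").await? {
        let info = client.stat_object("my-bucket", "my-object", &GetObjectOptions::default()).await?;
        println!("delete marker: {}", info.is_delete_marker);
    }
    Ok(())
}
```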
@@ -1,4 +1,3 @@
-#![allow(clippy::map_entry)]
 // Copyright 2024 RustFS Team
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
@@ -31,7 +30,6 @@ use crate::client::{
     transition_api::{Document, TransitionClient},
 };
 use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
-use rustfs_utils::hasher::{Hasher, Sha256};
 use s3s::Body;
 use s3s::S3ErrorCode;
@@ -125,9 +123,11 @@ impl TransitionClient {
             url_str = target_url.to_string();
         }

-        let mut req_builder = Request::builder().method(http::Method::GET).uri(url_str);
+        let Ok(mut req) = Request::builder().method(http::Method::GET).uri(url_str).body(Body::empty()) else {
+            return Err(std::io::Error::other("create request error"));
+        };

-        self.set_user_agent(&mut req_builder);
+        self.set_user_agent(&mut req);

         let value;
         {
@@ -154,22 +154,12 @@ impl TransitionClient {
         }

         if signer_type == SignatureType::SignatureAnonymous {
-            let req = match req_builder.body(Body::empty()) {
-                Ok(req) => return Ok(req),
-                Err(err) => {
-                    return Err(std::io::Error::other(err));
-                }
-            };
+            return Ok(req);
         }

         if signer_type == SignatureType::SignatureV2 {
-            let req_builder = rustfs_signer::sign_v2(req_builder, 0, &access_key_id, &secret_access_key, is_virtual_style);
-            let req = match req_builder.body(Body::empty()) {
-                Ok(req) => return Ok(req),
-                Err(err) => {
-                    return Err(std::io::Error::other(err));
-                }
-            };
+            let req = rustfs_signer::sign_v2(req, 0, &access_key_id, &secret_access_key, is_virtual_style);
             return Ok(req);
         }

         let mut content_sha256 = EMPTY_STRING_SHA256_HASH.to_string();
@@ -177,17 +167,10 @@ impl TransitionClient {
             content_sha256 = UNSIGNED_PAYLOAD.to_string();
         }

-        req_builder
-            .headers_mut()
-            .expect("err")
+        req.headers_mut()
             .insert("X-Amz-Content-Sha256", content_sha256.parse().unwrap());
-        let req_builder = rustfs_signer::sign_v4(req_builder, 0, &access_key_id, &secret_access_key, &session_token, "us-east-1");
-        let req = match req_builder.body(Body::empty()) {
-            Ok(req) => return Ok(req),
-            Err(err) => {
-                return Err(std::io::Error::other(err));
-            }
-        };
+        let req = rustfs_signer::sign_v4(req, 0, &access_key_id, &secret_access_key, &session_token, "us-east-1");
         Ok(req)
     }
 }
@@ -16,6 +16,9 @@ pub mod admin_handler_utils;
 pub mod api_bucket_policy;
 pub mod api_error_response;
 pub mod api_get_object;
+pub mod api_get_object_acl;
+pub mod api_get_object_attributes;
+pub mod api_get_object_file;
 pub mod api_get_options;
 pub mod api_list;
 pub mod api_put_object;
@@ -23,7 +26,9 @@ pub mod api_put_object_common;
 pub mod api_put_object_multipart;
 pub mod api_put_object_streaming;
 pub mod api_remove;
+pub mod api_restore;
 pub mod api_s3_datatypes;
+pub mod api_stat;
 pub mod bucket_cache;
 pub mod constants;
 pub mod credentials;
@@ -1,4 +1,3 @@
-#![allow(clippy::map_entry)]
 // Copyright 2024 RustFS Team
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
@@ -28,8 +27,12 @@ use http::{
 };
 use hyper_rustls::{ConfigBuilderExt, HttpsConnector};
 use hyper_util::{client::legacy::Client, client::legacy::connect::HttpConnector, rt::TokioExecutor};
+use md5::Digest;
+use md5::Md5;
 use rand::Rng;
+use rustfs_utils::HashAlgorithm;
 use serde::{Deserialize, Serialize};
+use sha2::Sha256;
 use std::io::Cursor;
 use std::pin::Pin;
 use std::sync::atomic::{AtomicI32, Ordering};
@@ -60,7 +63,6 @@ use crate::client::{
 };
 use crate::{checksum::ChecksumMode, store_api::GetObjectReader};
 use rustfs_rio::HashReader;
-use rustfs_utils::hasher::{MD5, Sha256};
 use rustfs_utils::{
     net::get_endpoint_url,
     retry::{MAX_RETRY, new_retry_timer},
@@ -69,7 +71,6 @@ use s3s::S3ErrorCode;
 use s3s::dto::ReplicationStatus;
 use s3s::{Body, dto::Owner};

-const _C_USER_AGENT_PREFIX: &str = "RustFS (linux; x86)";
 const C_USER_AGENT: &str = "RustFS (linux; x86)";

 const SUCCESS_STATUS: [StatusCode; 3] = [StatusCode::OK, StatusCode::NO_CONTENT, StatusCode::PARTIAL_CONTENT];
@@ -90,22 +91,18 @@ pub struct TransitionClient {
     pub endpoint_url: Url,
     pub creds_provider: Arc<Mutex<Credentials<Static>>>,
     pub override_signer_type: SignatureType,
-    /*app_info: TODO*/
     pub secure: bool,
     pub http_client: Client<HttpsConnector<HttpConnector>, Body>,
-    //pub http_trace: Httptrace.ClientTrace,
     pub bucket_loc_cache: Arc<Mutex<BucketLocationCache>>,
     pub is_trace_enabled: Arc<Mutex<bool>>,
     pub trace_errors_only: Arc<Mutex<bool>>,
-    //pub trace_output: io.Writer,
     pub s3_accelerate_endpoint: Arc<Mutex<String>>,
     pub s3_dual_stack_enabled: Arc<Mutex<bool>>,
     pub region: String,
     pub random: u64,
     pub lookup: BucketLookupType,
-    //pub lookupFn: func(u url.URL, bucketName string) BucketLookupType,
-    pub md5_hasher: Arc<Mutex<Option<MD5>>>,
-    pub sha256_hasher: Option<Sha256>,
+    pub md5_hasher: Arc<Mutex<Option<HashAlgorithm>>>,
+    pub sha256_hasher: Option<HashAlgorithm>,
     pub health_status: AtomicI32,
     pub trailing_header_support: bool,
     pub max_retries: i64,
@@ -115,15 +112,11 @@ pub struct TransitionClient {
 pub struct Options {
     pub creds: Credentials<Static>,
     pub secure: bool,
-    //pub transport: http.RoundTripper,
-    //pub trace: *httptrace.ClientTrace,
     pub region: String,
     pub bucket_lookup: BucketLookupType,
-    //pub custom_region_via_url: func(u url.URL) string,
-    //pub bucket_lookup_via_url: func(u url.URL, bucketName string) BucketLookupType,
     pub trailing_headers: bool,
-    pub custom_md5: Option<MD5>,
-    pub custom_sha256: Option<Sha256>,
+    pub custom_md5: Option<HashAlgorithm>,
+    pub custom_sha256: Option<HashAlgorithm>,
     pub max_retries: i64,
 }
@@ -145,8 +138,6 @@ impl TransitionClient {
     async fn private_new(endpoint: &str, opts: Options) -> Result<TransitionClient, std::io::Error> {
         let endpoint_url = get_endpoint_url(endpoint, opts.secure)?;

-        //let jar = cookiejar.New(cookiejar.Options{PublicSuffixList: publicsuffix.List})?;
-
         //#[cfg(feature = "ring")]
         //let _ = rustls::crypto::ring::default_provider().install_default();
         //#[cfg(feature = "aws-lc-rs")]
@@ -154,9 +145,6 @@ impl TransitionClient {

         let scheme = endpoint_url.scheme();
         let client;
-        //if scheme == "https" {
-        //    client = Client::builder(TokioExecutor::new()).build_http();
-        //} else {
         let tls = rustls::ClientConfig::builder().with_native_roots()?.with_no_client_auth();
         let https = hyper_rustls::HttpsConnectorBuilder::new()
             .with_tls_config(tls)
@@ -164,7 +152,6 @@ impl TransitionClient {
             .enable_http1()
             .build();
         client = Client::builder(TokioExecutor::new()).build(https);
-        //}

         let mut clnt = TransitionClient {
             endpoint_url,
@@ -190,11 +177,11 @@ impl TransitionClient {
         {
             let mut md5_hasher = clnt.md5_hasher.lock().unwrap();
             if md5_hasher.is_none() {
-                *md5_hasher = Some(MD5::new());
+                *md5_hasher = Some(HashAlgorithm::Md5);
             }
         }
         if clnt.sha256_hasher.is_none() {
-            clnt.sha256_hasher = Some(Sha256::new());
+            clnt.sha256_hasher = Some(HashAlgorithm::SHA256);
         }

         clnt.trailing_header_support = opts.trailing_headers && clnt.override_signer_type == SignatureType::SignatureV4;
@@ -210,13 +197,6 @@ impl TransitionClient {
         self.endpoint_url.clone()
     }

-    fn set_appinfo(&self, app_name: &str, app_version: &str) {
-        /*if app_name != "" && app_version != "" {
-            self.appInfo.app_name = app_name
-            self.appInfo.app_version = app_version
-        }*/
-    }
-
     fn trace_errors_only_off(&self) {
         let mut trace_errors_only = self.trace_errors_only.lock().unwrap();
         *trace_errors_only = false;
@@ -241,8 +221,8 @@ impl TransitionClient {
         &self,
         is_md5_requested: bool,
         is_sha256_requested: bool,
-    ) -> (HashMap<String, MD5>, HashMap<String, Vec<u8>>) {
-        todo!();
+    ) -> (HashMap<String, HashAlgorithm>, HashMap<String, Vec<u8>>) {
+        todo!()
     }

     fn is_online(&self) -> bool {
@@ -265,6 +245,7 @@ impl TransitionClient {
     fn dump_http(&self, req: &http::Request<Body>, resp: &http::Response<Body>) -> Result<(), std::io::Error> {
         let mut resp_trace: Vec<u8>;

         //info!("{}{}", self.trace_output, "---------BEGIN-HTTP---------");
+        //info!("{}{}", self.trace_output, "---------END-HTTP---------");

         Ok(())
@@ -335,7 +316,7 @@ impl TransitionClient {
         //let mut retry_timer = RetryTimer::new();
         //while let Some(v) = retry_timer.next().await {
         for _ in [1; 1]
-        /*new_retry_timer(req_retry, DefaultRetryUnit, DefaultRetryCap, MaxJitter)*/
+        /*new_retry_timer(req_retry, default_retry_unit, default_retry_cap, max_jitter)*/
         {
             let req = self.new_request(method, metadata).await?;
@@ -406,7 +387,13 @@ impl TransitionClient {
             &metadata.query_values,
         )?;

-        let mut req_builder = Request::builder().method(method).uri(target_url.to_string());
+        let Ok(mut req) = Request::builder()
+            .method(method)
+            .uri(target_url.to_string())
+            .body(Body::empty())
+        else {
+            return Err(std::io::Error::other("create request error"));
+        };

         let value;
         {
@@ -430,30 +417,25 @@ impl TransitionClient {
         if metadata.expires != 0 && metadata.pre_sign_url {
             if signer_type == SignatureType::SignatureAnonymous {
                 return Err(std::io::Error::other(err_invalid_argument(
-                    "Presigned URLs cannot be generated with anonymous credentials.",
+                    "presigned urls cannot be generated with anonymous credentials.",
                 )));
             }
             if metadata.extra_pre_sign_header.is_some() {
                 if signer_type == SignatureType::SignatureV2 {
                     return Err(std::io::Error::other(err_invalid_argument(
-                        "Extra signed headers for Presign with Signature V2 is not supported.",
+                        "extra signed headers for presign with signature v2 is not supported.",
                     )));
                 }
+                let headers = req.headers_mut();
                 for (k, v) in metadata.extra_pre_sign_header.as_ref().unwrap() {
-                    req_builder = req_builder.header(k, v);
+                    headers.insert(k, v.clone());
                 }
             }
             if signer_type == SignatureType::SignatureV2 {
-                req_builder = rustfs_signer::pre_sign_v2(
-                    req_builder,
-                    &access_key_id,
-                    &secret_access_key,
-                    metadata.expires,
-                    is_virtual_host,
-                );
+                req = rustfs_signer::pre_sign_v2(req, &access_key_id, &secret_access_key, metadata.expires, is_virtual_host);
             } else if signer_type == SignatureType::SignatureV4 {
-                req_builder = rustfs_signer::pre_sign_v4(
-                    req_builder,
+                req = rustfs_signer::pre_sign_v4(
+                    req,
                     &access_key_id,
                     &secret_access_key,
                     &session_token,
@@ -462,57 +444,38 @@ impl TransitionClient {
                     OffsetDateTime::now_utc(),
                 );
             }
-            let req = match req_builder.body(Body::empty()) {
-                Ok(req) => req,
-                Err(err) => {
-                    return Err(std::io::Error::other(err));
-                }
-            };
             return Ok(req);
         }

-        self.set_user_agent(&mut req_builder);
+        self.set_user_agent(&mut req);

         for (k, v) in metadata.custom_header.clone() {
-            req_builder.headers_mut().expect("err").insert(k.expect("err"), v);
+            req.headers_mut().insert(k.expect("err"), v);
         }

         //req.content_length = metadata.content_length;
         if metadata.content_length <= -1 {
             let chunked_value = HeaderValue::from_str(&vec!["chunked"].join(",")).expect("err");
-            req_builder
-                .headers_mut()
-                .expect("err")
-                .insert(http::header::TRANSFER_ENCODING, chunked_value);
+            req.headers_mut().insert(http::header::TRANSFER_ENCODING, chunked_value);
         }

         if metadata.content_md5_base64.len() > 0 {
             let md5_value = HeaderValue::from_str(&metadata.content_md5_base64).expect("err");
-            req_builder.headers_mut().expect("err").insert("Content-Md5", md5_value);
+            req.headers_mut().insert("Content-Md5", md5_value);
         }

         if signer_type == SignatureType::SignatureAnonymous {
-            let req = match req_builder.body(Body::empty()) {
-                Ok(req) => req,
-                Err(err) => {
-                    return Err(std::io::Error::other(err));
-                }
-            };
             return Ok(req);
         }

         if signer_type == SignatureType::SignatureV2 {
-            req_builder =
-                rustfs_signer::sign_v2(req_builder, metadata.content_length, &access_key_id, &secret_access_key, is_virtual_host);
+            req = rustfs_signer::sign_v2(req, metadata.content_length, &access_key_id, &secret_access_key, is_virtual_host);
         } else if metadata.stream_sha256 && !self.secure {
             if metadata.trailer.len() > 0 {
                 //req.Trailer = metadata.trailer;
                 for (_, v) in &metadata.trailer {
-                    req_builder = req_builder.header(http::header::TRAILER, v.clone());
+                    req.headers_mut().insert(http::header::TRAILER, v.clone());
                 }
             }
             //req_builder = rustfs_signer::streaming_sign_v4(req_builder, &access_key_id,
             //    &secret_access_key, &session_token, &location, metadata.content_length, OffsetDateTime::now_utc(), self.sha256_hasher());
         } else {
             let mut sha_header = UNSIGNED_PAYLOAD.to_string();
             if metadata.content_sha256_hex != "" {
@@ -523,11 +486,11 @@ impl TransitionClient {
             } else if metadata.trailer.len() > 0 {
                 sha_header = UNSIGNED_PAYLOAD_TRAILER.to_string();
             }
-            req_builder = req_builder
-                .header::<HeaderName, HeaderValue>("X-Amz-Content-Sha256".parse().unwrap(), sha_header.parse().expect("err"));
+            req.headers_mut()
+                .insert("X-Amz-Content-Sha256".parse::<HeaderName>().unwrap(), sha_header.parse().expect("err"));

-            req_builder = rustfs_signer::sign_v4_trailer(
-                req_builder,
+            req = rustfs_signer::sign_v4_trailer(
+                req,
                 &access_key_id,
                 &secret_access_key,
                 &session_token,
@@ -536,33 +499,23 @@ impl TransitionClient {
             );
         }

-        let req;
-        if metadata.content_length == 0 {
-            req = req_builder.body(Body::empty());
-        } else {
+        if metadata.content_length > 0 {
             match &mut metadata.content_body {
                 ReaderImpl::Body(content_body) => {
-                    req = req_builder.body(Body::from(content_body.clone()));
+                    *req.body_mut() = Body::from(content_body.clone());
                 }
                 ReaderImpl::ObjectBody(content_body) => {
-                    req = req_builder.body(Body::from(content_body.read_all().await?));
+                    *req.body_mut() = Body::from(content_body.read_all().await?);
                 }
             }
             //req = req_builder.body(s3s::Body::from(metadata.content_body.read_all().await?));
         }

-        match req {
-            Ok(req) => Ok(req),
-            Err(err) => Err(std::io::Error::other(err)),
-        }
+        Ok(req)
     }

-    pub fn set_user_agent(&self, req: &mut Builder) {
-        let headers = req.headers_mut().expect("err");
+    pub fn set_user_agent(&self, req: &mut Request<Body>) {
+        let headers = req.headers_mut();
         headers.insert("User-Agent", C_USER_AGENT.parse().expect("err"));
         /*if self.app_info.app_name != "" && self.app_info.app_version != "" {
             headers.insert("User-Agent", C_USER_AGENT+" "+self.app_info.app_name+"/"+self.app_info.app_version);
         }*/
     }

     fn make_target_url(
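Editor's note: the hunks above replace `http::request::Builder` (whose `headers_mut()` returns an `Option`) with a `Request<Body>` built once and then mutated in place; the signing helpers now take and return the request itself. A reduced, self-contained sketch of that shape, with `Vec<u8>` standing in for the `s3s::Body` used by the real client:

```rust
use http::Request;

// Reduced form of the Builder -> Request<B> refactor: construct the request
// a single time, then mutate headers and body directly.
fn build_request(uri: &str) -> Result<Request<Vec<u8>>, std::io::Error> {
    let Ok(mut req) = Request::builder().method(http::Method::GET).uri(uri).body(Vec::new()) else {
        return Err(std::io::Error::other("create request error"));
    };
    req.headers_mut()
        .insert("User-Agent", "RustFS (linux; x86)".parse().expect("valid header value"));
    *req.body_mut() = b"payload".to_vec();
    Ok(req)
}
```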
@@ -945,7 +898,7 @@ pub struct ObjectMultipartInfo {
     pub key: String,
     pub size: i64,
     pub upload_id: String,
-    //pub err error,
+    //pub err: Error,
 }

 pub struct UploadInfo {
@@ -178,6 +178,16 @@ pub async fn remove_bucket_target(bucket: &str, arn_str: &str) {
     }
 }

+pub async fn list_bucket_targets(bucket: &str) -> Result<BucketTargets, BucketRemoteTargetNotFound> {
+    if let Some(sys) = GLOBAL_Bucket_Target_Sys.get() {
+        sys.list_bucket_targets(bucket).await
+    } else {
+        Err(BucketRemoteTargetNotFound {
+            bucket: bucket.to_string(),
+        })
+    }
+}
+
 impl Default for BucketTargetSys {
     fn default() -> Self {
         Self::new()
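Editor's note: a hedged sketch of calling the new `list_bucket_targets` helper; the bucket name is a placeholder, and the `Debug` formatting of `BucketTargets` is an assumption not confirmed by this diff:

```rust
// Hypothetical caller of the new free function above.
async fn show_targets() {
    match list_bucket_targets("my-bucket").await {
        Ok(targets) => println!("targets: {targets:?}"), // assumes BucketTargets: Debug
        Err(not_found) => eprintln!("no remote targets configured for {}", not_found.bucket),
    }
}
```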
@@ -145,8 +145,8 @@ impl Debug for LocalDisk {
 impl LocalDisk {
     pub async fn new(ep: &Endpoint, cleanup: bool) -> Result<Self> {
         debug!("Creating local disk");
-        let root = match fs::canonicalize(ep.get_file_path()).await {
-            Ok(path) => path,
+        let root = match PathBuf::from(ep.get_file_path()).absolutize() {
+            Ok(path) => path.into_owned(),
             Err(e) => {
                 if e.kind() == ErrorKind::NotFound {
                     return Err(DiskError::VolumeNotFound);
@@ -28,7 +28,7 @@
 //! ## Example
 //!
 //! ```rust
-//! use ecstore::erasure_coding::Erasure;
+//! use rustfs_ecstore::erasure_coding::Erasure;
 //!
 //! let erasure = Erasure::new(4, 2, 1024); // 4 data shards, 2 parity shards, 1KB block size
 //! let data = b"hello world";
@@ -263,7 +263,7 @@ impl ReedSolomonEncoder {
 ///
 /// # Example
 /// ```
-/// use ecstore::erasure_coding::Erasure;
+/// use rustfs_ecstore::erasure_coding::Erasure;
 /// let erasure = Erasure::new(4, 2, 8);
 /// let data = b"hello world";
 /// let shards = erasure.encode_data(data).unwrap();
@@ -30,6 +30,7 @@ use std::{
     time::SystemTime,
 };
 use tokio::sync::{OnceCell, RwLock};
+use tokio_util::sync::CancellationToken;
 use uuid::Uuid;

 pub const DISK_ASSUME_UNKNOWN_SIZE: u64 = 1 << 30;
@@ -62,7 +63,12 @@ static ref globalDeploymentIDPtr: OnceLock<Uuid> = OnceLock::new();
 pub static ref GLOBAL_BOOT_TIME: OnceCell<SystemTime> = OnceCell::new();
 pub static ref GLOBAL_LocalNodeName: String = "127.0.0.1:9000".to_string();
 pub static ref GLOBAL_LocalNodeNameHex: String = rustfs_utils::crypto::hex(GLOBAL_LocalNodeName.as_bytes());
-pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();}
+pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();
+pub static ref GLOBAL_REGION: OnceLock<String> = OnceLock::new();
+}

+// Global cancellation token for background services (data scanner and auto heal)
+static GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();

 static GLOBAL_ACTIVE_CRED: OnceLock<Credentials> = OnceLock::new();
@@ -182,3 +188,35 @@ pub async fn update_erasure_type(setup_type: SetupType) {
 // }

 type TypeLocalDiskSetDrives = Vec<Vec<Vec<Option<DiskStore>>>>;
+
+pub fn set_global_region(region: String) {
+    GLOBAL_REGION.set(region).unwrap();
+}
+
+pub fn get_global_region() -> Option<String> {
+    GLOBAL_REGION.get().cloned()
+}
+
+/// Initialize the global background services cancellation token
+pub fn init_background_services_cancel_token(cancel_token: CancellationToken) -> Result<(), CancellationToken> {
+    GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.set(cancel_token)
+}
+
+/// Get the global background services cancellation token
+pub fn get_background_services_cancel_token() -> Option<&'static CancellationToken> {
+    GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.get()
+}
+
+/// Create and initialize the global background services cancellation token
+pub fn create_background_services_cancel_token() -> CancellationToken {
+    let cancel_token = CancellationToken::new();
+    init_background_services_cancel_token(cancel_token.clone()).expect("Background services cancel token already initialized");
+    cancel_token
+}
+
+/// Shutdown all background services gracefully
+pub fn shutdown_background_services() {
+    if let Some(cancel_token) = GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.get() {
+        cancel_token.cancel();
+    }
+}
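Editor's note: the intended lifecycle of the new globals above — create the token once at startup, hand clones to background tasks, cancel everything on shutdown. A small sketch using only the functions added in this hunk:

```rust
use tokio_util::sync::CancellationToken;

// Lifecycle sketch for the background-services cancellation token.
async fn lifecycle_sketch() {
    let token: CancellationToken = create_background_services_cancel_token();

    let worker = token.clone();
    tokio::spawn(async move {
        worker.cancelled().await;
        // flush state and exit gracefully
    });

    // Later, e.g. from a SIGTERM handler:
    shutdown_background_services();
    assert!(get_background_services_cancel_token().unwrap().is_cancelled());
}
```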
@@ -24,6 +24,7 @@ use tokio::{
     },
     time::interval,
 };
+use tokio_util::sync::CancellationToken;
 use tracing::{error, info};
 use uuid::Uuid;
@@ -32,7 +33,7 @@ use super::{
     heal_ops::{HealSequence, new_bg_heal_sequence},
 };
 use crate::error::{Error, Result};
-use crate::global::GLOBAL_MRFState;
+use crate::global::{GLOBAL_MRFState, get_background_services_cancel_token};
 use crate::heal::error::ERR_RETRY_HEALING;
 use crate::heal::heal_commands::{HEAL_ITEM_BUCKET, HealScanMode};
 use crate::heal::heal_ops::{BG_HEALING_UUID, HealSource};
@@ -54,6 +55,13 @@ use crate::{
 pub static DEFAULT_MONITOR_NEW_DISK_INTERVAL: Duration = Duration::from_secs(10);

 pub async fn init_auto_heal() {
+    info!("Initializing auto heal background task");
+
+    let Some(cancel_token) = get_background_services_cancel_token() else {
+        error!("Background services cancel token not initialized");
+        return;
+    };
+
     init_background_healing().await;
     let v = env::var("_RUSTFS_AUTO_DRIVE_HEALING").unwrap_or("on".to_string());
     if v == "on" {
@@ -61,12 +69,16 @@ pub async fn init_auto_heal() {
         GLOBAL_BackgroundHealState
             .push_heal_local_disks(&get_local_disks_to_heal().await)
             .await;
-        spawn(async {
-            monitor_local_disks_and_heal().await;
+
+        let cancel_clone = cancel_token.clone();
+        spawn(async move {
+            monitor_local_disks_and_heal(cancel_clone).await;
         });
     }
-    spawn(async {
-        GLOBAL_MRFState.heal_routine().await;
+
+    let cancel_clone = cancel_token.clone();
+    spawn(async move {
+        GLOBAL_MRFState.heal_routine_with_cancel(cancel_clone).await;
     });
 }
@@ -108,50 +120,66 @@ pub async fn get_local_disks_to_heal() -> Vec<Endpoint> {
     disks_to_heal
 }

-async fn monitor_local_disks_and_heal() {
+async fn monitor_local_disks_and_heal(cancel_token: CancellationToken) {
+    info!("Auto heal monitor started");
     let mut interval = interval(DEFAULT_MONITOR_NEW_DISK_INTERVAL);

     loop {
-        interval.tick().await;
-        let heal_disks = GLOBAL_BackgroundHealState.get_heal_local_disk_endpoints().await;
-        if heal_disks.is_empty() {
-            info!("heal local disks is empty");
-            interval.reset();
-            continue;
-        }
+        tokio::select! {
+            _ = cancel_token.cancelled() => {
+                info!("Auto heal monitor received shutdown signal, exiting gracefully");
+                break;
+            }
+            _ = interval.tick() => {
+                let heal_disks = GLOBAL_BackgroundHealState.get_heal_local_disk_endpoints().await;
+                if heal_disks.is_empty() {
+                    info!("heal local disks is empty");
+                    interval.reset();
+                    continue;
+                }

-        info!("heal local disks: {:?}", heal_disks);
+                info!("heal local disks: {:?}", heal_disks);

-        let store = new_object_layer_fn().expect("errServerNotInitialized");
-        if let (_result, Some(err)) = store.heal_format(false).await.expect("heal format failed") {
-            error!("heal local disk format error: {}", err);
-            if err == Error::NoHealRequired {
-            } else {
-                info!("heal format err: {}", err.to_string());
+                let store = new_object_layer_fn().expect("errServerNotInitialized");
+                if let (_result, Some(err)) = store.heal_format(false).await.expect("heal format failed") {
+                    error!("heal local disk format error: {}", err);
+                    if err == Error::NoHealRequired {
+                    } else {
+                        info!("heal format err: {}", err.to_string());
+                        interval.reset();
+                        continue;
+                    }
+                }

+                let mut futures = Vec::new();
+                for disk in heal_disks.into_ref().iter() {
+                    let disk_clone = disk.clone();
+                    let cancel_clone = cancel_token.clone();
+                    futures.push(async move {
+                        let disk_for_cancel = disk_clone.clone();
+                        tokio::select! {
+                            _ = cancel_clone.cancelled() => {
+                                info!("Disk healing task cancelled for disk: {}", disk_for_cancel);
+                            }
+                            _ = async {
+                                GLOBAL_BackgroundHealState
+                                    .set_disk_healing_status(disk_clone.clone(), true)
+                                    .await;
+                                if heal_fresh_disk(&disk_clone).await.is_err() {
+                                    info!("heal_fresh_disk is err");
+                                    GLOBAL_BackgroundHealState
+                                        .set_disk_healing_status(disk_clone.clone(), false)
+                                        .await;
+                                }
+                                GLOBAL_BackgroundHealState.pop_heal_local_disks(&[disk_clone]).await;
+                            } => {}
+                        }
+                    });
+                }
+                let _ = join_all(futures).await;
                 interval.reset();
                 continue;
             }
         }
-
-        let mut futures = Vec::new();
-        for disk in heal_disks.into_ref().iter() {
-            let disk_clone = disk.clone();
-            futures.push(async move {
-                GLOBAL_BackgroundHealState
-                    .set_disk_healing_status(disk_clone.clone(), true)
-                    .await;
-                if heal_fresh_disk(&disk_clone).await.is_err() {
-                    info!("heal_fresh_disk is err");
-                    GLOBAL_BackgroundHealState
-                        .set_disk_healing_status(disk_clone.clone(), false)
-                        .await;
-                    return;
-                }
-                GLOBAL_BackgroundHealState.pop_heal_local_disks(&[disk_clone]).await;
-            });
-        }
-        let _ = join_all(futures).await;
-        interval.reset();
     }
 }
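Editor's note: the monitor rewrite above follows a recurring pattern in this PR — a periodic tick raced against a cancellation token inside `tokio::select!`, so shutdown never has to wait out the next interval. A distilled, self-contained form:

```rust
use std::time::Duration;
use tokio_util::sync::CancellationToken;

// Distilled shape of the rewritten monitor loop.
async fn cancellable_ticker(cancel: CancellationToken) {
    let mut interval = tokio::time::interval(Duration::from_secs(10));
    loop {
        tokio::select! {
            _ = cancel.cancelled() => break,
            _ = interval.tick() => {
                // periodic work goes here
            }
        }
    }
}
```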
@@ -30,7 +30,7 @@ use time::{self, OffsetDateTime};

 use super::{
     data_scanner_metric::{ScannerMetric, ScannerMetrics, globalScannerMetrics},
-    data_usage::{DATA_USAGE_BLOOM_NAME_PATH, store_data_usage_in_backend},
+    data_usage::{DATA_USAGE_BLOOM_NAME_PATH, DataUsageInfo, store_data_usage_in_backend},
     data_usage_cache::{DataUsageCache, DataUsageEntry, DataUsageHash},
     heal_commands::{HEAL_DEEP_SCAN, HEAL_NORMAL_SCAN, HealScanMode},
 };
@@ -50,7 +50,7 @@ use crate::{
         metadata_sys,
     },
     event_notification::{EventArgs, send_event},
-    global::GLOBAL_LocalNodeName,
+    global::{GLOBAL_LocalNodeName, get_background_services_cancel_token},
     store_api::{ObjectOptions, ObjectToDelete, StorageAPI},
 };
 use crate::{
@@ -103,7 +103,7 @@ use tokio::{
     },
     time::sleep,
 };
-use tracing::{error, info};
+use tracing::{debug, error, info};

 const DATA_SCANNER_SLEEP_PER_FOLDER: Duration = Duration::from_millis(1); // Time to wait between folders.
 const DATA_USAGE_UPDATE_DIR_CYCLES: u32 = 16; // Visit all folders every n cycles.
@@ -203,27 +203,52 @@ fn new_dynamic_sleeper(factor: f64, max_wait: Duration, is_scanner: bool) -> Dyn
 pub async fn init_data_scanner() {
     info!("Initializing data scanner background task");

+    let Some(cancel_token) = get_background_services_cancel_token() else {
+        error!("Background services cancel token not initialized");
+        return;
+    };
+
+    let cancel_clone = cancel_token.clone();
     tokio::spawn(async move {
+        info!("Data scanner background task started");
+
         loop {
-            // Run the data scanner
-            run_data_scanner().await;
+            tokio::select! {
+                _ = cancel_clone.cancelled() => {
+                    info!("Data scanner received shutdown signal, exiting gracefully");
+                    break;
+                }
+                _ = run_data_scanner_cycle() => {
+                    // Calculate randomized sleep duration
+                    let random_factor = {
+                        let mut rng = rand::rng();
+                        rng.random_range(1.0..10.0)
+                    };
+                    let base_cycle_duration = SCANNER_CYCLE.load(Ordering::SeqCst) as f64;
+                    let sleep_duration_secs = random_factor * base_cycle_duration;
+                    let sleep_duration = Duration::from_secs_f64(sleep_duration_secs);

-            // Calculate randomized sleep duration
-            // Use random factor (0.0 to 1.0) multiplied by the scanner cycle duration
-            let random_factor = {
-                let mut rng = rand::rng();
-                rng.random_range(1.0..10.0)
-            };
-            let base_cycle_duration = SCANNER_CYCLE.load(Ordering::SeqCst) as f64;
-            let sleep_duration_secs = random_factor * base_cycle_duration;
-
-            let sleep_duration = Duration::from_secs_f64(sleep_duration_secs);
-
-            info!(duration_secs = sleep_duration.as_secs(), "Data scanner sleeping before next cycle");
-
-            // Sleep with the calculated duration
-            sleep(sleep_duration).await;
+                    debug!(
+                        duration_secs = sleep_duration.as_secs(),
+                        "Data scanner sleeping before next cycle"
+                    );
+
+                    // Interruptible sleep
+                    tokio::select! {
+                        _ = cancel_clone.cancelled() => {
+                            info!("Data scanner received shutdown signal during sleep, exiting");
+                            break;
+                        }
+                        _ = sleep(sleep_duration) => {
+                            // Continue to next cycle
+                        }
+                    }
+                }
+            }
         }
+
+        info!("Data scanner background task stopped gracefully");
     });
 }
@@ -239,8 +264,8 @@ pub async fn init_data_scanner() {
 /// - Gracefully handles missing object layer
 /// - Continues operation even if individual steps fail
 /// - Logs errors appropriately without terminating the scanner
-async fn run_data_scanner() {
-    info!("Starting data scanner cycle");
+async fn run_data_scanner_cycle() {
+    debug!("Starting data scanner cycle");

     // Get the object layer, return early if not available
     let Some(store) = new_object_layer_fn() else {
@@ -248,6 +273,14 @@ async fn run_data_scanner_cycle() {
         return;
     };

+    // Check for cancellation before starting expensive operations
+    if let Some(token) = get_background_services_cancel_token() {
+        if token.is_cancelled() {
+            debug!("Scanner cancelled before starting cycle");
+            return;
+        }
+    }
+
     // Load current cycle information from persistent storage
     let buf = read_config(store.clone(), &DATA_USAGE_BLOOM_NAME_PATH)
         .await
@@ -293,7 +326,7 @@ async fn run_data_scanner_cycle() {
     }

     // Set up data usage storage channel
-    let (tx, rx) = mpsc::channel(100);
+    let (tx, rx) = mpsc::channel::<DataUsageInfo>(100);
     tokio::spawn(async move {
         let _ = store_data_usage_in_backend(rx).await;
     });
@@ -308,8 +341,8 @@ async fn run_data_scanner() {
         "Starting namespace scanner"
     );

-    // Run the namespace scanner
-    match store.clone().ns_scanner(tx, cycle_info.current as usize, scan_mode).await {
+    // Run the namespace scanner with cancellation support
+    match execute_namespace_scan(&store, tx, cycle_info.current, scan_mode).await {
         Ok(_) => {
             info!(cycle = cycle_info.current, "Namespace scanner completed successfully");

@@ -349,6 +382,27 @@ async fn run_data_scanner() {
     stop_fn(&scan_result);
 }

+/// Execute namespace scan with cancellation support
+async fn execute_namespace_scan(
+    store: &Arc<ECStore>,
+    tx: Sender<DataUsageInfo>,
+    cycle: u64,
+    scan_mode: HealScanMode,
+) -> Result<()> {
+    let cancel_token =
+        get_background_services_cancel_token().ok_or_else(|| Error::other("Background services not initialized"))?;
+
+    tokio::select! {
+        result = store.ns_scanner(tx, cycle as usize, scan_mode) => {
+            result.map_err(|e| Error::other(format!("Namespace scan failed: {e}")))
+        }
+        _ = cancel_token.cancelled() => {
+            info!("Namespace scan cancelled");
+            Err(Error::other("Scan cancelled"))
+        }
+    }
+}
+
 #[derive(Debug, Serialize, Deserialize)]
 struct BackgroundHealInfo {
     bitrot_start_time: SystemTime,
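The new `execute_namespace_scan` wraps a long-running future in `tokio::select!` so the scan can be abandoned mid-flight and both outcomes surface as a `Result`. A reduced sketch of that wrapper shape, assuming `tokio` and `tokio-util`; `long_scan` and `scan_with_cancel` are stand-ins rather than the real `ns_scanner` API:

```rust
use std::io::Error;
use std::time::Duration;
use tokio_util::sync::CancellationToken;

// Stand-in for a long-running operation such as a namespace scan.
async fn long_scan() -> Result<u64, Error> {
    tokio::time::sleep(Duration::from_secs(5)).await;
    Ok(42)
}

// Race the scan against the token; cancellation becomes an ordinary error.
async fn scan_with_cancel(cancel: &CancellationToken) -> Result<u64, Error> {
    tokio::select! {
        result = long_scan() => result,
        _ = cancel.cancelled() => Err(Error::other("scan cancelled")),
    }
}

#[tokio::main]
async fn main() {
    let cancel = CancellationToken::new();
    cancel.cancel();
    // With the token already cancelled, the error arm wins the race.
    assert!(scan_with_cancel(&cancel).await.is_err());
}
```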
@@ -404,7 +458,7 @@ async fn get_cycle_scan_mode(current_cycle: u64, bitrot_start_cycle: u64, bitrot
         return HEAL_DEEP_SCAN;
     }

-    if bitrot_start_time.duration_since(SystemTime::now()).unwrap() > bitrot_cycle {
+    if SystemTime::now().duration_since(bitrot_start_time).unwrap_or_default() > bitrot_cycle {
         return HEAL_DEEP_SCAN;
     }
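This one-line change is a genuine bug fix, not a refactor: `SystemTime::duration_since` returns `Err` whenever its argument is later than the receiver, so asking a past timestamp for the duration since `now()` always fails, and the old `.unwrap()` would panic. A small demonstration of the direction and of `unwrap_or_default()` as a clock-skew guard:

```rust
use std::time::{Duration, SystemTime};

fn main() {
    let past = SystemTime::now() - Duration::from_secs(60);

    // Old direction: past.duration_since(now) -> Err (unwrap would panic).
    assert!(past.duration_since(SystemTime::now()).is_err());

    // New direction: now.duration_since(past) -> Ok(~60s); a clock step
    // backwards degrades to Duration::ZERO instead of panicking.
    let elapsed = SystemTime::now().duration_since(past).unwrap_or_default();
    assert!(elapsed >= Duration::from_secs(60));
}
```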
@@ -741,13 +795,18 @@ impl ScannerItem {

     // Create a mutable clone if you need to modify fields
     let mut oi = oi.clone();
-    oi.replication_status = ReplicationStatusType::from(
-        oi.user_defined
-            .get("x-amz-bucket-replication-status")
-            .unwrap_or(&"PENDING".to_string()),
-    );
-    info!("apply status is: {:?}", oi.replication_status);
-    self.heal_replication(&oi, _size_s).await;
+
+    let versioned = BucketVersioningSys::prefix_enabled(&oi.bucket, &oi.name).await;
+    if versioned {
+        oi.replication_status = ReplicationStatusType::from(
+            oi.user_defined
+                .get("x-amz-bucket-replication-status")
+                .unwrap_or(&"PENDING".to_string()),
+        );
+        debug!("apply status is: {:?}", oi.replication_status);
+        self.heal_replication(&oi, _size_s).await;
+    }

     done();

     if action.delete_all() {
@@ -25,7 +25,8 @@ use std::time::Duration;
 use tokio::sync::RwLock;
 use tokio::sync::mpsc::{Receiver, Sender};
 use tokio::time::sleep;
-use tracing::error;
+use tokio_util::sync::CancellationToken;
+use tracing::{error, info};
 use uuid::Uuid;

 pub const MRF_OPS_QUEUE_SIZE: u64 = 100000;
@@ -87,56 +88,96 @@ impl MRFState {
         let _ = self.tx.send(op).await;
     }

-    pub async fn heal_routine(&self) {
+    /// Enhanced heal routine with cancellation support
+    ///
+    /// This method implements the same healing logic as the original heal_routine,
+    /// but adds proper cancellation support via CancellationToken.
+    /// The core logic remains identical to maintain compatibility.
+    pub async fn heal_routine_with_cancel(&self, cancel_token: CancellationToken) {
+        info!("MRF heal routine started with cancellation support");
+
         loop {
-            // rx used only there,
-            if let Some(op) = self.rx.write().await.recv().await {
-                if op.bucket == RUSTFS_META_BUCKET {
-                    for pattern in &*PATTERNS {
-                        if pattern.is_match(&op.object) {
-                            return;
-                        }
-                    }
-                }
-
-                let now = Utc::now();
-                if now.sub(op.queued).num_seconds() < 1 {
-                    sleep(Duration::from_secs(1)).await;
-                }
-
-                let scan_mode = if op.bitrot_scan { HEAL_DEEP_SCAN } else { HEAL_NORMAL_SCAN };
-                if op.object.is_empty() {
-                    if let Err(err) = heal_bucket(&op.bucket).await {
-                        error!("heal bucket failed, bucket: {}, err: {:?}", op.bucket, err);
-                    }
-                } else if op.versions.is_empty() {
-                    if let Err(err) =
-                        heal_object(&op.bucket, &op.object, &op.version_id.clone().unwrap_or_default(), scan_mode).await
-                    {
-                        error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
-                    }
-                } else {
-                    let vers = op.versions.len() / 16;
-                    if vers > 0 {
-                        for i in 0..vers {
-                            let start = i * 16;
-                            let end = start + 16;
-                            if let Err(err) = heal_object(
-                                &op.bucket,
-                                &op.object,
-                                &Uuid::from_slice(&op.versions[start..end]).expect("").to_string(),
-                                scan_mode,
-                            )
-                            .await
-                            {
-                                error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
-                            }
-                        }
-                    }
-                }
-            } else {
-                return;
-            }
+            tokio::select! {
+                _ = cancel_token.cancelled() => {
+                    info!("MRF heal routine received shutdown signal, exiting gracefully");
+                    break;
+                }
+                op_result = async {
+                    let mut rx_guard = self.rx.write().await;
+                    rx_guard.recv().await
+                } => {
+                    if let Some(op) = op_result {
+                        // Special path filtering (original logic)
+                        if op.bucket == RUSTFS_META_BUCKET {
+                            for pattern in &*PATTERNS {
+                                if pattern.is_match(&op.object) {
+                                    continue; // Skip this operation, continue with next
+                                }
+                            }
+                        }
+
+                        // Network reconnection delay (original logic)
+                        let now = Utc::now();
+                        if now.sub(op.queued).num_seconds() < 1 {
+                            tokio::select! {
+                                _ = cancel_token.cancelled() => {
+                                    info!("MRF heal routine cancelled during reconnection delay");
+                                    break;
+                                }
+                                _ = sleep(Duration::from_secs(1)) => {}
+                            }
+                        }
+
+                        // Core healing logic (original logic preserved)
+                        let scan_mode = if op.bitrot_scan { HEAL_DEEP_SCAN } else { HEAL_NORMAL_SCAN };
+
+                        if op.object.is_empty() {
+                            // Heal bucket (original logic)
+                            if let Err(err) = heal_bucket(&op.bucket).await {
+                                error!("heal bucket failed, bucket: {}, err: {:?}", op.bucket, err);
+                            }
+                        } else if op.versions.is_empty() {
+                            // Heal single object (original logic)
+                            if let Err(err) = heal_object(
+                                &op.bucket,
+                                &op.object,
+                                &op.version_id.clone().unwrap_or_default(),
+                                scan_mode
+                            ).await {
+                                error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
+                            }
+                        } else {
+                            // Heal multiple versions (original logic)
+                            let vers = op.versions.len() / 16;
+                            if vers > 0 {
+                                for i in 0..vers {
+                                    // Check for cancellation before each version
+                                    if cancel_token.is_cancelled() {
+                                        info!("MRF heal routine cancelled during version processing");
+                                        return;
+                                    }
+
+                                    let start = i * 16;
+                                    let end = start + 16;
+                                    if let Err(err) = heal_object(
+                                        &op.bucket,
+                                        &op.object,
+                                        &Uuid::from_slice(&op.versions[start..end]).expect("").to_string(),
+                                        scan_mode,
+                                    ).await {
+                                        error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
+                                    }
+                                }
+                            }
+                        }
+                    } else {
+                        info!("MRF heal routine channel closed, exiting");
+                        break;
+                    }
+                }
+            }
         }

+        info!("MRF heal routine stopped gracefully");
     }
 }
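The multi-version branch above treats `op.versions` as raw version IDs packed back to back in 16-byte chunks, round-tripping each chunk through `Uuid::from_slice`. A small sketch of that packing scheme, assuming the `uuid` crate (the `v4` feature is only needed for the demo values); `version_ids` is an illustrative helper, not a RustFS function:

```rust
use uuid::Uuid;

fn version_ids(packed: &[u8]) -> Vec<Uuid> {
    let count = packed.len() / 16; // any trailing partial chunk is ignored
    (0..count)
        .map(|i| {
            let (start, end) = (i * 16, i * 16 + 16);
            // from_slice only fails on a wrong length, which the bounds rule out.
            Uuid::from_slice(&packed[start..end]).expect("16-byte chunk")
        })
        .collect()
}

fn main() {
    let (a, b) = (Uuid::new_v4(), Uuid::new_v4());
    let mut packed = Vec::new();
    packed.extend_from_slice(a.as_bytes());
    packed.extend_from_slice(b.as_bytes());
    assert_eq!(version_ids(&packed), vec![a, b]);
}
```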
@@ -4099,6 +4099,8 @@ impl ObjectIO for SetDisks {
             }
         }

+        drop(writers); // drop writers to close all files, this is to prevent FileAccessDenied errors when renaming data
+
         let (online_disks, _, op_old_dir) = Self::rename_data(
             &shuffle_disks,
             RUSTFS_META_TMP_BUCKET,
@@ -5039,6 +5041,8 @@ impl StorageAPI for SetDisks {

         let fi_buff = fi.marshal_msg()?;

+        drop(writers); // drop writers to close all files
+
         let part_path = format!("{}/{}/{}", upload_id_path, fi.data_dir.unwrap_or_default(), part_suffix);
         let _ = Self::rename_part(
             &disks,
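Both `drop(writers)` insertions follow the same rule: close every open handle before renaming the file it points at, since a still-open handle can make the rename fail (notably with access-denied errors on Windows). A tiny illustration using only the standard library; the paths are placeholders:

```rust
use std::fs::{self, File};
use std::io::Write;

fn main() -> std::io::Result<()> {
    let tmp = std::env::temp_dir().join("rustfs-demo.tmp");
    let dst = std::env::temp_dir().join("rustfs-demo.dat");

    let mut writer = File::create(&tmp)?;
    writer.write_all(b"payload")?;
    writer.sync_all()?;
    drop(writer); // close the handle before renaming

    fs::rename(&tmp, &dst)?;
    fs::remove_file(&dst)?;
    Ok(())
}
```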
@@ -1372,7 +1372,8 @@ impl StorageAPI for ECStore {
         }

         if let Err(err) = self.peer_sys.make_bucket(bucket, opts).await {
-            if !is_err_bucket_exists(&err.into()) {
+            let err = err.into();
+            if !is_err_bucket_exists(&err) {
                 let _ = self
                     .delete_bucket(
                         bucket,
@@ -1384,6 +1385,8 @@ impl StorageAPI for ECStore {
                 )
                 .await;
             }
+
+            return Err(err);
         };

         let mut meta = BucketMetadata::new(bucket);
@@ -2505,14 +2508,14 @@ fn check_object_name_for_length_and_slash(bucket: &str, object: &str) -> Result<

     #[cfg(target_os = "windows")]
     {
-        if object.contains('\\')
-            || object.contains(':')
+        if object.contains(':')
             || object.contains('*')
             || object.contains('?')
             || object.contains('"')
             || object.contains('|')
             || object.contains('<')
             || object.contains('>')
+            // || object.contains('\\')
         {
             return Err(StorageError::ObjectNameInvalid(bucket.to_owned(), object.to_owned()));
         }
@@ -2546,9 +2549,9 @@ fn check_bucket_and_object_names(bucket: &str, object: &str) -> Result<()> {
         return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
     }

-    if cfg!(target_os = "windows") && object.contains('\\') {
-        return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
-    }
+    // if cfg!(target_os = "windows") && object.contains('\\') {
+    //     return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
+    // }

     Ok(())
 }
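The `make_bucket` hunk fixes two things at once: the peer error is converted a single time so the same value can be both inspected by `is_err_bucket_exists` and returned, and the failure is now propagated with `return Err(err)` instead of being swallowed after the rollback delete. A reduced sketch of that shape, with the error and peer types simplified to std stand-ins:

```rust
use std::io::Error;

fn is_err_bucket_exists(err: &Error) -> bool {
    err.to_string().contains("bucket exists")
}

// Stand-in for the peer call; always fails in this demo.
fn peer_make_bucket(_bucket: &str) -> Result<(), &'static str> {
    Err("network unreachable")
}

fn rollback_delete_bucket(_bucket: &str) { /* best-effort cleanup */ }

fn make_bucket(bucket: &str) -> Result<(), Error> {
    if let Err(err) = peer_make_bucket(bucket) {
        let err: Error = Error::other(err); // convert once, reuse below
        if !is_err_bucket_exists(&err) {
            rollback_delete_bucket(bucket);
        }
        return Err(err); // previously the error was silently dropped
    }
    Ok(())
}

fn main() {
    assert!(make_bucket("demo").is_err());
}
```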
@@ -1,4 +1,3 @@
-#![allow(unused_imports)]
 // Copyright 2024 RustFS Team
 //
 // Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,6 +11,7 @@
 // WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 // See the License for the specific language governing permissions and
 // limitations under the License.
+#![allow(unused_imports)]
 #![allow(unused_variables)]
 #![allow(unused_mut)]
 #![allow(unused_assignments)]
@@ -95,7 +95,6 @@ impl WarmBackendS3 {
             ..Default::default()
         };
         let client = TransitionClient::new(&u.host().expect("err").to_string(), opts).await?;
-        //client.set_appinfo(format!("s3-tier-{}", tier), ReleaseTag);

         let client = Arc::new(client);
         let core = TransitionCore(Arc::clone(&client));
@@ -19,6 +19,11 @@ license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
 version.workspace = true
+homepage.workspace = true
+description = "File metadata management for RustFS, providing efficient storage and retrieval of file metadata in a distributed system."
+keywords = ["file-metadata", "storage", "retrieval", "rustfs", "Minio"]
+categories = ["web-programming", "development-tools", "filesystem"]
+documentation = "https://docs.rs/rustfs-filemeta/latest/rustfs_filemeta/"

 [dependencies]
 crc32fast = { workspace = true }
@@ -1,238 +1,37 @@
-# RustFS FileMeta
+[](https://rustfs.com)

-A high-performance Rust implementation of xl-storage-format-v2, providing complete compatibility with S3-compatible metadata format while offering enhanced performance and safety.
+# RustFS FileMeta - File Metadata Management

-## Overview
+<p align="center">
+  <strong>Advanced file metadata management and indexing module for RustFS distributed object storage</strong>
+</p>

-This crate implements the XL (Erasure Coded) metadata format used for distributed object storage. It provides:
+<p align="center">
+  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
+  <a href="https://docs.rustfs.com/en/">📖 Documentation</a>
+  · <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
+  · <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
+</p>

-- **Full S3 Compatibility**: 100% compatible with xl.meta file format
-- **High Performance**: Optimized for speed with sub-microsecond parsing times
-- **Memory Safety**: Written in safe Rust with comprehensive error handling
-- **Comprehensive Testing**: Extensive test suite with real metadata validation
-- **Cross-Platform**: Supports multiple CPU architectures (x86_64, aarch64)
+---

-## Features
+## 📖 Overview

-### Core Functionality
-- ✅ XL v2 file format parsing and serialization
-- ✅ MessagePack-based metadata encoding/decoding
-- ✅ Version management with modification time sorting
-- ✅ Erasure coding information storage
-- ✅ Inline data support for small objects
-- ✅ CRC32 integrity verification using xxHash64
-- ✅ Delete marker handling
-- ✅ Legacy version support
+**RustFS FileMeta** provides advanced file metadata management and indexing capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).

-### Advanced Features
-- ✅ Signature calculation for version integrity
-- ✅ Metadata validation and compatibility checking
-- ✅ Version statistics and analytics
-- ✅ Async I/O support with tokio
-- ✅ Comprehensive error handling
-- ✅ Performance benchmarking
+## ✨ Features

-## Performance
+- High-performance metadata storage and retrieval
+- Advanced indexing with full-text search capabilities
+- File attribute management and custom metadata
+- Version tracking and history management
+- Distributed metadata replication
+- Real-time metadata synchronization

-Based on our benchmarks:
+## 📚 Documentation

-| Operation | Time | Description |
-|-----------|------|-------------|
-| Parse Real xl.meta | ~255 ns | Parse authentic xl metadata |
-| Parse Complex xl.meta | ~1.1 µs | Parse multi-version metadata |
-| Serialize Real xl.meta | ~659 ns | Serialize to xl format |
-| Round-trip Real xl.meta | ~1.3 µs | Parse + serialize cycle |
-| Version Statistics | ~5.2 ns | Calculate version stats |
-| Integrity Validation | ~7.8 ns | Validate metadata integrity |
+For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).

-## Usage
+## 📄 License

-### Basic Usage
-
-```rust
-use rustfs_filemeta::file_meta::FileMeta;
-
-// Load metadata from bytes
-let metadata = FileMeta::load(&xl_meta_bytes)?;
-
-// Access version information
-for version in &metadata.versions {
-    println!("Version ID: {:?}", version.header.version_id);
-    println!("Mod Time: {:?}", version.header.mod_time);
-}
-
-// Serialize back to bytes
-let serialized = metadata.marshal_msg()?;
-```
-
-### Advanced Usage
-
-```rust
-use rustfs_filemeta::file_meta::FileMeta;
-
-// Load with validation
-let mut metadata = FileMeta::load(&xl_meta_bytes)?;
-
-// Validate integrity
-metadata.validate_integrity()?;
-
-// Check xl format compatibility
-if metadata.is_compatible_with_meta() {
-    println!("Compatible with xl format");
-}
-
-// Get version statistics
-let stats = metadata.get_version_stats();
-println!("Total versions: {}", stats.total_versions);
-println!("Object versions: {}", stats.object_versions);
-println!("Delete markers: {}", stats.delete_markers);
-```
-
-### Working with FileInfo
-
-```rust
-use rustfs_filemeta::fileinfo::FileInfo;
-use rustfs_filemeta::file_meta::FileMetaVersion;
-
-// Convert FileInfo to metadata version
-let file_info = FileInfo::new("bucket", "object.txt");
-let meta_version = FileMetaVersion::from(file_info);
-
-// Add version to metadata
-metadata.add_version(file_info)?;
-```
-
-## Data Structures
-
-### FileMeta
-The main metadata container that holds all versions and inline data:
-
-```rust
-pub struct FileMeta {
-    pub versions: Vec<FileMetaShallowVersion>,
-    pub data: InlineData,
-    pub meta_ver: u8,
-}
-```
-
-### FileMetaVersion
-Represents a single object version:
-
-```rust
-pub struct FileMetaVersion {
-    pub version_type: VersionType,
-    pub object: Option<MetaObject>,
-    pub delete_marker: Option<MetaDeleteMarker>,
-    pub write_version: u64,
-}
-```
-
-### MetaObject
-Contains object-specific metadata including erasure coding information:
-
-```rust
-pub struct MetaObject {
-    pub version_id: Option<Uuid>,
-    pub data_dir: Option<Uuid>,
-    pub erasure_algorithm: ErasureAlgo,
-    pub erasure_m: usize,
-    pub erasure_n: usize,
-    // ... additional fields
-}
-```
-
-## File Format Compatibility
-
-This implementation is fully compatible with xl-storage-format-v2:
-
-- **Header Format**: XL2 v1 format with proper version checking
-- **Serialization**: MessagePack encoding identical to standard format
-- **Checksums**: xxHash64-based CRC validation
-- **Version Types**: Support for Object, Delete, and Legacy versions
-- **Inline Data**: Compatible inline data storage for small objects
-
-## Testing
-
-The crate includes comprehensive tests with real xl metadata:
-
-```bash
-# Run all tests
-cargo test
-
-# Run benchmarks
-cargo bench
-
-# Run with coverage
-cargo test --features coverage
-```
-
-### Test Coverage
-- ✅ Real xl.meta file compatibility
-- ✅ Complex multi-version scenarios
-- ✅ Error handling and recovery
-- ✅ Inline data processing
-- ✅ Signature calculation
-- ✅ Round-trip serialization
-- ✅ Performance benchmarks
-- ✅ Edge cases and boundary conditions
-
-## Architecture
-
-The crate follows a modular design:
-
-```
-src/
-├── file_meta.rs        # Core metadata structures and logic
-├── file_meta_inline.rs # Inline data handling
-├── fileinfo.rs         # File information structures
-├── test_data.rs        # Test data generation
-└── lib.rs              # Public API exports
-```
-
-## Error Handling
-
-Comprehensive error handling with detailed error messages:
-
-```rust
-use rustfs_filemeta::error::Error;
-
-match FileMeta::load(&invalid_data) {
-    Ok(metadata) => { /* process metadata */ },
-    Err(Error::InvalidFormat(msg)) => {
-        eprintln!("Invalid format: {}", msg);
-    },
-    Err(Error::CorruptedData(msg)) => {
-        eprintln!("Corrupted data: {}", msg);
-    },
-    Err(e) => {
-        eprintln!("Other error: {}", e);
-    }
-}
-```
-
-## Dependencies
-
-- `rmp` - MessagePack serialization
-- `uuid` - UUID handling
-- `time` - Date/time operations
-- `xxhash-rust` - Fast hashing
-- `tokio` - Async runtime (optional)
-- `criterion` - Benchmarking (dev dependency)
-
-## Contributing
-
-1. Fork the repository
-2. Create a feature branch
-3. Add tests for new functionality
-4. Ensure all tests pass
-5. Submit a pull request
-
-## License
-
-This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
-
-## Acknowledgments
-
-- Original xl-storage-format-v2 implementation contributors
-- Rust community for excellent crates and tooling
-- Contributors and testers who helped improve this implementation
+This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -19,6 +19,11 @@ license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
 version.workspace = true
+homepage.workspace = true
+description = "Identity and Access Management (IAM) for RustFS, providing user management, roles, and permissions."
+keywords = ["iam", "identity", "access-management", "rustfs", "Minio"]
+categories = ["web-programming", "development-tools", "authentication"]
+documentation = "https://docs.rs/rustfs-iam/latest/rustfs_iam/"

 [lints]
 workspace = true
crates/iam/README.md (new file, +37 lines)
@@ -0,0 +1,37 @@
+[](https://rustfs.com)
+
+# RustFS IAM - Identity & Access Management
+
+<p align="center">
+  <strong>Identity and access management system for RustFS distributed object storage</strong>
+</p>
+
+<p align="center">
+  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
+  <a href="https://docs.rustfs.com/en/">📖 Documentation</a>
+  · <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
+  · <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
+</p>
+
+---
+
+## 📖 Overview
+
+**RustFS IAM** provides identity and access management capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
+
+## ✨ Features
+
+- User and group management with RBAC
+- Service account and API key authentication
+- Policy engine with fine-grained permissions
+- LDAP/Active Directory integration
+- Multi-factor authentication support
+- Session management and token validation
+
+## 📚 Documentation
+
+For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
+
+## 📄 License
+
+This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -37,7 +37,7 @@ use serde::{Serialize, de::DeserializeOwned};
 use std::{collections::HashMap, sync::Arc};
 use tokio::sync::broadcast::{self, Receiver as B_Receiver};
 use tokio::sync::mpsc::{self, Sender};
-use tracing::{info, warn};
+use tracing::{debug, info, warn};

 lazy_static! {
     pub static ref IAM_CONFIG_PREFIX: String = format!("{}/iam", RUSTFS_CONFIG_PREFIX);
@@ -370,7 +370,15 @@ impl Store for ObjectStore {
     async fn load_iam_config<Item: DeserializeOwned>(&self, path: impl AsRef<str> + Send) -> Result<Item> {
         let mut data = read_config(self.object_api.clone(), path.as_ref()).await?;

-        data = Self::decrypt_data(&data)?;
+        data = match Self::decrypt_data(&data) {
+            Ok(v) => v,
+            Err(err) => {
+                debug!("decrypt_data failed: {}", err);
+                // delete the config file when decrypt failed
+                let _ = self.delete_iam_config(path.as_ref()).await;
+                return Err(Error::ConfigNotFound);
+            }
+        };

         Ok(serde_json::from_slice(&data)?)
     }
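The `load_iam_config` change makes the store self-healing: when a stored blob no longer decrypts (for example after an encryption key change), the corrupt entry is deleted and the call reports `ConfigNotFound`, so the caller regenerates the config instead of failing on every load. A reduced sketch of that pattern; `Store`, `decrypt`, and the error enum here are illustrative stand-ins for the RustFS internals:

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum ConfigError {
    NotFound,
}

struct Store {
    configs: HashMap<String, Vec<u8>>,
}

impl Store {
    // Stand-in cipher: anything not prefixed with "enc:" fails to decrypt.
    fn decrypt(data: &[u8]) -> Result<Vec<u8>, String> {
        data.strip_prefix(b"enc:".as_slice())
            .map(|rest| rest.to_vec())
            .ok_or_else(|| "bad ciphertext".to_string())
    }

    fn load_config(&mut self, path: &str) -> Result<Vec<u8>, ConfigError> {
        let data = self.configs.get(path).ok_or(ConfigError::NotFound)?.clone();
        match Self::decrypt(&data) {
            Ok(plain) => Ok(plain),
            Err(_) => {
                // Drop the corrupt entry so the next load starts clean.
                self.configs.remove(path);
                Err(ConfigError::NotFound)
            }
        }
    }
}

fn main() {
    let mut store = Store {
        configs: HashMap::from([("iam/format.json".to_string(), b"garbled".to_vec())]),
    };
    assert_eq!(store.load_config("iam/format.json"), Err(ConfigError::NotFound));
    assert!(store.configs.is_empty()); // the bad blob was removed
}
```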
@@ -19,6 +19,11 @@ edition.workspace = true
 license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
+homepage.workspace = true
+description = "Distributed locking mechanism for RustFS, providing synchronization and coordination across distributed systems."
+keywords = ["locking", "asynchronous", "distributed", "rustfs", "Minio"]
+categories = ["web-programming", "development-tools", "asynchronous"]
+documentation = "https://docs.rs/rustfs-lock/latest/rustfs_lock/"

 [lints]
 workspace = true
crates/lock/README.md (new file, +37 lines)
@@ -0,0 +1,37 @@
+[](https://rustfs.com)
+
+# RustFS Lock - Distributed Locking
+
+<p align="center">
+  <strong>High-performance distributed locking system for RustFS object storage</strong>
+</p>
+
+<p align="center">
+  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
+  <a href="https://docs.rustfs.com/en/">📖 Documentation</a>
+  · <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
+  · <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
+</p>
+
+---
+
+## 📖 Overview
+
+**RustFS Lock** provides distributed locking capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
+
+## ✨ Features
+
+- Distributed lock management across cluster nodes
+- Read-write lock support with concurrent readers
+- Lock timeout and automatic lease renewal
+- Deadlock detection and prevention
+- High-availability with leader election
+- Performance-optimized locking algorithms
+
+## 📚 Documentation
+
+For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
+
+## 📄 License
+
+This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -19,6 +19,11 @@ license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
 version.workspace = true
+homepage.workspace = true
+description = "Management and administration tools for RustFS, providing a web interface and API for system management."
+keywords = ["management", "administration", "web-interface", "rustfs", "Minio"]
+categories = ["web-programming", "development-tools", "config"]
+documentation = "https://docs.rs/rustfs-madmin/latest/rustfs_madmin/"

 [lints]
 workspace = true
crates/madmin/README.md (new file, +37 lines)
@@ -0,0 +1,37 @@
+[](https://rustfs.com)
+
+# RustFS MadAdmin - Administrative Interface
+
+<p align="center">
+  <strong>Advanced administrative interface and management tools for RustFS distributed object storage</strong>
+</p>
+
+<p align="center">
+  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
+  <a href="https://docs.rustfs.com/en/">📖 Documentation</a>
+  · <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
+  · <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
+</p>
+
+---
+
+## 📖 Overview
+
+**RustFS MadAdmin** provides advanced administrative interface and management tools for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
+
+## ✨ Features
+
+- Comprehensive cluster management and monitoring
+- Real-time performance metrics and analytics
+- Automated backup and disaster recovery tools
+- User and permission management interface
+- System health monitoring and alerting
+- Configuration management and deployment tools
+
+## 📚 Documentation
+
+For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
+
+## 📄 License
+
+This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
@@ -19,6 +19,11 @@ license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
 version.workspace = true
+homepage.workspace = true
+description = "File system notification service for RustFS, providing real-time updates on file changes and events."
+keywords = ["file-system", "notification", "real-time", "rustfs", "Minio"]
+categories = ["web-programming", "development-tools", "filesystem"]
+documentation = "https://docs.rs/rustfs-notify/latest/rustfs_notify/"

 [dependencies]
 rustfs-config = { workspace = true, features = ["notify"] }