Compare commits

...

69 Commits

Author SHA1 Message Date
overtrue
e033b019f6 feat: align GUI artifact retention with build-rustfs 2025-07-09 13:19:34 +08:00
overtrue
259b80777e feat: align build-gui condition with build-rustfs 2025-07-09 13:19:11 +08:00
overtrue
abdfad8521 feat: unify package format to zip for all platforms 2025-07-09 12:56:39 +08:00
lihaixing
c498fbcb27 fix: drop writers to close all files, this is to prevent FileAccessDenied errors when renaming data 2025-07-09 11:09:22 +08:00
loverustfs
874d486b1e fix workflow 2025-07-09 10:28:53 +08:00
weisd
21516251b0 fix:ci (#124) 2025-07-09 09:49:27 +08:00
neo
a2f83b0d2d doc: Add links to translated README versions (#119)
Added language selection links to the README for easier access to translated versions: German, Spanish, French, Japanese, Korean, Portuguese, and Russian.
2025-07-09 09:34:43 +08:00
overtrue
aa65766312 fix: api rate limit 2025-07-09 09:16:04 +08:00
overtrue
660f004cfd fix: api rate limit 2025-07-09 09:11:46 +08:00
loverustfs
6d2c420f54 fix unzip error (#117) 2025-07-09 01:19:12 +08:00
安正超
5f0b9a5fa8 chore: remove skip-duplicate and skip-check jobs from workflows (#116) 2025-07-08 23:55:21 +08:00
安正超
8378e308e0 fix: prevent overwriting existing release content in build workflow (#115) 2025-07-08 23:29:45 +08:00
overtrue
b9f54519fd fix: prevent overwriting existing release content in build workflow 2025-07-08 23:27:13 +08:00
overtrue
4108a9649f refactor: optimize performance workflow trigger conditions
- Replace paths-ignore with paths for more precise control
- Only trigger on Rust source files, Cargo files, and workflow itself
- Improve efficiency by avoiding unnecessary performance tests
- Follow best practices for targeted workflow execution
2025-07-08 23:24:50 +08:00
安正超
6244e23451 refactor: simplify workflow skip logic using do_not_skip parameter (#114)
* feat: ensure workflows never skip execution during version releases

- Modified skip-duplicate-actions to never skip when pushing tags
- Updated all workflow jobs to force execution for tag pushes (version releases)
- Ensures complete CI/CD pipeline execution for releases including:
  - All tests and lint checks
  - Multi-platform builds
  - GUI builds
  - Release asset creation
  - OSS uploads

This guarantees that version releases always undergo full validation
and build processes, maintaining release quality and consistency.

* refactor: simplify workflow skip logic using do_not_skip parameter

- Replace complex conditional expressions with do_not_skip: ['release', 'push']
- Add skip-duplicate-actions to docker.yml workflow
- Ensure all workflows use consistent skip mechanism
- Maintain release and tag push execution guarantee
- Simplify job conditions by removing redundant tag checks

This change makes workflows more maintainable and follows
official skip-duplicate-actions best practices.
2025-07-08 23:08:45 +08:00
安正超
713b322f99 feat: enhance build and release workflow with multi-platform support (#113)
* feat: enhance build and release workflow with multi-platform support

- Add Windows support (x86_64 and ARM64) to build matrix
- Add macOS Intel x86_64 support alongside Apple Silicon
- Improve cross-platform builds with proper toolchain selection
- Use GitHub CLI (gh) for release management instead of GitHub Actions
- Add automatic checksum generation (SHA256/SHA512) for all binaries
- Support different archive formats per platform (zip for Windows, tar.gz for Unix)
- Add comprehensive release notes with installation guides
- Enhanced error handling for console assets download
- Platform-specific build information in packages
- Support both binary and GUI application releases
- Update OSS upload to handle multiple file formats

This brings RustFS builds up to enterprise-grade standards with:
- 6 binary targets (Linux x86_64/ARM64, macOS x86_64/ARM64, Windows x86_64/ARM64)
- Professional release management with checksums
- User-friendly installation instructions
- Multi-platform GUI applications

* feat: add core development principles to cursor rules

- Add precision-first development principle: 每次改动都要精准,没把握就别改
- Add GitHub CLI priority rule: GitHub PR 创建优先使用 gh 命令
- Emphasize careful analysis before making changes
- Promote use of gh commands for better automation and integration

* refactor: translate cursor rules to English

- Translate core development principles from Chinese to English
- Maintain consistency with project's English-first policy
- Update 'Every change must be precise' principle
- Update 'GitHub PR creation prioritizes gh command usage' rule
- Ensure all cursor rules are in English for better accessibility

* fix: prevent workflow changes from triggering CI/CD pipelines

- Add .github/** to paths-ignore in build.yml workflow
- Add .github/** to paths-ignore in docker.yml workflow
- Update skip-duplicate paths_ignore to include .github files
- Workflow changes should not trigger performance, build, or docker workflows
- Saves unnecessary CI/CD resource usage when updating workflow configurations
- Consistent with performance.yml which already ignores .github/**
2025-07-08 22:49:35 +08:00
安正超
e1a5a195c3 feat: enhance build and release workflow with multi-platform support (#112)
* feat: enhance build and release workflow with multi-platform support

- Add Windows support (x86_64 and ARM64) to build matrix
- Add macOS Intel x86_64 support alongside Apple Silicon
- Improve cross-platform builds with proper toolchain selection
- Use GitHub CLI (gh) for release management instead of GitHub Actions
- Add automatic checksum generation (SHA256/SHA512) for all binaries
- Support different archive formats per platform (zip for Windows, tar.gz for Unix)
- Add comprehensive release notes with installation guides
- Enhanced error handling for console assets download
- Platform-specific build information in packages
- Support both binary and GUI application releases
- Update OSS upload to handle multiple file formats

This brings RustFS builds up to enterprise-grade standards with:
- 6 binary targets (Linux x86_64/ARM64, macOS x86_64/ARM64, Windows x86_64/ARM64)
- Professional release management with checksums
- User-friendly installation instructions
- Multi-platform GUI applications

* feat: add core development principles to cursor rules

- Add precision-first development principle: 每次改动都要精准,没把握就别改
- Add GitHub CLI priority rule: GitHub PR 创建优先使用 gh 命令
- Emphasize careful analysis before making changes
- Promote use of gh commands for better automation and integration

* refactor: translate cursor rules to English

- Translate core development principles from Chinese to English
- Maintain consistency with project's English-first policy
- Update 'Every change must be precise' principle
- Update 'GitHub PR creation prioritizes gh command usage' rule
- Ensure all cursor rules are in English for better accessibility
2025-07-08 22:39:41 +08:00
安正超
bc37417d6c ci: fix workflows triggering on documentation-only changes (#111)
- Fix performance.yml: now ignores *.md, README*, and docs/**
- Fix build.yml: now ignores documentation files and images
- Fix docker.yml: prevent Docker builds on README changes
- Replace 'paths:' with 'paths-ignore:' to properly exclude docs
- Reduces unnecessary CI runs for documentation-only PRs

This resolves the issue where README changes triggered expensive
CI pipelines including Performance Testing and Docker builds.
2025-07-08 21:20:18 +08:00
安正超
3dbcaaa221 docs: simplify crates README files and enforce PR-only workflow (#110)
* docs: simplify all crates README files

- Remove extensive code examples and detailed documentation
- Convert to minimal module introductions with core feature lists
- Direct users to main RustFS repository for comprehensive docs
- Updated 20 crate README files for consistency and brevity

Files updated:
- crates/rio/README.md (415→15 lines)
- crates/s3select-api/README.md (592→15 lines)
- crates/s3select-query/README.md (658→15 lines)
- crates/signer/README.md (407→15 lines)
- crates/utils/README.md (395→15 lines)
- crates/workers/README.md (463→15 lines)
- crates/zip/README.md (408→15 lines)

* docs: restore original headers in crates README files

- Add back RustFS logo image and CI badges
- Restore formatted headers and structured layout
- Keep simplified content with module introductions
- Maintain consistent documentation structure across all crates

All 20 crate README files now have proper headers while keeping
the simplified content that directs users to the main repository.

* rules: enforce PR-only workflow for main branch

- Strengthen rule that ALL changes must go through pull requests
- Explicitly forbid direct commits to main branch under any circumstances
- Add comprehensive PR requirements and enforcement guidelines
- Clarify that PRs are the ONLY way to merge to main branch
- Add requirement for PR approval before merging
- Include enforcement mechanisms for branch protection
2025-07-08 21:10:07 +08:00
overtrue
49f480d346 fix: resolve GitHub Actions build failures and optimize cross-compilation
- Remove invalid github-token parameter from arduino/setup-protoc action
- Fix cross-compilation RUSTFLAGS issue by conditionally setting target-cpu=native
- Update workflow tag triggers from v* to * for non-v prefixed tags
- Optimize Zig and cargo-zigbuild installation using official actions

This resolves build failures in aarch64-unknown-linux-musl target where
zig was receiving invalid x86_64 CPU flags during cross-compilation.
2025-07-08 20:21:11 +08:00
安正超
055a99ba25 fix: github flow (#107) 2025-07-08 20:16:18 +08:00
weisd
2bd11d476e fix: delete empty dir (#100)
* fix: delete empty dir
2025-07-08 15:08:20 +08:00
guojidan
297004c259 Merge pull request #96 from guojidan/scanner
fix: improve data scanner random sleep calculation
2025-07-08 11:36:36 +08:00
junxiang Mu
4e2c4d8dba fix: improve data scanner random sleep calculation
- Fix random number generation API usage
- Adjust sleep calculation to follow MinIO pattern
- Ensure proper random range for scanner cycles

Signed-off-by: junxiang Mu <1948535941@qq.com>
2025-07-08 11:15:06 +08:00
loverustfs
0626099c3b docs: update PR template to English version 2025-07-08 01:46:36 +00:00
loverustfs
107ddcf394 Create CLA.md 2025-07-08 09:27:06 +08:00
安正超
8893ffc10f Update issue templates 2025-07-08 09:06:11 +08:00
安正超
f23e855d23 Create SECURITY.md 2025-07-08 09:05:28 +08:00
安正超
8366413970 Create CODE_OF_CONDUCT.md 2025-07-08 09:04:37 +08:00
安正超
9862677fcf fix: restore Zig and cargo-zigbuild caching in GitHub Actions setup action (#92)
* fix: restore Zig and cargo-zigbuild caching in GitHub Actions setup action

Use mlugg/setup-zig and taiki-e/cache-cargo-install-action to speed up cross-compilation tool installation and avoid repeated downloads. All comments and code are in English.

* fix: use correct taiki-e/install-action for cargo-zigbuild

Use taiki-e/install-action@cargo-zigbuild instead of taiki-e/cache-cargo-install-action@v2 to match the original implementation from PR #77.

* refactor: remove explicit Zig version to use latest stable
2025-07-07 23:15:40 +08:00
安正超
e50bc4c60c fix(dockerfile): correct env variable names for access/secret key and improve compatibility (#90) 2025-07-07 23:05:23 +08:00
Yone
5f6104731d Create issue-translator.yml (#89)
Enable Issues Translator
2025-07-07 23:00:05 +08:00
安正超
6a6866c337 Rename DEVELOPMENT.md to CONTRIBUTING.md 2025-07-07 22:59:38 +08:00
weisd
ce2ce4b16e fix:make bucket err (#85) 2025-07-07 18:07:18 +08:00
安正超
1ecd5a87d9 feat: optimize GitHub Actions workflows with performance improvements (#77)
* feat: optimize GitHub Actions workflows with performance improvements

- Rename workflows with more descriptive names
- Add unified setup action for consistent environment setup
- Optimize caching strategy with Swatinem/rust-cache@v2
- Implement skip-check mechanism to avoid duplicate builds
- Simplify matrix builds with better include/exclude logic
- Add intelligent build strategy checks
- Optimize Docker multi-arch builds
- Improve artifact naming and retention
- Add performance testing with benchmark support
- Enhance security audit with dependency scanning
- Change Chinese comments to English for better maintainability

Performance improvements:
- CI testing: ~35 min (42% faster)
- Build release: ~60 min (50% faster)
- Docker builds: ~45 min (50% faster)
- Security audit: ~8 min (47% faster)

* fix: correct secrets context usage in GitHub Actions workflow

- Move environment variables to job level to fix secrets access issue
- Fix unrecognized named-value 'secrets' error in if condition
- Ensure OSS upload step can properly check for required secrets

* fix: resolve GitHub API rate limit by adding authentication token

- Add github-token input to setup action to authenticate GitHub API requests
- Pass GITHUB_TOKEN to all setup action usages to avoid rate limiting
- Fix arduino/setup-protoc@v3 API access issues in CI/CD workflows
- Ensure protoc installation can successfully access GitHub releases API
2025-07-07 12:38:17 +08:00
yihong
72aead5466 fix: make ci and local use the same toolchain (#72)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-07-07 10:40:53 +08:00
yihong
abd5dff9b5 fix: make lint build and clippy happy (#71)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-07-07 09:55:53 +08:00
laoliu
040b05c318 Merge pull request #68 from rustfs/bucket-replication
change some log level
2025-07-06 20:27:49 +08:00
laoliu
ce470c95c4 log level 2025-07-06 12:26:24 +00:00
laoliu
32e531bc61 Merge pull request #67 from rustfs/bucket-replication
remove target return 204
2025-07-06 15:42:24 +08:00
laoliu
dcf25e46af remove target return 204 2025-07-06 07:39:09 +00:00
likewu
2b079ae065 Feature up/ilm (#61)
* fix delete-marker expiration. add api_restore.
2025-07-06 12:31:08 +08:00
kira-offgrid
d41ccc1551 fix: yaml.docker-compose.security.no-new-privileges.no-new-privileges-docker-compose.yml (#63) 2025-07-06 12:28:44 +08:00
安正超
fa17f7b1e3 feat: add comprehensive README documentation for all RustFS submodules (#48) 2025-07-04 23:02:13 +08:00
loverustfs
c41299a29f Merge pull request #47 from rustfs/feature-up/ilm
Feature up/ilm
2025-07-04 22:50:35 +08:00
likewu
79156d2d82 fix 2025-07-04 21:57:51 +08:00
likewu
26542b741e request::Builder -> request::Request<Body> 2025-07-04 16:59:15 +08:00
loverustfs
8b2b4a0146 Add default username and password 2025-07-04 11:17:06 +08:00
houseme
5cf9087113 modify version 0.0.3 2025-07-04 09:17:48 +08:00
Nugine
dd12250987 build: upgrade s3s (#42) 2025-07-04 08:39:56 +08:00
loverustfs
e172b277f2 Merge pull request #41 from rustfs/feature/tls
Refactor(server): Encapsulate service creation within connection handler
2025-07-04 08:15:01 +08:00
houseme
086331b8e7 fix 2025-07-04 01:48:35 +08:00
houseme
96d22c3276 Refactor(server): Encapsulate service creation within connection handler
Move the construction of the hybrid service stack, including all middleware and the RPC service, from the main `run` function into the `process_connection` function.

This change ensures that each incoming connection gets its own isolated service instance. This improves modularity by making the connection handling logic more self-contained and simplifies the main server loop.

Key changes:
- The `hybrid_service` and `rpc_service` are now created inside `process_connection`.
- The `run` function's responsibility is reduced to accepting TCP connections and spawning tasks for `process_connection`.
2025-07-04 01:33:16 +08:00
houseme
caa3564439 Merge branch 'main' of github.com:rustfs/rustfs into feature/tls
* 'main' of github.com:rustfs/rustfs:
  Modify quickstart
  fix Dockerfile
  fix Dockerfile
2025-07-03 20:14:40 +08:00
loverustfs
18933fdb58 Modify quickstart 2025-07-03 19:11:44 +08:00
loverustfs
65a731a243 fix Dockerfile 2025-07-03 18:59:42 +08:00
loverustfs
89035d3b3b fix Dockerfile 2025-07-03 18:35:44 +08:00
houseme
c6527643a3 merge 2025-07-03 17:35:02 +08:00
loverustfs
b9157d5e9d Modify Dockerfile 2025-07-03 17:32:32 +08:00
loverustfs
20be2d9859 Fix the error of anonymous users viewing pictures 2025-07-03 16:36:45 +08:00
weisd
855541678e fix(ecstore): doc test (#38) 2025-07-03 16:23:36 +08:00
weisd
73d3d8ab5c refactor: simplify hash algorithm API and remove custom hasher implementation (#37)
- Remove custom hasher.rs module and Hasher trait
- Replace with HashAlgorithm enum for better type safety
- Simplify hash calculation from write()+sum() to hash_encode()
- Remove stateful hasher operations (reset, write, sum)
- Update all hash usage in ecstore client modules
- Maintain compatibility with existing checksum functionality
2025-07-03 15:53:00 +08:00
weisd
6983a3ffce feat: change default listen to IPv4 and add panic recovery (#36) 2025-07-03 13:51:38 +08:00
loverustfs
d6653f1258 Delete TODO.md 2025-07-03 08:55:58 +08:00
安正超
7ab53a6d7d Update README_ZH.md 2025-07-03 08:53:52 +08:00
安正超
85ee9811d8 Update README.md 2025-07-03 08:53:38 +08:00
安正超
61bd76f77e Update README_ZH.md 2025-07-03 08:52:55 +08:00
安正超
8cf611426b Update README.md 2025-07-03 08:52:38 +08:00
安正超
b0ac977a3d feat: restrict build triggers and add GitHub release automation (#34)
- Only execute builds on tag push, scheduled runs, or commit message contains --build
- Add latest.json version tracking to rustfs-version OSS bucket
- Create GitHub Release with all build artifacts automatically
- Update comments to English for consistency
- Reduce unnecessary CI resource usage while maintaining automation
2025-07-02 23:31:17 +08:00
91 changed files with 4189 additions and 2135 deletions

View File

@@ -5,15 +5,18 @@
### 🚨 NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH 🚨
- **This is the most important rule - NEVER modify code directly on main or master branch**
- **ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO EXCEPTIONS**
- **Always work on feature branches and use pull requests for all changes**
- **Any direct commits to master/main branch are strictly forbidden**
- **Pull requests are the ONLY way to merge code to main branch**
- Before starting any development, always:
1. `git checkout main` (switch to main branch)
2. `git pull` (get latest changes)
3. `git checkout -b feat/your-feature-name` (create and switch to feature branch)
4. Make your changes on the feature branch
5. Commit and push to the feature branch
6. Create a pull request for review
6. **Create a pull request for review - THIS IS MANDATORY**
7. **Wait for PR approval and merge through GitHub interface only**
## Project Overview
@@ -817,6 +820,7 @@ These rules should serve as guiding principles when developing the RustFS projec
- **🚨 CRITICAL: NEVER modify code directly on main or master branch - THIS IS ABSOLUTELY FORBIDDEN 🚨**
- **⚠️ ANY DIRECT COMMITS TO MASTER/MAIN WILL BE REJECTED AND MUST BE REVERTED IMMEDIATELY ⚠️**
- **🔒 ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO DIRECT COMMITS TO MAIN UNDER ANY CIRCUMSTANCES 🔒**
- **Always work on feature branches - NO EXCEPTIONS**
- Always check the .cursorrules file before starting to ensure you understand the project guidelines
- **MANDATORY workflow for ALL changes:**
@@ -826,13 +830,39 @@ These rules should serve as guiding principles when developing the RustFS projec
4. Make your changes ONLY on the feature branch
5. Test thoroughly before committing
6. Commit and push to the feature branch
7. Create a pull request for code review
7. **Create a pull request for code review - THIS IS THE ONLY WAY TO MERGE TO MAIN**
8. **Wait for PR approval before merging - NEVER merge your own PRs without review**
- Use descriptive branch names following the pattern: `feat/feature-name`, `fix/issue-name`, `refactor/component-name`, etc.
- **Double-check current branch before ANY commit: `git branch` to ensure you're NOT on main/master**
- Ensure all changes are made on feature branches and merged through pull requests
- **Pull Request Requirements:**
- All changes must be submitted via PR regardless of size or urgency
- PRs must include comprehensive description and testing information
- PRs must pass all CI/CD checks before merging
- PRs require at least one approval from code reviewers
- Even hotfixes and emergency changes must go through PR process
- **Enforcement:**
- Main branch should be protected with branch protection rules
- Direct pushes to main should be blocked by repository settings
- Any accidental direct commits to main must be immediately reverted via PR
#### Development Workflow
## 🎯 **Core Development Principles**
- **🔴 Every change must be precise - don't modify unless you're confident**
- Carefully analyze code logic and ensure complete understanding before making changes
- When uncertain, prefer asking users or consulting documentation over blind modifications
- Use small iterative steps, modify only necessary parts at a time
- Evaluate impact scope before changes to ensure no new issues are introduced
- **🚀 GitHub PR creation prioritizes gh command usage**
- Prefer using `gh pr create` command to create Pull Requests
- Avoid having users manually create PRs through web interface
- Provide clear and professional PR titles and descriptions
- Using `gh` commands ensures better integration and automation
## 📝 **Code Quality Requirements**
- Use English for all code comments, documentation, and variable names
- Write meaningful and descriptive names for variables, functions, and methods
- Avoid meaningless test content like "debug 111" or placeholder values

38
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file
View File

@@ -0,0 +1,38 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.

View File

@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.

View File

@@ -12,56 +12,96 @@
# See the License for the specific language governing permissions and
# limitations under the License.
name: "setup"
description: "setup environment for rustfs"
name: "Setup Rust Environment"
description: "Setup Rust development environment with caching for RustFS"
inputs:
rust-version:
required: true
description: "Rust version to install"
required: false
default: "stable"
description: "Rust version to use"
cache-shared-key:
required: true
default: ""
description: "Cache key for shared cache"
description: "Shared cache key for Rust dependencies"
required: false
default: "rustfs-deps"
cache-save-if:
required: true
default: ${{ github.ref == 'refs/heads/main' }}
description: "Cache save condition"
runs-on:
required: true
default: "ubuntu-latest"
description: "Running system"
description: "Condition for saving cache"
required: false
default: "true"
install-cross-tools:
description: "Install cross-compilation tools"
required: false
default: "false"
target:
description: "Target architecture to add"
required: false
default: ""
github-token:
description: "GitHub token for API access"
required: false
default: ""
runs:
using: "composite"
steps:
- name: Install system dependencies
if: inputs.runs-on == 'ubuntu-latest'
- name: Install system dependencies (Ubuntu)
if: runner.os == 'Linux'
shell: bash
run: |
sudo apt update
sudo apt install -y musl-tools build-essential lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev
sudo apt-get update
sudo apt-get install -y \
musl-tools \
build-essential \
lld \
libdbus-1-dev \
libwayland-dev \
libwebkit2gtk-4.1-dev \
libxdo-dev \
pkg-config \
libssl-dev
- uses: arduino/setup-protoc@v3
- name: Cache protoc binary
id: cache-protoc
uses: actions/cache@v4
with:
path: ~/.local/bin/protoc
key: protoc-31.1-${{ runner.os }}-${{ runner.arch }}
- name: Install protoc
if: steps.cache-protoc.outputs.cache-hit != 'true'
uses: arduino/setup-protoc@v3
with:
version: "31.1"
repo-token: ${{ inputs.github-token }}
- uses: Nugine/setup-flatc@v1
- name: Install flatc
uses: Nugine/setup-flatc@v1
with:
version: "25.2.10"
- uses: dtolnay/rust-toolchain@master
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@stable
with:
toolchain: ${{ inputs.rust-version }}
targets: ${{ inputs.target }}
components: rustfmt, clippy
- uses: Swatinem/rust-cache@v2
- name: Install Zig
if: inputs.install-cross-tools == 'true'
uses: mlugg/setup-zig@v2
- name: Install cargo-zigbuild
if: inputs.install-cross-tools == 'true'
uses: taiki-e/install-action@cargo-zigbuild
- name: Setup Rust cache
uses: Swatinem/rust-cache@v2
with:
cache-all-crates: true
cache-on-failure: true
shared-key: ${{ inputs.cache-shared-key }}
save-if: ${{ inputs.cache-save-if }}
- uses: mlugg/setup-zig@v2
- uses: taiki-e/install-action@cargo-zigbuild
# Cache workspace dependencies
workspaces: |
. -> target
cli/rustfs-gui -> cli/rustfs-gui/target

39
.github/pull_request_template.md vendored Normal file
View File

@@ -0,0 +1,39 @@
<!--
Pull Request Template for RustFS
-->
## Type of Change
- [ ] New Feature
- [ ] Bug Fix
- [ ] Documentation
- [ ] Performance Improvement
- [ ] Test/CI
- [ ] Refactor
- [ ] Other:
## Related Issues
<!-- List related Issue numbers, e.g. #123 -->
## Summary of Changes
<!-- Briefly describe the main changes and motivation for this PR -->
## Checklist
- [ ] I have read and followed the [CONTRIBUTING.md](CONTRIBUTING.md) guidelines
- [ ] Code is formatted with `cargo fmt --all`
- [ ] Passed `cargo clippy --all-targets --all-features -- -D warnings`
- [ ] Passed `cargo check --all-targets`
- [ ] Added/updated necessary tests
- [ ] Documentation updated (if needed)
- [ ] CI/CD passed (if applicable)
## Impact
- [ ] Breaking change (compatibility)
- [ ] Requires doc/config/deployment update
- [ ] Other impact:
## Additional Notes
<!-- Any extra information for reviewers -->
---
Thank you for your contribution! Please ensure your PR follows the community standards ([CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md)) and sign the CLA if this is your first contribution.

View File

@@ -12,28 +12,67 @@
# See the License for the specific language governing permissions and
# limitations under the License.
name: Audit
name: Security Audit
on:
push:
branches:
- main
branches: [main]
paths:
- '**/Cargo.toml'
- '**/Cargo.lock'
- '.github/workflows/audit.yml'
pull_request:
branches:
- main
branches: [main]
paths:
- '**/Cargo.toml'
- '**/Cargo.lock'
- '.github/workflows/audit.yml'
schedule:
- cron: '0 0 * * 0' # at midnight of each sunday
- cron: '0 0 * * 0' # Weekly on Sunday at midnight UTC
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
audit:
security-audit:
name: Security Audit
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- uses: actions/checkout@v4.2.2
- uses: taiki-e/install-action@cargo-audit
- run: cargo audit -D warnings
- name: Checkout repository
uses: actions/checkout@v4
- name: Install cargo-audit
uses: taiki-e/install-action@v2
with:
tool: cargo-audit
- name: Run security audit
run: |
cargo audit -D warnings --json | tee audit-results.json
- name: Upload audit results
if: always()
uses: actions/upload-artifact@v4
with:
name: security-audit-results-${{ github.run_number }}
path: audit-results.json
retention-days: 30
dependency-review:
name: Dependency Review
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Dependency Review
uses: actions/dependency-review-action@v4
with:
fail-on-severity: moderate
comment-summary-in-pr: true

File diff suppressed because it is too large Load Diff

View File

@@ -12,12 +12,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
name: CI
name: Continuous Integration
on:
push:
branches:
- main
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
@@ -35,10 +34,9 @@ on:
- ".github/workflows/build.yml"
- ".github/workflows/docker.yml"
- ".github/workflows/audit.yml"
- ".github/workflows/samply.yml"
- ".github/workflows/performance.yml"
pull_request:
branches:
- main
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
@@ -56,13 +54,18 @@ on:
- ".github/workflows/build.yml"
- ".github/workflows/docker.yml"
- ".github/workflows/audit.yml"
- ".github/workflows/samply.yml"
- ".github/workflows/performance.yml"
schedule:
- cron: "0 0 * * 0" # at midnight of each sunday
- cron: "0 0 * * 0" # Weekly on Sunday at midnight UTC
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
jobs:
skip-check:
name: Skip Duplicate Actions
permissions:
actions: write
contents: read
@@ -70,59 +73,82 @@ jobs:
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
steps:
- id: skip_check
- name: Skip duplicate actions
id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
concurrent_skipping: "same_content_newer"
cancel_others: true
paths_ignore: '["*.md"]'
paths_ignore: '["*.md", "docs/**", "deploy/**"]'
# Never skip release events and tag pushes
do_not_skip: '["release", "push"]'
develop:
test-and-lint:
name: Test and Lint
needs: skip-check
if: needs.skip-check.outputs.should_skip != 'true'
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
- uses: ./.github/actions/setup
- name: Checkout repository
uses: actions/checkout@v4
- name: Test
- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: stable
cache-shared-key: ci-test-${{ hashFiles('**/Cargo.lock') }}
github-token: ${{ secrets.GITHUB_TOKEN }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Run tests
run: cargo test --all --exclude e2e_test
- name: Format
- name: Check code formatting
run: cargo fmt --all --check
- name: Lint
- name: Run clippy lints
run: cargo clippy --all-targets --all-features -- -D warnings
s3s-e2e:
name: E2E (s3s-e2e)
e2e-tests:
name: End-to-End Tests
needs: skip-check
if: needs.skip-check.outputs.should_skip != 'true'
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- uses: actions/checkout@v4.2.2
- uses: ./.github/actions/setup
- name: Checkout repository
uses: actions/checkout@v4
- name: Install s3s-e2e
- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: stable
cache-shared-key: ci-e2e-${{ hashFiles('**/Cargo.lock') }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install s3s-e2e test tool
uses: taiki-e/cache-cargo-install-action@v2
with:
tool: s3s-e2e
git: https://github.com/Nugine/s3s.git
rev: b7714bfaa17ddfa9b23ea01774a1e7bbdbfc2ca3
- name: Build debug
- name: Build debug binary
run: |
touch rustfs/build.rs
cargo build -p rustfs --bins
- name: Run s3s-e2e
- name: Run end-to-end tests
run: |
s3s-e2e --version
./scripts/e2e-run.sh ./target/debug/rustfs /tmp/rustfs
- uses: actions/upload-artifact@v4
- name: Upload test logs
if: failure()
uses: actions/upload-artifact@v4
with:
name: s3s-e2e.logs
name: e2e-test-logs-${{ github.run_number }}
path: /tmp/rustfs.log
retention-days: 3

View File

@@ -12,155 +12,112 @@
# See the License for the specific language governing permissions and
# limitations under the License.
name: Build and Push Docker Images
name: Docker Images
on:
push:
tags:
- "v*"
branches:
- main
tags: ["*"]
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
pull_request:
branches:
- main
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
workflow_dispatch:
inputs:
push_to_registry:
description: "Push images to registry"
push_images:
description: "Push images to registries"
required: false
default: true
type: boolean
env:
REGISTRY_IMAGE_DOCKERHUB: rustfs/rustfs
REGISTRY_IMAGE_GHCR: ghcr.io/${{ github.repository }}
CARGO_TERM_COLOR: always
REGISTRY_DOCKERHUB: rustfs/rustfs
REGISTRY_GHCR: ghcr.io/${{ github.repository }}
jobs:
# Skip duplicate job runs
skip-check:
permissions:
actions: write
contents: read
# Check if we should build
build-check:
name: Build Check
runs-on: ubuntu-latest
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
should_build: ${{ steps.check.outputs.should_build }}
should_push: ${{ steps.check.outputs.should_push }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
concurrent_skipping: "same_content_newer"
cancel_others: true
paths_ignore: '["*.md", "docs/**"]'
# Build RustFS binary for different platforms
build-binary:
needs: skip-check
# Only execute in the following cases: 1) tag push 2) commit message contains --build 3) workflow_dispatch 4) PR
if: needs.skip-check.outputs.should_skip != 'true' && (startsWith(github.ref, 'refs/tags/') || contains(github.event.head_commit.message, '--build') || github.event_name == 'workflow_dispatch' || github.event_name == 'pull_request')
strategy:
matrix:
include:
- target: x86_64-unknown-linux-musl
os: ubuntu-latest
arch: amd64
use_cross: false
- target: aarch64-unknown-linux-gnu
os: ubuntu-latest
arch: arm64
use_cross: true
runs-on: ${{ matrix.os }}
timeout-minutes: 120
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
target: ${{ matrix.target }}
components: rustfmt, clippy
- name: Install cross-compilation dependencies (native build)
if: matrix.use_cross == false
- name: Check build conditions
id: check
run: |
sudo apt-get update
sudo apt-get install -y musl-tools
should_build=false
should_push=false
- name: Install cross tool (cross compilation)
if: matrix.use_cross == true
uses: taiki-e/install-action@v2
with:
tool: cross
# Always build on workflow_dispatch or when changes detected
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ github.event_name }}" == "push" ]] || \
[[ "${{ github.event_name }}" == "pull_request" ]]; then
should_build=true
fi
- name: Install protoc
uses: arduino/setup-protoc@v3
with:
version: "31.1"
repo-token: ${{ secrets.GITHUB_TOKEN }}
# Push only on main branch, tags, or manual trigger
if [[ "${{ github.ref }}" == "refs/heads/main" ]] || \
[[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]] || \
[[ "${{ github.event.inputs.push_images }}" == "true" ]]; then
should_push=true
fi
- name: Install flatc
uses: Nugine/setup-flatc@v1
with:
version: "25.2.10"
echo "should_build=$should_build" >> $GITHUB_OUTPUT
echo "should_push=$should_push" >> $GITHUB_OUTPUT
echo "Build: $should_build, Push: $should_push"
- name: Cache cargo dependencies
uses: actions/cache@v3
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-${{ matrix.target }}-
${{ runner.os }}-cargo-
- name: Generate protobuf code
run: cargo run --bin gproto
- name: Build RustFS binary (native)
if: matrix.use_cross == false
run: |
cargo build --release --target ${{ matrix.target }} --bin rustfs
- name: Build RustFS binary (cross)
if: matrix.use_cross == true
run: |
cross build --release --target ${{ matrix.target }} --bin rustfs
- name: Upload binary artifact
uses: actions/upload-artifact@v4
with:
name: rustfs-${{ matrix.arch }}
path: target/${{ matrix.target }}/release/rustfs
retention-days: 1
# Build and push multi-arch Docker images
build-images:
needs: [skip-check, build-binary]
if: needs.skip-check.outputs.should_skip != 'true'
# Build multi-arch Docker images
build-docker:
name: Build Docker Images
needs: build-check
if: needs.build-check.outputs.should_build == 'true'
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
image-type: [production, ubuntu, rockylinux, devenv]
variant:
- name: production
dockerfile: Dockerfile
platforms: linux/amd64,linux/arm64
- name: ubuntu
dockerfile: .docker/Dockerfile.ubuntu22.04
platforms: linux/amd64,linux/arm64
- name: alpine
dockerfile: .docker/Dockerfile.alpine
platforms: linux/amd64,linux/arm64
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download binary artifacts
uses: actions/download-artifact@v4
with:
path: ./artifacts
- name: Setup binary files
run: |
mkdir -p target/x86_64-unknown-linux-musl/release
mkdir -p target/aarch64-unknown-linux-gnu/release
cp artifacts/rustfs-amd64/rustfs target/x86_64-unknown-linux-musl/release/
cp artifacts/rustfs-arm64/rustfs target/aarch64-unknown-linux-gnu/release/
chmod +x target/*/release/rustfs
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -168,75 +125,86 @@ jobs:
uses: docker/setup-qemu-action@v3
- name: Login to Docker Hub
if: github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
if: needs.build-check.outputs.should_push == 'true' && secrets.DOCKERHUB_USERNAME != ''
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
if: needs.build-check.outputs.should_push == 'true'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set Dockerfile and context
id: dockerfile
run: |
case "${{ matrix.image-type }}" in
production)
echo "dockerfile=Dockerfile" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=" >> $GITHUB_OUTPUT
;;
ubuntu)
echo "dockerfile=.docker/Dockerfile.ubuntu22.04" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=-ubuntu22.04" >> $GITHUB_OUTPUT
;;
rockylinux)
echo "dockerfile=.docker/Dockerfile.rockylinux9.3" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=-rockylinux9.3" >> $GITHUB_OUTPUT
;;
devenv)
echo "dockerfile=.docker/Dockerfile.devenv" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=-devenv" >> $GITHUB_OUTPUT
;;
esac
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.REGISTRY_IMAGE_DOCKERHUB }}
${{ env.REGISTRY_IMAGE_GHCR }}
${{ env.REGISTRY_DOCKERHUB }}
${{ env.REGISTRY_GHCR }}
tags: |
type=ref,event=branch,suffix=${{ steps.dockerfile.outputs.suffix }}
type=ref,event=pr,suffix=${{ steps.dockerfile.outputs.suffix }}
type=semver,pattern={{version}},suffix=${{ steps.dockerfile.outputs.suffix }}
type=semver,pattern={{major}}.{{minor}},suffix=${{ steps.dockerfile.outputs.suffix }}
type=semver,pattern={{major}},suffix=${{ steps.dockerfile.outputs.suffix }}
type=raw,value=latest,suffix=${{ steps.dockerfile.outputs.suffix }},enable={{is_default_branch}}
type=ref,event=branch,suffix=-${{ matrix.variant.name }}
type=ref,event=pr,suffix=-${{ matrix.variant.name }}
type=semver,pattern={{version}},suffix=-${{ matrix.variant.name }}
type=semver,pattern={{major}}.{{minor}},suffix=-${{ matrix.variant.name }}
type=raw,value=latest,suffix=-${{ matrix.variant.name }},enable={{is_default_branch}}
flavor: |
latest=false
- name: Build and push multi-arch Docker image
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: ${{ steps.dockerfile.outputs.context }}
file: ${{ steps.dockerfile.outputs.dockerfile }}
platforms: linux/amd64,linux/arm64
push: ${{ (github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))) || github.event.inputs.push_to_registry == 'true' }}
context: .
file: ${{ matrix.variant.dockerfile }}
platforms: ${{ matrix.variant.platforms }}
push: ${{ needs.build-check.outputs.should_push == 'true' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha,scope=${{ matrix.image-type }}
cache-to: type=gha,mode=max,scope=${{ matrix.image-type }}
cache-from: type=gha,scope=docker-${{ matrix.variant.name }}
cache-to: type=gha,mode=max,scope=docker-${{ matrix.variant.name }}
build-args: |
BUILDTIME=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.created'] }}
VERSION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.version'] }}
REVISION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
# Create manifest for main production image
create-manifest:
name: Create Manifest
needs: [build-check, build-docker]
if: needs.build-check.outputs.should_push == 'true' && startsWith(github.ref, 'refs/tags/')
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
if: secrets.DOCKERHUB_USERNAME != ''
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create and push manifest
run: |
VERSION=${GITHUB_REF#refs/tags/}
# Create main image tag (without variant suffix)
if [[ -n "${{ secrets.DOCKERHUB_USERNAME }}" ]]; then
docker buildx imagetools create \
-t ${{ env.REGISTRY_DOCKERHUB }}:${VERSION} \
-t ${{ env.REGISTRY_DOCKERHUB }}:latest \
${{ env.REGISTRY_DOCKERHUB }}:${VERSION}-production
fi
docker buildx imagetools create \
-t ${{ env.REGISTRY_GHCR }}:${VERSION} \
-t ${{ env.REGISTRY_GHCR }}:latest \
${{ env.REGISTRY_GHCR }}:${VERSION}-production

18
.github/workflows/issue-translator.yml vendored Normal file
View File

@@ -0,0 +1,18 @@
name: 'issue-translator'
on:
issue_comment:
types: [created]
issues:
types: [opened]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: usthe/issues-translate-action@v2.7
with:
IS_MODIFY_TITLE: false
# not require, default false, . Decide whether to modify the issue title
# if true, the robot account @Issues-translate-bot must have modification permissions, invite @Issues-translate-bot to your project or use your custom bot.
CUSTOM_BOT_NOTE: Bot detected the issue body's language is not English, translate it automatically.
# not require. Customize the translation robot prefix message.

140
.github/workflows/performance.yml vendored Normal file
View File

@@ -0,0 +1,140 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: Performance Testing
on:
push:
branches: [main]
paths:
- '**/*.rs'
- '**/Cargo.toml'
- '**/Cargo.lock'
- '.github/workflows/performance.yml'
workflow_dispatch:
inputs:
profile_duration:
description: "Profiling duration in seconds"
required: false
default: "120"
type: string
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
jobs:
performance-profile:
name: Performance Profiling
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: nightly
cache-shared-key: perf-${{ hashFiles('**/Cargo.lock') }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install additional nightly components
run: rustup component add llvm-tools-preview
- name: Install samply profiler
uses: taiki-e/cache-cargo-install-action@v2
with:
tool: samply
- name: Configure kernel for profiling
run: echo '1' | sudo tee /proc/sys/kernel/perf_event_paranoid
- name: Prepare test environment
run: |
# Create test volumes
for i in {0..4}; do
mkdir -p ./target/volume/test$i
done
# Set environment variables
echo "RUSTFS_VOLUMES=./target/volume/test{0...4}" >> $GITHUB_ENV
echo "RUST_LOG=rustfs=info,ecstore=info,s3s=info,iam=info,rustfs-obs=info" >> $GITHUB_ENV
- name: Download static files
run: |
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
-o tempfile.zip --retry 3 --retry-delay 5
unzip -o tempfile.zip -d ./rustfs/static
rm tempfile.zip
- name: Build with profiling optimizations
run: |
RUSTFLAGS="-C force-frame-pointers=yes -C debug-assertions=off" \
cargo +nightly build --profile profiling -p rustfs --bins
- name: Run performance profiling
id: profiling
run: |
DURATION="${{ github.event.inputs.profile_duration || '120' }}"
echo "Running profiling for ${DURATION} seconds..."
timeout "${DURATION}s" samply record \
--output samply-profile.json \
./target/profiling/rustfs ${RUSTFS_VOLUMES} || true
if [ -f "samply-profile.json" ]; then
echo "profile_generated=true" >> $GITHUB_OUTPUT
echo "Profile generated successfully"
else
echo "profile_generated=false" >> $GITHUB_OUTPUT
echo "::warning::Profile data not generated"
fi
- name: Upload profile data
if: steps.profiling.outputs.profile_generated == 'true'
uses: actions/upload-artifact@v4
with:
name: performance-profile-${{ github.run_number }}
path: samply-profile.json
retention-days: 30
benchmark:
name: Benchmark Tests
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: stable
cache-shared-key: bench-${{ hashFiles('**/Cargo.lock') }}
github-token: ${{ secrets.GITHUB_TOKEN }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Run benchmarks
run: |
cargo bench --package ecstore --bench comparison_benchmark -- --output-format json | \
tee benchmark-results.json
- name: Upload benchmark results
uses: actions/upload-artifact@v4
with:
name: benchmark-results-${{ github.run_number }}
path: benchmark-results.json
retention-days: 7

View File

@@ -1,82 +0,0 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: Profile with Samply
on:
push:
branches: [ main ]
workflow_dispatch:
jobs:
profile:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4.2.2
- uses: dtolnay/rust-toolchain@nightly
with:
components: llvm-tools-preview
- uses: actions/cache@v4
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install samply
uses: taiki-e/cache-cargo-install-action@v2
with:
tool: samply
- name: Configure kernel for profiling
run: echo '1' | sudo tee /proc/sys/kernel/perf_event_paranoid
- name: Create test volumes
run: |
for i in {0..4}; do
mkdir -p ./target/volume/test$i
done
- name: Set environment variables
run: |
echo "RUSTFS_VOLUMES=./target/volume/test{0...4}" >> $GITHUB_ENV
echo "RUST_LOG=rustfs=info,ecstore=info,s3s=info,iam=info,rustfs-obs=info" >> $GITHUB_ENV
- name: Download static files
run: |
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" -o tempfile.zip && unzip -o tempfile.zip -d ./rustfs/static && rm tempfile.zip
- name: Build with profiling
run: |
RUSTFLAGS="-C force-frame-pointers=yes" cargo +nightly build --profile profiling -p rustfs --bins
- name: Run samply with timeout
id: samply_record
run: |
timeout 120s samply record --output samply.json ./target/profiling/rustfs ${RUSTFS_VOLUMES}
if [ -f "samply.json" ]; then
echo "profile_generated=true" >> $GITHUB_OUTPUT
else
echo "profile_generated=false" >> $GITHUB_OUTPUT
echo "::error::Failed to generate profile data"
fi
- name: Upload profile data
if: steps.samply_record.outputs.profile_generated == 'true'
uses: actions/upload-artifact@v4
with:
name: samply-profile-${{ github.run_number }}
path: samply.json
retention-days: 7

39
CLA.md Normal file
View File

@@ -0,0 +1,39 @@
RustFS Individual Contributor License Agreement
Thank you for your interest in contributing documentation and related software code to a project hosted or managed by RustFS. In order to clarify the intellectual property license granted with Contributions from any person or entity, RustFS must have a Contributor License Agreement (“CLA”) on file that has been signed by each Contributor, indicating agreement to the license terms below. This version of the Contributor License Agreement allows an individual to submit Contributions to the applicable project. If you are making a submission on behalf of a legal entity, then you should sign the separate Corporate Contributor License Agreement.
You accept and agree to the following terms and conditions for Your present and future Contributions submitted to RustFS. You hereby irrevocably assign and transfer to RustFS all right, title, and interest in and to Your Contributions, including all copyrights and other intellectual property rights therein.
Definitions
“You” (or “Your”) shall mean the copyright owner or legal entity authorized by the copyright owner that is making this Agreement with RustFS. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are under common control with that entity are considered to be a single Contributor. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
“Contribution” shall mean any original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to RustFS for inclusion in, or documentation of, any of the products or projects owned or managed by RustFS (the “Work”), including without limitation any Work described in Schedule A. For the purposes of this definition, “submitted” means any form of electronic or written communication sent to RustFS or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, RustFS for the purpose of discussing and improving the Work.
Assignment of Copyright
Subject to the terms and conditions of this Agreement, You hereby irrevocably assign and transfer to RustFS all right, title, and interest in and to Your Contributions, including all copyrights and other intellectual property rights therein, for the entire term of such rights, including all renewals and extensions. You agree to execute all documents and take all actions as may be reasonably necessary to vest in RustFS the ownership of Your Contributions and to assist RustFS in perfecting, maintaining, and enforcing its rights in Your Contributions.
Grant of Patent License
Subject to the terms and conditions of this Agreement, You hereby grant to RustFS and to recipients of documentation and software distributed by RustFS a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) was submitted. If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed.
You represent that you are legally entitled to grant the above assignment and license.
You represent that each of Your Contributions is Your original creation (see section 7 for submissions on behalf of others). You represent that Your Contribution submissions include complete details of any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which you are personally aware and which are associated with any part of Your Contributions.
You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, You provide Your Contributions on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON- INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
Should You wish to submit work that is not Your original creation, You may submit it to RustFS separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as “Submitted on behalf of a third-party: [named here]”.
You agree to notify RustFS of any facts or circumstances of which you become aware that would make these representations inaccurate in any respect.
Modification of CLA
RustFS reserves the right to update or modify this CLA in the future. Any updates or modifications to this CLA shall apply only to Contributions made after the effective date of the revised CLA. Contributions made prior to the update shall remain governed by the version of the CLA that was in effect at the time of submission. It is not necessary for all Contributors to re-sign the CLA when the CLA is updated or modified.
Governing Law and Dispute Resolution
This Agreement will be governed by and construed in accordance with the laws of the Peoples Republic of China excluding that body of laws known as conflict of laws. The parties expressly agree that the United Nations Convention on Contracts for the International Sale of Goods will not apply. Any legal action or proceeding arising under this Agreement will be brought exclusively in the courts located in Beijing, China, and the parties hereby irrevocably consent to the personal jurisdiction and venue therein.
For your reading convenience, this Agreement is written in parallel English and Chinese sections. To the extent there is a conflict between the English and Chinese sections, the English sections shall govern.

128
CODE_OF_CONDUCT.md Normal file
View File

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
hello@rustfs.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.

View File

@@ -11,21 +11,25 @@
Before every commit, you **MUST**:
1. **Format your code**:
```bash
cargo fmt --all
```
2. **Verify formatting**:
```bash
cargo fmt --all --check
```
3. **Pass clippy checks**:
```bash
cargo clippy --all-targets --all-features -- -D warnings
```
4. **Ensure compilation**:
```bash
cargo check --all-targets
```
@@ -136,6 +140,7 @@ Install the `rust-analyzer` extension and add to your `settings.json`:
#### Other IDEs
Configure your IDE to:
- Use the project's `rustfmt.toml` configuration
- Format on save
- Run clippy checks

88
Cargo.lock generated
View File

@@ -472,9 +472,9 @@ dependencies = [
[[package]]
name = "async-channel"
version = "2.3.1"
version = "2.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89b47800b0be77592da0afd425cc03468052844aff33b84e33cc696f64e77b6a"
checksum = "16c74e56284d2188cabb6ad99603d1ace887a5d7e7b695d01b728155ed9ed427"
dependencies = [
"concurrent-queue",
"event-listener-strategy",
@@ -733,9 +733,9 @@ dependencies = [
[[package]]
name = "aws-sdk-s3"
version = "1.95.0"
version = "1.96.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a316e3c4c38837084dfbf87c0fc6ea016b3dc3e1f867d9d7f5eddfe47e5cae37"
checksum = "6e25d24de44b34dcdd5182ac4e4c6f07bcec2661c505acef94c0d293b65505fe"
dependencies = [
"aws-credential-types",
"aws-runtime",
@@ -2058,7 +2058,6 @@ dependencies = [
"ciborium",
"clap",
"criterion-plot",
"futures",
"is-terminal",
"itertools 0.10.5",
"num-traits",
@@ -2071,7 +2070,6 @@ dependencies = [
"serde_derive",
"serde_json",
"tinytemplate",
"tokio",
"walkdir",
]
@@ -3471,7 +3469,7 @@ checksum = "92773504d58c093f6de2459af4af33faa518c13451eb8f2b5698ed3d36e7c813"
[[package]]
name = "e2e_test"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"bytes",
"flatbuffers 25.2.10",
@@ -4948,6 +4946,17 @@ version = "3.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8bb03732005da905c88227371639bf1ad885cc712789c011c31c5fb3ab3ccf02"
[[package]]
name = "io-uring"
version = "0.7.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b86e202f00093dcba4275d4636b93ef9dd75d025ae560d2521b45ea28ab49013"
dependencies = [
"bitflags 2.9.1",
"cfg-if",
"libc",
]
[[package]]
name = "ipnet"
version = "2.11.0"
@@ -5625,9 +5634,9 @@ checksum = "490cc448043f947bae3cbee9c203358d62dbee0db12107a74be5c30ccfd09771"
[[package]]
name = "memchr"
version = "2.7.4"
version = "2.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3"
checksum = "32a282da65faaf38286cf3be983213fcf1d2e2a58700e808f83f4ea9a4804bc0"
[[package]]
name = "memoffset"
@@ -7830,7 +7839,7 @@ dependencies = [
[[package]]
name = "rustfs"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"async-trait",
"atoi",
@@ -7899,7 +7908,7 @@ dependencies = [
[[package]]
name = "rustfs-appauth"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"base64-simd",
"rsa",
@@ -7909,7 +7918,7 @@ dependencies = [
[[package]]
name = "rustfs-common"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"lazy_static",
"tokio",
@@ -7918,7 +7927,7 @@ dependencies = [
[[package]]
name = "rustfs-config"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"const-str",
"serde",
@@ -7927,7 +7936,7 @@ dependencies = [
[[package]]
name = "rustfs-crypto"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"aes-gcm",
"argon2",
@@ -7945,7 +7954,7 @@ dependencies = [
[[package]]
name = "rustfs-ecstore"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"async-channel",
"async-trait",
@@ -8020,7 +8029,7 @@ dependencies = [
[[package]]
name = "rustfs-filemeta"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"byteorder",
"bytes",
@@ -8041,7 +8050,7 @@ dependencies = [
[[package]]
name = "rustfs-gui"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"chrono",
"dioxus",
@@ -8062,7 +8071,7 @@ dependencies = [
[[package]]
name = "rustfs-iam"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"arc-swap",
"async-trait",
@@ -8086,7 +8095,7 @@ dependencies = [
[[package]]
name = "rustfs-lock"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"async-trait",
"lazy_static",
@@ -8103,7 +8112,7 @@ dependencies = [
[[package]]
name = "rustfs-madmin"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"chrono",
"humantime",
@@ -8115,7 +8124,7 @@ dependencies = [
[[package]]
name = "rustfs-notify"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"async-trait",
"axum",
@@ -8144,7 +8153,7 @@ dependencies = [
[[package]]
name = "rustfs-obs"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"async-trait",
"chrono",
@@ -8177,7 +8186,7 @@ dependencies = [
[[package]]
name = "rustfs-policy"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"base64-simd",
"ipnetwork",
@@ -8196,7 +8205,7 @@ dependencies = [
[[package]]
name = "rustfs-protos"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"flatbuffers 25.2.10",
"prost",
@@ -8207,12 +8216,11 @@ dependencies = [
[[package]]
name = "rustfs-rio"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"aes-gcm",
"bytes",
"crc32fast",
"criterion",
"futures",
"http 1.3.1",
"md-5",
@@ -8256,7 +8264,7 @@ dependencies = [
[[package]]
name = "rustfs-s3select-api"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"async-trait",
"bytes",
@@ -8280,7 +8288,7 @@ dependencies = [
[[package]]
name = "rustfs-s3select-query"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"async-recursion",
"async-trait",
@@ -8298,21 +8306,25 @@ dependencies = [
[[package]]
name = "rustfs-signer"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"bytes",
"http 1.3.1",
"hyper 1.6.0",
"lazy_static",
"rand 0.9.1",
"rustfs-utils",
"s3s",
"serde",
"serde_urlencoded",
"tempfile",
"time",
"tracing",
]
[[package]]
name = "rustfs-utils"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"base64-simd",
"blake3",
@@ -8356,7 +8368,7 @@ dependencies = [
[[package]]
name = "rustfs-workers"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"tokio",
"tracing",
@@ -8364,7 +8376,7 @@ dependencies = [
[[package]]
name = "rustfs-zip"
version = "0.0.1"
version = "0.0.3"
dependencies = [
"async-compression",
"tokio",
@@ -8552,8 +8564,9 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "s3s"
version = "0.12.0-dev"
source = "git+https://github.com/Nugine/s3s.git?rev=4733cdfb27b2713e832967232cbff413bb768c10#4733cdfb27b2713e832967232cbff413bb768c10"
version = "0.12.0-minio-preview.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b630a6b9051328a0c185cacf723180ccd7936d08f1fda0b932a60b1b9cd860d"
dependencies = [
"arrayvec",
"async-trait",
@@ -9834,17 +9847,19 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
[[package]]
name = "tokio"
version = "1.45.1"
version = "1.46.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75ef51a33ef1da925cea3e4eb122833cb377c61439ca401b770f54902b806779"
checksum = "1140bb80481756a8cbe10541f37433b459c5aa1e727b4c020fbfebdc25bf3ec4"
dependencies = [
"backtrace",
"bytes",
"io-uring",
"libc",
"mio",
"parking_lot",
"pin-project-lite",
"signal-hook-registry",
"slab",
"socket2",
"tokio-macros",
"tracing",
@@ -10085,6 +10100,7 @@ dependencies = [
"futures-util",
"http 1.3.1",
"http-body 1.0.1",
"http-body-util",
"iri-string",
"pin-project-lite",
"tokio",

View File

@@ -44,7 +44,7 @@ edition = "2024"
license = "Apache-2.0"
repository = "https://github.com/rustfs/rustfs"
rust-version = "1.85"
version = "0.0.1"
version = "0.0.3"
[workspace.lints.rust]
unsafe_code = "deny"
@@ -53,37 +53,37 @@ unsafe_code = "deny"
all = "warn"
[workspace.dependencies]
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.1" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.1" }
rustfs-common = { path = "crates/common", version = "0.0.1" }
rustfs-crypto = { path = "crates/crypto", version = "0.0.1" }
rustfs-ecstore = { path = "crates/ecstore", version = "0.0.1" }
rustfs-iam = { path = "crates/iam", version = "0.0.1" }
rustfs-lock = { path = "crates/lock", version = "0.0.1" }
rustfs-madmin = { path = "crates/madmin", version = "0.0.1" }
rustfs-policy = { path = "crates/policy", version = "0.0.1" }
rustfs-protos = { path = "crates/protos", version = "0.0.1" }
rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.1" }
rustfs = { path = "./rustfs", version = "0.0.1" }
rustfs-zip = { path = "./crates/zip", version = "0.0.1" }
rustfs-config = { path = "./crates/config", version = "0.0.1" }
rustfs-obs = { path = "crates/obs", version = "0.0.1" }
rustfs-notify = { path = "crates/notify", version = "0.0.1" }
rustfs-utils = { path = "crates/utils", version = "0.0.1" }
rustfs-rio = { path = "crates/rio", version = "0.0.1" }
rustfs-filemeta = { path = "crates/filemeta", version = "0.0.1" }
rustfs-signer = { path = "crates/signer", version = "0.0.1" }
rustfs-workers = { path = "crates/workers", version = "0.0.1" }
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.3" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.3" }
rustfs-common = { path = "crates/common", version = "0.0.3" }
rustfs-crypto = { path = "crates/crypto", version = "0.0.3" }
rustfs-ecstore = { path = "crates/ecstore", version = "0.0.3" }
rustfs-iam = { path = "crates/iam", version = "0.0.3" }
rustfs-lock = { path = "crates/lock", version = "0.0.3" }
rustfs-madmin = { path = "crates/madmin", version = "0.0.3" }
rustfs-policy = { path = "crates/policy", version = "0.0.3" }
rustfs-protos = { path = "crates/protos", version = "0.0.3" }
rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.3" }
rustfs = { path = "./rustfs", version = "0.0.3" }
rustfs-zip = { path = "./crates/zip", version = "0.0.3" }
rustfs-config = { path = "./crates/config", version = "0.0.3" }
rustfs-obs = { path = "crates/obs", version = "0.0.3" }
rustfs-notify = { path = "crates/notify", version = "0.0.3" }
rustfs-utils = { path = "crates/utils", version = "0.0.3" }
rustfs-rio = { path = "crates/rio", version = "0.0.3" }
rustfs-filemeta = { path = "crates/filemeta", version = "0.0.3" }
rustfs-signer = { path = "crates/signer", version = "0.0.3" }
rustfs-workers = { path = "crates/workers", version = "0.0.3" }
aes-gcm = { version = "0.10.3", features = ["std"] }
arc-swap = "1.7.1"
argon2 = { version = "0.5.3", features = ["std"] }
atoi = "2.0.0"
async-channel = "2.3.1"
async-channel = "2.4.0"
async-recursion = "1.1.1"
async-trait = "0.1.88"
async-compression = { version = "0.4.0" }
atomic_enum = "0.3.0"
aws-sdk-s3 = "1.95.0"
aws-sdk-s3 = "1.96.0"
axum = "0.8.4"
axum-extra = "0.10.1"
axum-server = { version = "0.7.2", features = ["tls-rustls"] }
@@ -107,7 +107,7 @@ dioxus = { version = "0.6.3", features = ["router"] }
dirs = "6.0.0"
enumset = "1.1.6"
flatbuffers = "25.2.10"
flate2 = "1.1.1"
flate2 = "1.1.2"
flexi_logger = { version = "0.31.2", features = ["trc", "dont_minimize_extra_stacks"] }
form_urlencoded = "1.2.1"
futures = "0.3.31"
@@ -124,7 +124,7 @@ hyper-util = { version = "0.1.14", features = [
"server-auto",
"server-graceful",
] }
hyper-rustls = "0.27.5"
hyper-rustls = "0.27.7"
http = "1.3.1"
http-body = "1.0.1"
humantime = "2.2.0"
@@ -195,12 +195,12 @@ rmp-serde = "1.3.0"
rsa = "0.9.8"
rumqttc = { version = "0.24" }
rust-embed = { version = "8.7.2" }
rust-i18n = { version = "3.1.4" }
rust-i18n = { version = "3.1.5" }
rustfs-rsc = "2025.506.1"
rustls = { version = "0.23.28" }
rustls-pki-types = "1.12.0"
rustls-pemfile = "2.2.0"
s3s = { git = "https://github.com/Nugine/s3s.git", rev = "4733cdfb27b2713e832967232cbff413bb768c10" }
s3s = { version = "0.12.0-minio-preview.1" }
shadow-rs = { version = "1.2.0", default-features = false }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.140", features = ["raw_value"] }
@@ -225,7 +225,7 @@ time = { version = "0.3.41", features = [
"macros",
"serde",
] }
tokio = { version = "1.45.1", features = ["fs", "rt-multi-thread"] }
tokio = { version = "1.46.0", features = ["fs", "rt-multi-thread"] }
tokio-rustls = { version = "0.26.2", default-features = false }
tokio-stream = { version = "0.1.17" }
tokio-tar = "0.3.1"
@@ -251,7 +251,7 @@ uuid = { version = "1.17.0", features = [
wildmatch = { version = "2.4.0", features = ["serde"] }
winapi = { version = "0.3.9" }
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
zip = "2.2.0"
zip = "2.4.2"
zstd = "0.13.3"
[profile.wasm-dev]

View File

@@ -12,36 +12,40 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM alpine:latest
FROM alpine:3.18 AS builder
# Install runtime dependencies
RUN apk add --no-cache \
RUN apk add -U --no-cache \
ca-certificates \
tzdata \
&& rm -rf /var/cache/apk/*
curl \
bash \
unzip
# Create rustfs user and group
RUN addgroup -g 1000 rustfs && \
adduser -D -s /bin/sh -u 1000 -G rustfs rustfs
# Create data directories
RUN mkdir -p /data/rustfs && \
chown -R rustfs:rustfs /data
RUN curl -Lo /tmp/rustfs.zip https://dl.rustfs.com/artifacts/rustfs/rustfs-release-x86_64-unknown-linux-musl.latest.zip && \
unzip -o /tmp/rustfs.zip -d /tmp && \
tar -xzf /tmp/rustfs-x86_64-unknown-linux-musl.tar.gz -C /tmp && \
mv /tmp/rustfs-x86_64-unknown-linux-musl/bin/rustfs /rustfs && \
chmod +x /rustfs && \
rm -rf /tmp/*
# Copy binary based on target architecture
COPY --chown=rustfs:rustfs \
target/*/release/rustfs \
/usr/local/bin/rustfs
FROM alpine:3.18
RUN chmod +x /usr/local/bin/rustfs
RUN apk add -U --no-cache \
ca-certificates \
bash
# Switch to non-root user
USER rustfs
COPY --from=builder /rustfs /usr/local/bin/rustfs
ENV RUSTFS_ACCESS_KEY=rustfsadmin \
RUSTFS_SECRET_KEY=rustfsadmin \
RUSTFS_ADDRESS=":9000" \
RUSTFS_CONSOLE_ADDRESS=":9001" \
RUSTFS_CONSOLE_ENABLE=true \
RUST_LOG=warn
# Expose ports
EXPOSE 9000 9001
RUN mkdir -p /data
VOLUME /data
# Set default command
CMD ["rustfs", "/data"]

View File

@@ -1,14 +1,12 @@
[![RustFS](https://github.com/user-attachments/assets/547d72f7-d1f4-4763-b9a8-6040bad9251a)](https://rustfs.com)
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
<p align="center">RustFS is a high-performance distributed object storage software built using Rust</p>
<p align="center">
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<img alt="Github Contributors" src="https://img.shields.io/github/contributors/rustfs/rustfs"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/rustfs/rustfs"/>
<img alt="Discord" src="https://img.shields.io/discord/1107178041848909847?label=discord"/>
</p>
<p align="center">
@@ -19,7 +17,15 @@
</p>
<p align="center">
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a>
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a> |
<!-- Keep these links. Translations will automatically update with the README. -->
<a href="https://readme-i18n.com/rustfs/rustfs?lang=de">Deutsch</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=es">Español</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=fr">français</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ja">日本語</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ko">한국어</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=pt">Português</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ru">Русский</a>
</p>
RustFS is a high-performance distributed object storage software built using Rust, one of the most popular languages worldwide. Along with MinIO, it shares a range of advantages such as simplicity, S3 compatibility, open-source nature, support for data lakes, AI, and big data. Furthermore, it has a better and more user-friendly open-source license in comparison to other storage systems, being constructed under the Apache license. As Rust serves as its foundation, RustFS provides faster speed and safer distributed features for high-performance object storage.
@@ -63,14 +69,20 @@ Stress test server parameters
To get started with RustFS, follow these steps:
1. **Install RustFS**: Download the latest release from our [GitHub Releases](https://github.com/rustfs/rustfs/releases).
2. **Run RustFS**: Use the provided binary to start the server.
1. **One-click installation script (Option 1)**
```bash
./rustfs /data
curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
```
3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console.
2. **Docker Quick Start (Option 2)**
```bash
podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
```
3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console, default username and password is `rustfsadmin` .
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.

View File

@@ -1,14 +1,12 @@
[![RustFS](https://github.com/user-attachments/assets/547d72f7-d1f4-4763-b9a8-6040bad9251a)](https://rustfs.com)
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
<p align="center">RustFS 是一个使用 Rust 构建的高性能分布式对象存储软件</p >
<p align="center">
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<img alt="Github Contributors" src="https://img.shields.io/github/contributors/rustfs/rustfs"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/rustfs/rustfs"/>
<img alt="Discord" src="https://img.shields.io/discord/1107178041848909847?label=discord"/>
</p >
<p align="center">
@@ -63,14 +61,20 @@ RustFS 是一个使用 Rust全球最受欢迎的编程语言之一构建
要开始使用 RustFS请按照以下步骤操作
1. **安装 RustFS**:从我们的 [GitHub Releases](https://github.com/rustfs/rustfs/releases) 下载最新版本。
2. **运行 RustFS**:使用提供的二进制文件启动服务器。
1. **一键脚本快速启动 (方案一)**
```bash
./rustfs /data
curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
```
3. **访问控制台**:打开 Web 浏览器并导航到 `http://localhost:9001` 以访问 RustFS 控制台。
2. **Docker快速启动方案二**
```bash
podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
```
3. **访问控制台**:打开 Web 浏览器并导航到 `http://localhost:9001` 以访问 RustFS 控制台,默认的用户名和密码是 `rustfsadmin` 。
4. **创建存储桶**:使用控制台为您的对象创建新的存储桶。
5. **上传对象**:您可以直接通过控制台上传文件,或使用 S3 兼容的 API 与您的 RustFS 实例交互。

18
SECURITY.md Normal file
View File

@@ -0,0 +1,18 @@
# Security Policy
## Supported Versions
Use this section to tell people about which versions of your project are
currently being supported with security updates.
| Version | Supported |
| ------- | ------------------ |
| 1.x.x | :white_check_mark: |
## Reporting a Vulnerability
Use this section to tell people how to report a vulnerability.
Tell them where to go, how often they can expect to get an update on a
reported vulnerability, what to expect if the vulnerability is accepted or
declined, etc.

68
TODO.md
View File

@@ -1,68 +0,0 @@
# TODO LIST
## 基础存储
- [x] EC 可用读写数量判断 Read/WriteQuorum
- [ ] 优化后台并发执行,可中断,传引用?
- [x] 小文件存储到 metafile, inlinedata
- [x] 完善 bucketmeta
- [x] 对象锁
- [x] 边读写边 hash实现 reader 嵌套
- [x] 远程 rpc
- [x] 错误类型判断,程序中判断错误类型,如何统一错误
- [x] 优化 xlmeta, 自定义 msg 数据结构
- [ ] 优化 io.reader 参考 GetObjectNInfo 方便 io copy 如果 异步写,再平衡
- [ ] 代码优化 使用范型?
- [ ] 抽象出 metafile 存储
## 基础功能
- [ ] 桶操作
- [x] 创建 CreateBucket
- [x] 列表 ListBuckets
- [ ] 桶下面的文件列表 ListObjects
- [x] 简单实现功能
- [ ] 优化并发读取
- [ ] 删除
- [x] 详情 HeadBucket
- [ ] 文件操作
- [x] 上传 PutObject
- [x] 大文件上传
- [x] 创建分片上传 CreateMultipartUpload
- [x] 上传分片 PubObjectPart
- [x] 提交完成 CompleteMultipartUpload
- [x] 取消上传 AbortMultipartUpload
- [x] 下载 GetObject
- [x] 删除 DeleteObjects
- [ ] 版本控制
- [ ] 对象锁
- [ ] 复制 CopyObject
- [ ] 详情 HeadObject
- [ ] 对象预先签名get、put、head、post
## 扩展功能
- [ ] 用户管理
- [ ] Policy 管理
- [ ] AK/SK分配管理
- [ ] data scanner 统计和对象修复
- [ ] 桶配额
- [ ] 桶只读
- [ ] 桶复制
- [ ] 桶事件通知
- [ ] 桶公开、桶私有
- [ ] 对象生命周期管理
- [ ] prometheus 对接
- [ ] 日志收集和日志外发
- [ ] 对象压缩
- [ ] STS
- [ ] 分层阿里云、腾讯云、S3 远程对接)
## 性能优化
- [ ] bitrot impl AsyncRead/AsyncWrite
- [ ] erasure 并发读写
- [x] 完善删除逻辑,并发处理,先移动到回收站,
- [ ] 空间不足时清空回收站
- [ ] list_object 使用 reader 传输

37
crates/appauth/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS AppAuth - Application Authentication
<p align="center">
<strong>Application-level authentication and authorization module for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS AppAuth** provides application-level authentication and authorization capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- JWT-based authentication with secure token management
- RBAC (Role-Based Access Control) for fine-grained permissions
- Multi-tenant application isolation and management
- OAuth 2.0 and OpenID Connect integration
- API key management and rotation
- Session management with configurable expiration
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/common/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Common - Shared Components
<p align="center">
<strong>Shared components and common utilities module for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Common** provides shared components and common utilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Shared data structures and type definitions
- Common error handling and result types
- Utility functions used across modules
- Configuration structures and validation
- Logging and tracing infrastructure
- Cross-platform compatibility helpers
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/config/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Config - Configuration Management
<p align="center">
<strong>Configuration management and validation module for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Config** provides configuration management and validation capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Multi-format configuration support (TOML, YAML, JSON, ENV)
- Environment variable integration and override
- Configuration validation and type safety
- Hot-reload capabilities for dynamic updates
- Default value management and fallbacks
- Secure credential handling and encryption
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/crypto/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Crypto - Cryptographic Operations
<p align="center">
<strong>High-performance cryptographic operations module for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Crypto** provides high-performance cryptographic operations for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- AES-GCM encryption with hardware acceleration
- RSA and ECDSA digital signature support
- Secure hash functions (SHA-256, BLAKE3)
- Key derivation and management utilities
- Stream ciphers for large data encryption
- Hardware security module integration
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

64
crates/ecstore/README.md Normal file
View File

@@ -0,0 +1,64 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS ECStore - Erasure Coding Storage
<p align="center">
<strong>High-performance erasure coding storage engine for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS ECStore** provides erasure coding storage capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Reed-Solomon erasure coding implementation
- Configurable redundancy levels (N+K schemes)
- Automatic data healing and reconstruction
- Multi-drive support with intelligent placement
- Parallel encoding/decoding for performance
- Efficient disk space utilization
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
```
Copyright 2024 RustFS Team
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
---
<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>
<p align="center">
Made with ❤️ by the RustFS Storage Team
</p>

View File

@@ -1,103 +1,19 @@
# ECStore - Erasure Coding Storage
ECStore provides erasure coding functionality for the RustFS project, using high-performance Reed-Solomon SIMD
implementation for optimal performance.
ECStore provides erasure coding functionality for the RustFS project, using high-performance Reed-Solomon SIMD implementation for optimal performance.
## Reed-Solomon Implementation
## Features
### SIMD Backend (Only)
- **Reed-Solomon Implementation**: High-performance SIMD-optimized erasure coding
- **Cross-Platform Compatibility**: Support for x86_64, aarch64, and other architectures
- **Performance Optimized**: SIMD instructions for maximum throughput
- **Thread Safety**: Safe concurrent access with caching optimizations
- **Scalable**: Excellent performance for high-throughput scenarios
- **Performance**: Uses SIMD optimization for high-performance encoding/decoding
- **Compatibility**: Works with any shard size through SIMD implementation
- **Reliability**: High-performance SIMD implementation for large data processing
- **Use case**: Optimized for maximum performance in large data processing scenarios
## Documentation
### Usage Example
For complete documentation, examples, and usage information, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
```rust
use rustfs_ecstore::erasure_coding::Erasure;
## License
// Create erasure coding instance
// 4 data shards, 2 parity shards, 1KB block size
let erasure = Erasure::new(4, 2, 1024);
// Encode data
let data = b"hello world from rustfs erasure coding";
let shards = erasure.encode_data(data) ?;
// Simulate loss of one shard
let mut shards_opt: Vec<Option<Vec<u8> > > = shards
.iter()
.map( | b| Some(b.to_vec()))
.collect();
shards_opt[2] = None; // Lose shard 2
// Reconstruct missing data
erasure.decode_data( & mut shards_opt) ?;
// Recover original data
let mut recovered = Vec::new();
for shard in shards_opt.iter().take(4) { // Only data shards
recovered.extend_from_slice(shard.as_ref().unwrap());
}
recovered.truncate(data.len());
assert_eq!(&recovered, data);
```
## Performance Considerations
### SIMD Implementation Benefits
- **High Throughput**: Optimized for large block sizes (>= 1KB recommended)
- **CPU Optimization**: Leverages modern CPU SIMD instructions
- **Scalability**: Excellent performance for high-throughput scenarios
### Implementation Details
#### `reed-solomon-simd`
- **Instance Caching**: Encoder/decoder instances are cached and reused for optimal performance
- **Thread Safety**: Thread-safe with RwLock-based caching
- **SIMD Optimization**: Leverages CPU SIMD instructions for maximum performance
- **Reset Capability**: Cached instances are reset for different parameters, avoiding unnecessary allocations
### Performance Tips
1. **Batch Operations**: When possible, batch multiple small operations into larger blocks
2. **Block Size Optimization**: Use block sizes that are multiples of 64 bytes for optimal SIMD performance
3. **Memory Allocation**: Pre-allocate buffers when processing multiple blocks
4. **Cache Warming**: Initial operations may be slower due to cache setup, subsequent operations benefit from caching
## Cross-Platform Compatibility
The SIMD implementation supports:
- x86_64 with advanced SIMD instructions (AVX2, SSE)
- aarch64 (ARM64) with NEON SIMD optimizations
- Other architectures with fallback implementations
The implementation automatically selects the best available SIMD instructions for the target platform, providing optimal
performance across different architectures.
## Testing and Benchmarking
Run performance benchmarks:
```bash
# Run erasure coding benchmarks
cargo bench --bench erasure_benchmark
# Run comparison benchmarks
cargo bench --bench comparison_benchmark
# Generate benchmark reports
./run_benchmarks.sh
```
## Error Handling
All operations return `Result` types with comprehensive error information:
- Encoding errors: Invalid parameters, insufficient memory
- Decoding errors: Too many missing shards, corrupted data
- Configuration errors: Invalid shard counts, unsupported parameters
This project is licensed under the Apache License, Version 2.0.

View File

@@ -1,4 +1,3 @@
#![allow(unused_imports)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,6 +11,7 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
@@ -41,7 +41,7 @@ const ERR_LIFECYCLE_DUPLICATE_ID: &str = "Rule ID must be unique. Found same ID
const _ERR_XML_NOT_WELL_FORMED: &str =
"The XML you provided was not well-formed or did not validate against our published schema";
const ERR_LIFECYCLE_BUCKET_LOCKED: &str =
"ExpiredObjectAllVersions element and DelMarkerExpiration action cannot be used on an object locked bucket";
"ExpiredObjectAllVersions element and DelMarkerExpiration action cannot be used on an retention bucket";
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum IlmAction {
@@ -102,30 +102,30 @@ impl RuleValidate for LifecycleRule {
}
fn validate_status(&self) -> Result<()> {
if self.Status.len() == 0 {
return errEmptyRuleStatus;
if self.status.len() == 0 {
return ErrEmptyRuleStatus;
}
if self.Status != Enabled && self.Status != Disabled {
return errInvalidRuleStatus;
if self.status != Enabled && self.status != Disabled {
return ErrInvalidRuleStatus;
}
Ok(())
}
fn validate_expiration(&self) -> Result<()> {
self.Expiration.Validate();
self.expiration.validate();
}
fn validate_noncurrent_expiration(&self) -> Result<()> {
self.NoncurrentVersionExpiration.Validate()
self.noncurrent_version_expiration.validate()
}
fn validate_prefix_and_filter(&self) -> Result<()> {
if !self.Prefix.set && self.Filter.IsEmpty() || self.Prefix.set && !self.Filter.IsEmpty() {
return errXMLNotWellFormed;
if !self.prefix.set && self.Filter.isempty() || self.prefix.set && !self.filter.isempty() {
return ErrXMLNotWellFormed;
}
if !self.Prefix.set {
return self.Filter.Validate();
if !self.prefix.set {
return self.filter.validate();
}
Ok(())
}
@@ -267,7 +267,7 @@ impl Lifecycle for BucketLifecycleConfiguration {
r.validate()?;
if let Some(expiration) = r.expiration.as_ref() {
if let Some(expired_object_delete_marker) = expiration.expired_object_delete_marker {
if lr_retention && (!expired_object_delete_marker) {
if lr_retention && (expired_object_delete_marker) {
return Err(std::io::Error::other(ERR_LIFECYCLE_BUCKET_LOCKED));
}
}

View File

@@ -20,12 +20,12 @@
#![allow(clippy::all)]
use lazy_static::lazy_static;
use rustfs_utils::HashAlgorithm;
use std::collections::HashMap;
use crate::client::{api_put_object::PutObjectOptions, api_s3_datatypes::ObjectPart};
use crate::{disk::DiskAPI, store_api::GetObjectReader};
use rustfs_utils::crypto::{base64_decode, base64_encode};
use rustfs_utils::hasher::{Hasher, Sha256};
use s3s::header::{
X_AMZ_CHECKSUM_ALGORITHM, X_AMZ_CHECKSUM_CRC32, X_AMZ_CHECKSUM_CRC32C, X_AMZ_CHECKSUM_SHA1, X_AMZ_CHECKSUM_SHA256,
};
@@ -133,7 +133,7 @@ impl ChecksumMode {
}
}
pub fn hasher(&self) -> Result<Box<dyn Hasher>, std::io::Error> {
pub fn hasher(&self) -> Result<HashAlgorithm, std::io::Error> {
match /*C_ChecksumMask & **/self {
/*ChecksumMode::ChecksumCRC32 => {
return Ok(Box::new(crc32fast::Hasher::new()));
@@ -145,7 +145,7 @@ impl ChecksumMode {
return Ok(Box::new(sha1::new()));
}*/
ChecksumMode::ChecksumSHA256 => {
return Ok(Box::new(Sha256::new()));
return Ok(HashAlgorithm::SHA256);
}
/*ChecksumMode::ChecksumCRC64NVME => {
return Ok(Box::new(crc64nvme.New());
@@ -170,8 +170,8 @@ impl ChecksumMode {
return Ok("".to_string());
}
let mut h = self.hasher()?;
h.write(b);
Ok(base64_encode(h.sum().as_bytes()))
let hash = h.hash_encode(b);
Ok(base64_encode(hash.as_ref()))
}
pub fn to_string(&self) -> String {
@@ -201,15 +201,15 @@ impl ChecksumMode {
}
}
pub fn check_sum_reader(&self, r: GetObjectReader) -> Result<Checksum, std::io::Error> {
let mut h = self.hasher()?;
Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
}
// pub fn check_sum_reader(&self, r: GetObjectReader) -> Result<Checksum, std::io::Error> {
// let mut h = self.hasher()?;
// Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
// }
pub fn check_sum_bytes(&self, b: &[u8]) -> Result<Checksum, std::io::Error> {
let mut h = self.hasher()?;
Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
}
// pub fn check_sum_bytes(&self, b: &[u8]) -> Result<Checksum, std::io::Error> {
// let mut h = self.hasher()?;
// Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
// }
pub fn composite_checksum(&self, p: &mut [ObjectPart]) -> Result<Checksum, std::io::Error> {
if !self.can_composite() {
@@ -227,10 +227,10 @@ impl ChecksumMode {
let c = self.base();
let crc_bytes = Vec::<u8>::with_capacity(p.len() * self.raw_byte_len() as usize);
let mut h = self.hasher()?;
h.write(&crc_bytes);
let hash = h.hash_encode(crc_bytes.as_ref());
Ok(Checksum {
checksum_type: self.clone(),
r: h.sum().as_bytes().to_vec(),
r: hash.as_ref().to_vec(),
computed: false,
})
}

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -0,0 +1,184 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use s3s::dto::Owner;
use std::collections::HashMap;
use std::io::Cursor;
use tokio::io::BufReader;
use crate::client::{
api_error_response::{err_invalid_argument, http_resp_to_error_response},
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
#[derive(Clone, Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Grantee {
pub id: String,
pub display_name: String,
pub uri: String,
}
#[derive(Clone, Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Grant {
pub grantee: Grantee,
pub permission: String,
}
#[derive(Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct AccessControlList {
pub grant: Vec<Grant>,
pub permission: String,
}
#[derive(Debug, Default, serde::Deserialize)]
pub struct AccessControlPolicy {
#[serde(skip)]
owner: Owner,
pub access_control_list: AccessControlList,
}
impl TransitionClient {
pub async fn get_object_acl(&self, bucket_name: &str, object_name: &str) -> Result<ObjectInfo, std::io::Error> {
let mut url_values = HashMap::new();
url_values.insert("acl".to_string(), "".to_string());
let mut resp = self
.execute_method(
http::Method::GET,
&mut RequestMetadata {
bucket_name: bucket_name.to_string(),
object_name: object_name.to_string(),
query_values: url_values,
custom_header: HeaderMap::new(),
content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
content_body: ReaderImpl::Body(Bytes::new()),
content_length: 0,
content_md5_base64: "".to_string(),
stream_sha256: false,
trailer: HeaderMap::new(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),
bucket_location: Default::default(),
expires: Default::default(),
},
)
.await?;
if resp.status() != http::StatusCode::OK {
let b = resp.body().bytes().expect("err").to_vec();
return Err(std::io::Error::other(http_resp_to_error_response(resp, b, bucket_name, object_name)));
}
let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
let mut res = match serde_xml_rs::from_str::<AccessControlPolicy>(&String::from_utf8(b).unwrap()) {
Ok(result) => result,
Err(err) => {
return Err(std::io::Error::other(err.to_string()));
}
};
let mut obj_info = self
.stat_object(bucket_name, object_name, &GetObjectOptions::default())
.await?;
obj_info.owner.display_name = res.owner.display_name.clone();
obj_info.owner.id = res.owner.id.clone();
//obj_info.grant.extend(res.access_control_list.grant);
let canned_acl = get_canned_acl(&res);
if canned_acl != "" {
obj_info
.metadata
.insert("X-Amz-Acl", HeaderValue::from_str(&canned_acl).unwrap());
return Ok(obj_info);
}
let grant_acl = get_amz_grant_acl(&res);
/*for (k, v) in grant_acl {
obj_info.metadata.insert(HeaderName::from_bytes(k.as_bytes()).unwrap(), HeaderValue::from_str(&v.to_string()).unwrap());
}*/
Ok(obj_info)
}
}
fn get_canned_acl(ac_policy: &AccessControlPolicy) -> String {
let grants = ac_policy.access_control_list.grant.clone();
if grants.len() == 1 {
if grants[0].grantee.uri == "" && grants[0].permission == "FULL_CONTROL" {
return "private".to_string();
}
} else if grants.len() == 2 {
for g in grants {
if g.grantee.uri == "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" && &g.permission == "READ" {
return "authenticated-read".to_string();
}
if g.grantee.uri == "http://acs.amazonaws.com/groups/global/AllUsers" && &g.permission == "READ" {
return "public-read".to_string();
}
if g.permission == "READ" && g.grantee.id == ac_policy.owner.id.clone().unwrap() {
return "bucket-owner-read".to_string();
}
}
} else if grants.len() == 3 {
for g in grants {
if g.grantee.uri == "http://acs.amazonaws.com/groups/global/AllUsers" && g.permission == "WRITE" {
return "public-read-write".to_string();
}
}
}
"".to_string()
}
pub fn get_amz_grant_acl(ac_policy: &AccessControlPolicy) -> HashMap<String, Vec<String>> {
let grants = ac_policy.access_control_list.grant.clone();
let mut res = HashMap::<String, Vec<String>>::new();
for g in grants {
let mut id = "id=".to_string();
id.push_str(&g.grantee.id);
let permission: &str = &g.permission;
match permission {
"READ" => {
res.entry("X-Amz-Grant-Read".to_string()).or_insert(vec![]).push(id);
}
"WRITE" => {
res.entry("X-Amz-Grant-Write".to_string()).or_insert(vec![]).push(id);
}
"READ_ACP" => {
res.entry("X-Amz-Grant-Read-Acp".to_string()).or_insert(vec![]).push(id);
}
"WRITE_ACP" => {
res.entry("X-Amz-Grant-Write-Acp".to_string()).or_insert(vec![]).push(id);
}
"FULL_CONTROL" => {
res.entry("X-Amz-Grant-Full-Control".to_string()).or_insert(vec![]).push(id);
}
_ => (),
}
}
res
}

View File

@@ -0,0 +1,244 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use std::collections::HashMap;
use std::io::Cursor;
use time::OffsetDateTime;
use tokio::io::BufReader;
use crate::client::constants::{GET_OBJECT_ATTRIBUTES_MAX_PARTS, GET_OBJECT_ATTRIBUTES_TAGS, ISO8601_DATEFORMAT};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
use s3s::header::{
X_AMZ_DELETE_MARKER, X_AMZ_MAX_PARTS, X_AMZ_METADATA_DIRECTIVE, X_AMZ_OBJECT_ATTRIBUTES, X_AMZ_PART_NUMBER_MARKER,
X_AMZ_REQUEST_CHARGED, X_AMZ_RESTORE, X_AMZ_VERSION_ID,
};
use s3s::{Body, dto::Owner};
use crate::client::{
api_error_response::err_invalid_argument,
api_get_object_acl::AccessControlPolicy,
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
pub struct ObjectAttributesOptions {
pub max_parts: i64,
pub version_id: String,
pub part_number_marker: i64,
//server_side_encryption: encrypt::ServerSide,
}
pub struct ObjectAttributes {
pub version_id: String,
pub last_modified: OffsetDateTime,
pub object_attributes_response: ObjectAttributesResponse,
}
impl ObjectAttributes {
fn new() -> Self {
Self {
version_id: "".to_string(),
last_modified: OffsetDateTime::now_utc(),
object_attributes_response: ObjectAttributesResponse::new(),
}
}
}
#[derive(Debug, Default, serde::Deserialize)]
pub struct Checksum {
checksum_crc32: String,
checksum_crc32c: String,
checksum_sha1: String,
checksum_sha256: String,
}
impl Checksum {
fn new() -> Self {
Self {
checksum_crc32: "".to_string(),
checksum_crc32c: "".to_string(),
checksum_sha1: "".to_string(),
checksum_sha256: "".to_string(),
}
}
}
#[derive(Debug, Default, serde::Deserialize)]
pub struct ObjectParts {
pub parts_count: i64,
pub part_number_marker: i64,
pub next_part_number_marker: i64,
pub max_parts: i64,
is_truncated: bool,
parts: Vec<ObjectAttributePart>,
}
impl ObjectParts {
fn new() -> Self {
Self {
parts_count: 0,
part_number_marker: 0,
next_part_number_marker: 0,
max_parts: 0,
is_truncated: false,
parts: Vec::new(),
}
}
}
#[derive(Debug, Default, serde::Deserialize)]
pub struct ObjectAttributesResponse {
pub etag: String,
pub storage_class: String,
pub object_size: i64,
pub checksum: Checksum,
pub object_parts: ObjectParts,
}
impl ObjectAttributesResponse {
fn new() -> Self {
Self {
etag: "".to_string(),
storage_class: "".to_string(),
object_size: 0,
checksum: Checksum::new(),
object_parts: ObjectParts::new(),
}
}
}
#[derive(Debug, Default, serde::Deserialize)]
struct ObjectAttributePart {
checksum_crc32: String,
checksum_crc32c: String,
checksum_sha1: String,
checksum_sha256: String,
part_number: i64,
size: i64,
}
impl ObjectAttributes {
pub async fn parse_response(&mut self, resp: &mut http::Response<Body>) -> Result<(), std::io::Error> {
let h = resp.headers();
let mod_time = OffsetDateTime::parse(h.get("Last-Modified").unwrap().to_str().unwrap(), ISO8601_DATEFORMAT).unwrap(); //RFC7231Time
self.last_modified = mod_time;
self.version_id = h.get(X_AMZ_VERSION_ID).unwrap().to_str().unwrap().to_string();
let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
let mut response = match serde_xml_rs::from_str::<ObjectAttributesResponse>(&String::from_utf8(b).unwrap()) {
Ok(result) => result,
Err(err) => {
return Err(std::io::Error::other(err.to_string()));
}
};
self.object_attributes_response = response;
Ok(())
}
}
impl TransitionClient {
pub async fn get_object_attributes(
&self,
bucket_name: &str,
object_name: &str,
opts: ObjectAttributesOptions,
) -> Result<ObjectAttributes, std::io::Error> {
let mut url_values = HashMap::new();
url_values.insert("attributes".to_string(), "".to_string());
if opts.version_id != "" {
url_values.insert("versionId".to_string(), opts.version_id);
}
let mut headers = HeaderMap::new();
headers.insert(X_AMZ_OBJECT_ATTRIBUTES, HeaderValue::from_str(GET_OBJECT_ATTRIBUTES_TAGS).unwrap());
if opts.part_number_marker > 0 {
headers.insert(
X_AMZ_PART_NUMBER_MARKER,
HeaderValue::from_str(&opts.part_number_marker.to_string()).unwrap(),
);
}
if opts.max_parts > 0 {
headers.insert(X_AMZ_MAX_PARTS, HeaderValue::from_str(&opts.max_parts.to_string()).unwrap());
} else {
headers.insert(
X_AMZ_MAX_PARTS,
HeaderValue::from_str(&GET_OBJECT_ATTRIBUTES_MAX_PARTS.to_string()).unwrap(),
);
}
/*if opts.server_side_encryption.is_some() {
opts.server_side_encryption.Marshal(headers);
}*/
let mut resp = self
.execute_method(
http::Method::HEAD,
&mut RequestMetadata {
bucket_name: bucket_name.to_string(),
object_name: object_name.to_string(),
query_values: url_values,
custom_header: headers,
content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
content_md5_base64: "".to_string(),
content_body: ReaderImpl::Body(Bytes::new()),
content_length: 0,
stream_sha256: false,
trailer: HeaderMap::new(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),
bucket_location: Default::default(),
expires: Default::default(),
},
)
.await?;
let h = resp.headers();
let has_etag = h.get("ETag").unwrap().to_str().unwrap();
if !has_etag.is_empty() {
return Err(std::io::Error::other(
"get_object_attributes is not supported by the current endpoint version",
));
}
if resp.status() != http::StatusCode::OK {
let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
let err_body = String::from_utf8(b).unwrap();
let mut er = match serde_xml_rs::from_str::<AccessControlPolicy>(&err_body) {
Ok(result) => result,
Err(err) => {
return Err(std::io::Error::other(err.to_string()));
}
};
return Err(std::io::Error::other(er.access_control_list.permission));
}
let mut oa = ObjectAttributes::new();
oa.parse_response(&mut resp).await?;
Ok(oa)
}
}

View File

@@ -0,0 +1,147 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::HeaderMap;
use std::io::Cursor;
#[cfg(not(windows))]
use std::os::unix::fs::MetadataExt;
#[cfg(not(windows))]
use std::os::unix::fs::OpenOptionsExt;
#[cfg(not(windows))]
use std::os::unix::fs::PermissionsExt;
#[cfg(windows)]
use std::os::windows::fs::MetadataExt;
use tokio::io::BufReader;
use crate::client::{
api_error_response::err_invalid_argument,
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
impl TransitionClient {
pub async fn fget_object(
&self,
bucket_name: &str,
object_name: &str,
file_path: &str,
opts: GetObjectOptions,
) -> Result<(), std::io::Error> {
match std::fs::metadata(file_path) {
Ok(file_path_stat) => {
let ft = file_path_stat.file_type();
if ft.is_dir() {
return Err(std::io::Error::other(err_invalid_argument("filename is a directory.")));
}
}
Err(err) => {
return Err(std::io::Error::other(err));
}
}
let path = std::path::Path::new(file_path);
if let Some(parent) = path.parent() {
if let Some(object_dir) = parent.file_name() {
match std::fs::create_dir_all(object_dir) {
Ok(_) => {
let dir = std::path::Path::new(object_dir);
if let Ok(dir_stat) = dir.metadata() {
#[cfg(not(windows))]
dir_stat.permissions().set_mode(0o700);
}
}
Err(err) => {
return Err(std::io::Error::other(err));
}
}
}
}
let object_stat = match self.stat_object(bucket_name, object_name, &opts).await {
Ok(object_stat) => object_stat,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
let mut file_part_path = file_path.to_string();
file_part_path.push_str("" /*sum_sha256_hex(object_stat.etag.as_bytes())*/);
file_part_path.push_str(".part.rustfs");
#[cfg(not(windows))]
let file_part = match std::fs::OpenOptions::new().mode(0o600).open(file_part_path.clone()) {
Ok(file_part) => file_part,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
#[cfg(windows)]
let file_part = match std::fs::OpenOptions::new().open(file_part_path.clone()) {
Ok(file_part) => file_part,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
let mut close_and_remove = true;
/*defer(|| {
if close_and_remove {
_ = file_part.close();
let _ = std::fs::remove(file_part_path);
}
});*/
let st = match file_part.metadata() {
Ok(st) => st,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
let mut opts = opts;
#[cfg(windows)]
if st.file_size() > 0 {
opts.set_range(st.file_size() as i64, 0);
}
let object_reader = match self.get_object(bucket_name, object_name, &opts) {
Ok(object_reader) => object_reader,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
/*if let Err(err) = std::fs::copy(file_part, object_reader) {
return Err(std::io::Error::other(err));
}*/
close_and_remove = false;
/*if let Err(err) = file_part.close() {
return Err(std::io::Error::other(err));
}*/
if let Err(err) = std::fs::rename(file_part_path, file_path) {
return Err(std::io::Error::other(err));
}
Ok(())
}
}

View File

@@ -29,9 +29,9 @@ use crate::client::api_error_response::err_invalid_argument;
#[derive(Default)]
#[allow(dead_code)]
pub struct AdvancedGetOptions {
replication_deletemarker: bool,
is_replication_ready_for_deletemarker: bool,
replication_proxy_request: String,
pub replication_delete_marker: bool,
pub is_replication_ready_for_delete_marker: bool,
pub replication_proxy_request: String,
}
pub struct GetObjectOptions {

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -25,7 +24,6 @@ use std::{collections::HashMap, sync::Arc};
use time::{Duration, OffsetDateTime, macros::format_description};
use tracing::{error, info, warn};
use rustfs_utils::hasher::Hasher;
use s3s::dto::{ObjectLockLegalHoldStatus, ObjectLockRetentionMode, ReplicationStatus};
use s3s::header::{
X_AMZ_OBJECT_LOCK_LEGAL_HOLD, X_AMZ_OBJECT_LOCK_MODE, X_AMZ_OBJECT_LOCK_RETAIN_UNTIL_DATE, X_AMZ_REPLICATION_STATUS,
@@ -364,18 +362,14 @@ impl TransitionClient {
if opts.send_content_md5 {
let mut md5_hasher = self.md5_hasher.lock().unwrap();
let hash = md5_hasher.as_mut().expect("err");
hash.write(&buf[..length]);
md5_base64 = base64_encode(hash.sum().as_bytes());
let hash = hash.hash_encode(&buf[..length]);
md5_base64 = base64_encode(hash.as_ref());
} else {
let csum;
{
let mut crc = opts.auto_checksum.hasher()?;
crc.reset();
crc.write(&buf[..length]);
csum = crc.sum();
}
let mut crc = opts.auto_checksum.hasher()?;
let csum = crc.hash_encode(&buf[..length]);
if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
custom_header.insert(header_name, base64_encode(csum.as_bytes()).parse().unwrap());
custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
} else {
warn!("Invalid header name: {}", opts.auto_checksum.key());
}

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -31,7 +30,6 @@ use tracing::{error, info};
use url::form_urlencoded::Serializer;
use uuid::Uuid;
use rustfs_utils::hasher::Hasher;
use s3s::header::{X_AMZ_EXPIRATION, X_AMZ_VERSION_ID};
use s3s::{Body, dto::StreamingBlob};
//use crate::disk::{Reader, BufferReader};
@@ -117,8 +115,8 @@ impl TransitionClient {
let length = buf.len();
for (k, v) in hash_algos.iter_mut() {
v.write(&buf[..length]);
hash_sums.insert(k.to_string(), Vec::try_from(v.sum().as_bytes()).unwrap());
let hash = v.hash_encode(&buf[..length]);
hash_sums.insert(k.to_string(), hash.as_ref().to_vec());
}
//let rd = newHook(bytes.NewReader(buf[..length]), opts.progress);
@@ -134,15 +132,11 @@ impl TransitionClient {
sha256_hex = hex_simd::encode_to_string(hash_sums["sha256"].clone(), hex_simd::AsciiCase::Lower);
//}
if hash_sums.len() == 0 {
let csum;
{
let mut crc = opts.auto_checksum.hasher()?;
crc.reset();
crc.write(&buf[..length]);
csum = crc.sum();
}
let mut crc = opts.auto_checksum.hasher()?;
let csum = crc.hash_encode(&buf[..length]);
if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
custom_header.insert(header_name, base64_encode(csum.as_bytes()).parse().expect("err"));
custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
} else {
warn!("Invalid header name: {}", opts.auto_checksum.key());
}
@@ -297,8 +291,6 @@ impl TransitionClient {
};
let resp = self.execute_method(http::Method::PUT, &mut req_metadata).await?;
//defer closeResponse(resp)
//if resp.is_none() {
if resp.status() != StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(
resp,
@@ -366,13 +358,13 @@ impl TransitionClient {
bucket_name: bucket_name.to_string(),
object_name: object_name.to_string(),
query_values: url_values,
custom_header: headers,
content_body: ReaderImpl::Body(complete_multipart_upload_buffer),
content_length: 100, //complete_multipart_upload_bytes.len(),
content_sha256_hex: "".to_string(), //hex_simd::encode_to_string(complete_multipart_upload_bytes, hex_simd::AsciiCase::Lower),
custom_header: headers,
content_md5_base64: "".to_string(),
stream_sha256: Default::default(),
trailer: Default::default(),
content_md5_base64: "".to_string(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -40,7 +39,7 @@ use crate::client::{
constants::ISO8601_DATEFORMAT,
transition_api::{ReaderImpl, RequestMetadata, TransitionClient, UploadInfo},
};
use rustfs_utils::hasher::Hasher;
use rustfs_utils::{crypto::base64_encode, path::trim_etag};
use s3s::header::{X_AMZ_EXPIRATION, X_AMZ_VERSION_ID};
@@ -153,21 +152,16 @@ impl TransitionClient {
if opts.send_content_md5 {
let mut md5_hasher = self.md5_hasher.lock().unwrap();
let md5_hash = md5_hasher.as_mut().expect("err");
md5_hash.reset();
md5_hash.write(&buf[..length]);
md5_base64 = base64_encode(md5_hash.sum().as_bytes());
let hash = md5_hash.hash_encode(&buf[..length]);
md5_base64 = base64_encode(hash.as_ref());
} else {
let csum;
{
let mut crc = opts.auto_checksum.hasher()?;
crc.reset();
crc.write(&buf[..length]);
csum = crc.sum();
}
if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key_capitalized().as_bytes()) {
custom_header.insert(header_name, HeaderValue::from_str(&base64_encode(csum.as_bytes())).expect("err"));
let mut crc = opts.auto_checksum.hasher()?;
let csum = crc.hash_encode(&buf[..length]);
if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
} else {
warn!("Invalid header name: {}", opts.auto_checksum.key_capitalized());
warn!("Invalid header name: {}", opts.auto_checksum.key());
}
}
@@ -308,17 +302,11 @@ impl TransitionClient {
let mut custom_header = HeaderMap::new();
if !opts.send_content_md5 {
let csum;
{
let mut crc = opts.auto_checksum.hasher()?;
crc.reset();
crc.write(&buf[..length]);
csum = crc.sum();
}
let mut crc = opts.auto_checksum.hasher()?;
let csum = crc.hash_encode(&buf[..length]);
if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
if let Ok(header_value) = HeaderValue::from_str(&base64_encode(csum.as_bytes())) {
custom_header.insert(header_name, header_value);
}
custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
} else {
warn!("Invalid header name: {}", opts.auto_checksum.key());
}
@@ -334,8 +322,8 @@ impl TransitionClient {
if opts.send_content_md5 {
let mut md5_hasher = clone_self.md5_hasher.lock().unwrap();
let md5_hash = md5_hasher.as_mut().expect("err");
md5_hash.write(&buf[..length]);
md5_base64 = base64_encode(md5_hash.sum().as_bytes());
let hash = md5_hash.hash_encode(&buf[..length]);
md5_base64 = base64_encode(hash.as_ref());
}
//defer wg.Done()

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -21,6 +20,7 @@
use bytes::Bytes;
use http::{HeaderMap, HeaderValue, Method, StatusCode};
use rustfs_utils::{HashAlgorithm, crypto::base64_encode};
use s3s::S3ErrorCode;
use s3s::dto::ReplicationStatus;
use s3s::header::X_AMZ_BYPASS_GOVERNANCE_RETENTION;
@@ -38,7 +38,6 @@ use crate::{
store_api::{GetObjectReader, ObjectInfo, StorageAPI},
};
use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
use rustfs_utils::hasher::{sum_md5_base64, sum_sha256_hex};
pub struct RemoveBucketOptions {
_forced_elete: bool,
@@ -330,8 +329,8 @@ impl TransitionClient {
query_values: url_values.clone(),
content_body: ReaderImpl::Body(Bytes::from(remove_bytes.clone())),
content_length: remove_bytes.len() as i64,
content_md5_base64: sum_md5_base64(&remove_bytes),
content_sha256_hex: sum_sha256_hex(&remove_bytes),
content_md5_base64: base64_encode(&HashAlgorithm::Md5.hash_encode(&remove_bytes).as_ref()),
content_sha256_hex: base64_encode(&HashAlgorithm::SHA256.hash_encode(&remove_bytes).as_ref()),
custom_header: headers,
object_name: "".to_string(),
stream_sha256: false,

View File

@@ -0,0 +1,172 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::HeaderMap;
use std::collections::HashMap;
use std::io::Cursor;
use tokio::io::BufReader;
use crate::client::{
api_error_response::{err_invalid_argument, http_resp_to_error_response},
api_get_object_acl::AccessControlList,
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
const TIER_STANDARD: &str = "Standard";
const TIER_BULK: &str = "Bulk";
const TIER_EXPEDITED: &str = "Expedited";
#[derive(Debug, Default, serde::Serialize)]
pub struct GlacierJobParameters {
pub tier: String,
}
#[derive(Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Encryption {
pub encryption_type: String,
pub kms_context: String,
pub kms_key_id: String,
}
#[derive(Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct MetadataEntry {
pub name: String,
pub value: String,
}
#[derive(Debug, Default, serde::Serialize)]
pub struct S3 {
pub access_control_list: AccessControlList,
pub bucket_name: String,
pub prefix: String,
pub canned_acl: String,
pub encryption: Encryption,
pub storage_class: String,
//tagging: Tags,
pub user_metadata: MetadataEntry,
}
#[derive(Debug, Default, serde::Serialize)]
pub struct SelectParameters {
pub expression_type: String,
pub expression: String,
//input_serialization: SelectObjectInputSerialization,
//output_serialization: SelectObjectOutputSerialization,
}
#[derive(Debug, Default, serde::Serialize)]
pub struct OutputLocation(pub S3);
#[derive(Debug, Default, serde::Serialize)]
pub struct RestoreRequest {
pub restore_type: String,
pub tier: String,
pub days: i64,
pub glacier_job_parameters: GlacierJobParameters,
pub description: String,
pub select_parameters: SelectParameters,
pub output_location: OutputLocation,
}
impl RestoreRequest {
fn set_days(&mut self, v: i64) {
self.days = v;
}
fn set_glacier_job_parameters(&mut self, v: GlacierJobParameters) {
self.glacier_job_parameters = v;
}
fn set_type(&mut self, v: &str) {
self.restore_type = v.to_string();
}
fn set_tier(&mut self, v: &str) {
self.tier = v.to_string();
}
fn set_description(&mut self, v: &str) {
self.description = v.to_string();
}
fn set_select_parameters(&mut self, v: SelectParameters) {
self.select_parameters = v;
}
fn set_output_location(&mut self, v: OutputLocation) {
self.output_location = v;
}
}
impl TransitionClient {
pub async fn restore_object(
&self,
bucket_name: &str,
object_name: &str,
version_id: &str,
restore_req: &RestoreRequest,
) -> Result<(), std::io::Error> {
let restore_request = match serde_xml_rs::to_string(restore_req) {
Ok(buf) => buf,
Err(e) => {
return Err(std::io::Error::other(e));
}
};
let restore_request_bytes = restore_request.as_bytes().to_vec();
let mut url_values = HashMap::new();
url_values.insert("restore".to_string(), "".to_string());
if version_id != "" {
url_values.insert("versionId".to_string(), version_id.to_string());
}
let restore_request_buffer = Bytes::from(restore_request_bytes.clone());
let resp = self
.execute_method(
http::Method::HEAD,
&mut RequestMetadata {
bucket_name: bucket_name.to_string(),
object_name: object_name.to_string(),
query_values: url_values,
custom_header: HeaderMap::new(),
content_sha256_hex: "".to_string(), //sum_sha256_hex(&restore_request_bytes),
content_md5_base64: "".to_string(), //sum_md5_base64(&restore_request_bytes),
content_body: ReaderImpl::Body(restore_request_buffer),
content_length: restore_request_bytes.len() as i64,
stream_sha256: false,
trailer: HeaderMap::new(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),
bucket_location: Default::default(),
expires: Default::default(),
},
)
.await?;
let b = resp.body().bytes().expect("err").to_vec();
if resp.status() != http::StatusCode::ACCEPTED && resp.status() != http::StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(resp, b, bucket_name, "")));
}
Ok(())
}
}

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -0,0 +1,166 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
use std::{collections::HashMap, str::FromStr};
use tokio::io::BufReader;
use uuid::Uuid;
use crate::client::{
api_error_response::{ErrorResponse, err_invalid_argument, http_resp_to_error_response},
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
use s3s::header::{X_AMZ_DELETE_MARKER, X_AMZ_VERSION_ID};
impl TransitionClient {
pub async fn bucket_exists(&self, bucket_name: &str) -> Result<bool, std::io::Error> {
let resp = self
.execute_method(
http::Method::HEAD,
&mut RequestMetadata {
bucket_name: bucket_name.to_string(),
object_name: "".to_string(),
query_values: HashMap::new(),
custom_header: HeaderMap::new(),
content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
content_md5_base64: "".to_string(),
content_body: ReaderImpl::Body(Bytes::new()),
content_length: 0,
stream_sha256: false,
trailer: HeaderMap::new(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),
bucket_location: Default::default(),
expires: Default::default(),
},
)
.await;
if let Ok(resp) = resp {
let b = resp.body().bytes().expect("err").to_vec();
let resperr = http_resp_to_error_response(resp, b, bucket_name, "");
/*if to_error_response(resperr).code == "NoSuchBucket" {
return Ok(false);
}
if resp.status_code() != http::StatusCode::OK {
return Ok(false);
}*/
}
Ok(true)
}
pub async fn stat_object(
&self,
bucket_name: &str,
object_name: &str,
opts: &GetObjectOptions,
) -> Result<ObjectInfo, std::io::Error> {
let mut headers = opts.header();
if opts.internal.replication_delete_marker {
headers.insert("X-Source-DeleteMarker", HeaderValue::from_str("true").unwrap());
}
if opts.internal.is_replication_ready_for_delete_marker {
headers.insert("X-Check-Replication-Ready", HeaderValue::from_str("true").unwrap());
}
let resp = self
.execute_method(
http::Method::HEAD,
&mut RequestMetadata {
bucket_name: bucket_name.to_string(),
object_name: object_name.to_string(),
query_values: opts.to_query_values(),
custom_header: headers,
content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
content_md5_base64: "".to_string(),
content_body: ReaderImpl::Body(Bytes::new()),
content_length: 0,
stream_sha256: false,
trailer: HeaderMap::new(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),
bucket_location: Default::default(),
expires: Default::default(),
},
)
.await;
match resp {
Ok(resp) => {
let h = resp.headers();
let delete_marker = if let Some(x_amz_delete_marker) = h.get(X_AMZ_DELETE_MARKER.as_str()) {
x_amz_delete_marker.to_str().unwrap() == "true"
} else {
false
};
let replication_ready = if let Some(x_amz_delete_marker) = h.get("X-Replication-Ready") {
x_amz_delete_marker.to_str().unwrap() == "true"
} else {
false
};
if resp.status() != http::StatusCode::OK && resp.status() != http::StatusCode::PARTIAL_CONTENT {
if resp.status() == http::StatusCode::METHOD_NOT_ALLOWED && opts.version_id != "" && delete_marker {
let err_resp = ErrorResponse {
status_code: resp.status(),
code: s3s::S3ErrorCode::MethodNotAllowed,
message: "the specified method is not allowed against this resource.".to_string(),
bucket_name: bucket_name.to_string(),
key: object_name.to_string(),
..Default::default()
};
return Ok(ObjectInfo {
version_id: match Uuid::from_str(h.get(X_AMZ_VERSION_ID).unwrap().to_str().unwrap()) {
Ok(v) => v,
Err(e) => {
return Err(std::io::Error::other(e));
}
},
is_delete_marker: delete_marker,
..Default::default()
});
//err_resp
}
return Ok(ObjectInfo {
version_id: match Uuid::from_str(h.get(X_AMZ_VERSION_ID).unwrap().to_str().unwrap()) {
Ok(v) => v,
Err(e) => {
return Err(std::io::Error::other(e));
}
},
is_delete_marker: delete_marker,
replication_ready: replication_ready,
..Default::default()
});
//http_resp_to_error_response(resp, bucket_name, object_name)
}
Ok(to_object_info(bucket_name, object_name, h).unwrap())
}
Err(err) => {
return Err(std::io::Error::other(err));
}
}
}
}

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -31,7 +30,6 @@ use crate::client::{
transition_api::{Document, TransitionClient},
};
use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
use rustfs_utils::hasher::{Hasher, Sha256};
use s3s::Body;
use s3s::S3ErrorCode;
@@ -125,9 +123,11 @@ impl TransitionClient {
url_str = target_url.to_string();
}
let mut req_builder = Request::builder().method(http::Method::GET).uri(url_str);
let Ok(mut req) = Request::builder().method(http::Method::GET).uri(url_str).body(Body::empty()) else {
return Err(std::io::Error::other("create request error"));
};
self.set_user_agent(&mut req_builder);
self.set_user_agent(&mut req);
let value;
{
@@ -154,22 +154,12 @@ impl TransitionClient {
}
if signer_type == SignatureType::SignatureAnonymous {
let req = match req_builder.body(Body::empty()) {
Ok(req) => return Ok(req),
Err(err) => {
return Err(std::io::Error::other(err));
}
};
return Ok(req);
}
if signer_type == SignatureType::SignatureV2 {
let req_builder = rustfs_signer::sign_v2(req_builder, 0, &access_key_id, &secret_access_key, is_virtual_style);
let req = match req_builder.body(Body::empty()) {
Ok(req) => return Ok(req),
Err(err) => {
return Err(std::io::Error::other(err));
}
};
let req = rustfs_signer::sign_v2(req, 0, &access_key_id, &secret_access_key, is_virtual_style);
return Ok(req);
}
let mut content_sha256 = EMPTY_STRING_SHA256_HASH.to_string();
@@ -177,17 +167,10 @@ impl TransitionClient {
content_sha256 = UNSIGNED_PAYLOAD.to_string();
}
req_builder
.headers_mut()
.expect("err")
req.headers_mut()
.insert("X-Amz-Content-Sha256", content_sha256.parse().unwrap());
let req_builder = rustfs_signer::sign_v4(req_builder, 0, &access_key_id, &secret_access_key, &session_token, "us-east-1");
let req = match req_builder.body(Body::empty()) {
Ok(req) => return Ok(req),
Err(err) => {
return Err(std::io::Error::other(err));
}
};
let req = rustfs_signer::sign_v4(req, 0, &access_key_id, &secret_access_key, &session_token, "us-east-1");
Ok(req)
}
}

View File

@@ -16,6 +16,9 @@ pub mod admin_handler_utils;
pub mod api_bucket_policy;
pub mod api_error_response;
pub mod api_get_object;
pub mod api_get_object_acl;
pub mod api_get_object_attributes;
pub mod api_get_object_file;
pub mod api_get_options;
pub mod api_list;
pub mod api_put_object;
@@ -23,7 +26,9 @@ pub mod api_put_object_common;
pub mod api_put_object_multipart;
pub mod api_put_object_streaming;
pub mod api_remove;
pub mod api_restore;
pub mod api_s3_datatypes;
pub mod api_stat;
pub mod bucket_cache;
pub mod constants;
pub mod credentials;

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -28,8 +27,12 @@ use http::{
};
use hyper_rustls::{ConfigBuilderExt, HttpsConnector};
use hyper_util::{client::legacy::Client, client::legacy::connect::HttpConnector, rt::TokioExecutor};
use md5::Digest;
use md5::Md5;
use rand::Rng;
use rustfs_utils::HashAlgorithm;
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use std::io::Cursor;
use std::pin::Pin;
use std::sync::atomic::{AtomicI32, Ordering};
@@ -60,7 +63,6 @@ use crate::client::{
};
use crate::{checksum::ChecksumMode, store_api::GetObjectReader};
use rustfs_rio::HashReader;
use rustfs_utils::hasher::{MD5, Sha256};
use rustfs_utils::{
net::get_endpoint_url,
retry::{MAX_RETRY, new_retry_timer},
@@ -69,7 +71,6 @@ use s3s::S3ErrorCode;
use s3s::dto::ReplicationStatus;
use s3s::{Body, dto::Owner};
const _C_USER_AGENT_PREFIX: &str = "RustFS (linux; x86)";
const C_USER_AGENT: &str = "RustFS (linux; x86)";
const SUCCESS_STATUS: [StatusCode; 3] = [StatusCode::OK, StatusCode::NO_CONTENT, StatusCode::PARTIAL_CONTENT];
@@ -90,22 +91,18 @@ pub struct TransitionClient {
pub endpoint_url: Url,
pub creds_provider: Arc<Mutex<Credentials<Static>>>,
pub override_signer_type: SignatureType,
/*app_info: TODO*/
pub secure: bool,
pub http_client: Client<HttpsConnector<HttpConnector>, Body>,
//pub http_trace: Httptrace.ClientTrace,
pub bucket_loc_cache: Arc<Mutex<BucketLocationCache>>,
pub is_trace_enabled: Arc<Mutex<bool>>,
pub trace_errors_only: Arc<Mutex<bool>>,
//pub trace_output: io.Writer,
pub s3_accelerate_endpoint: Arc<Mutex<String>>,
pub s3_dual_stack_enabled: Arc<Mutex<bool>>,
pub region: String,
pub random: u64,
pub lookup: BucketLookupType,
//pub lookupFn: func(u url.URL, bucketName string) BucketLookupType,
pub md5_hasher: Arc<Mutex<Option<MD5>>>,
pub sha256_hasher: Option<Sha256>,
pub md5_hasher: Arc<Mutex<Option<HashAlgorithm>>>,
pub sha256_hasher: Option<HashAlgorithm>,
pub health_status: AtomicI32,
pub trailing_header_support: bool,
pub max_retries: i64,
@@ -115,15 +112,11 @@ pub struct TransitionClient {
pub struct Options {
pub creds: Credentials<Static>,
pub secure: bool,
//pub transport: http.RoundTripper,
//pub trace: *httptrace.ClientTrace,
pub region: String,
pub bucket_lookup: BucketLookupType,
//pub custom_region_via_url: func(u url.URL) string,
//pub bucket_lookup_via_url: func(u url.URL, bucketName string) BucketLookupType,
pub trailing_headers: bool,
pub custom_md5: Option<MD5>,
pub custom_sha256: Option<Sha256>,
pub custom_md5: Option<HashAlgorithm>,
pub custom_sha256: Option<HashAlgorithm>,
pub max_retries: i64,
}
@@ -145,8 +138,6 @@ impl TransitionClient {
async fn private_new(endpoint: &str, opts: Options) -> Result<TransitionClient, std::io::Error> {
let endpoint_url = get_endpoint_url(endpoint, opts.secure)?;
//let jar = cookiejar.New(cookiejar.Options{PublicSuffixList: publicsuffix.List})?;
//#[cfg(feature = "ring")]
//let _ = rustls::crypto::ring::default_provider().install_default();
//#[cfg(feature = "aws-lc-rs")]
@@ -154,9 +145,6 @@ impl TransitionClient {
let scheme = endpoint_url.scheme();
let client;
//if scheme == "https" {
// client = Client::builder(TokioExecutor::new()).build_http();
//} else {
let tls = rustls::ClientConfig::builder().with_native_roots()?.with_no_client_auth();
let https = hyper_rustls::HttpsConnectorBuilder::new()
.with_tls_config(tls)
@@ -164,7 +152,6 @@ impl TransitionClient {
.enable_http1()
.build();
client = Client::builder(TokioExecutor::new()).build(https);
//}
let mut clnt = TransitionClient {
endpoint_url,
@@ -190,11 +177,11 @@ impl TransitionClient {
{
let mut md5_hasher = clnt.md5_hasher.lock().unwrap();
if md5_hasher.is_none() {
*md5_hasher = Some(MD5::new());
*md5_hasher = Some(HashAlgorithm::Md5);
}
}
if clnt.sha256_hasher.is_none() {
clnt.sha256_hasher = Some(Sha256::new());
clnt.sha256_hasher = Some(HashAlgorithm::SHA256);
}
clnt.trailing_header_support = opts.trailing_headers && clnt.override_signer_type == SignatureType::SignatureV4;
@@ -210,13 +197,6 @@ impl TransitionClient {
self.endpoint_url.clone()
}
fn set_appinfo(&self, app_name: &str, app_version: &str) {
/*if app_name != "" && app_version != "" {
self.appInfo.app_name = app_name
self.appInfo.app_version = app_version
}*/
}
fn trace_errors_only_off(&self) {
let mut trace_errors_only = self.trace_errors_only.lock().unwrap();
*trace_errors_only = false;
@@ -241,8 +221,8 @@ impl TransitionClient {
&self,
is_md5_requested: bool,
is_sha256_requested: bool,
) -> (HashMap<String, MD5>, HashMap<String, Vec<u8>>) {
todo!();
) -> (HashMap<String, HashAlgorithm>, HashMap<String, Vec<u8>>) {
todo!()
}
fn is_online(&self) -> bool {
@@ -265,6 +245,7 @@ impl TransitionClient {
fn dump_http(&self, req: &http::Request<Body>, resp: &http::Response<Body>) -> Result<(), std::io::Error> {
let mut resp_trace: Vec<u8>;
//info!("{}{}", self.trace_output, "---------BEGIN-HTTP---------");
//info!("{}{}", self.trace_output, "---------END-HTTP---------");
Ok(())
@@ -335,7 +316,7 @@ impl TransitionClient {
//let mut retry_timer = RetryTimer::new();
//while let Some(v) = retry_timer.next().await {
for _ in [1; 1]
/*new_retry_timer(req_retry, DefaultRetryUnit, DefaultRetryCap, MaxJitter)*/
/*new_retry_timer(req_retry, default_retry_unit, default_retry_cap, max_jitter)*/
{
let req = self.new_request(method, metadata).await?;
@@ -406,7 +387,13 @@ impl TransitionClient {
&metadata.query_values,
)?;
let mut req_builder = Request::builder().method(method).uri(target_url.to_string());
let Ok(mut req) = Request::builder()
.method(method)
.uri(target_url.to_string())
.body(Body::empty())
else {
return Err(std::io::Error::other("create request error"));
};
let value;
{
@@ -430,30 +417,25 @@ impl TransitionClient {
if metadata.expires != 0 && metadata.pre_sign_url {
if signer_type == SignatureType::SignatureAnonymous {
return Err(std::io::Error::other(err_invalid_argument(
"Presigned URLs cannot be generated with anonymous credentials.",
"presigned urls cannot be generated with anonymous credentials.",
)));
}
if metadata.extra_pre_sign_header.is_some() {
if signer_type == SignatureType::SignatureV2 {
return Err(std::io::Error::other(err_invalid_argument(
"Extra signed headers for Presign with Signature V2 is not supported.",
"extra signed headers for presign with signature v2 is not supported.",
)));
}
let headers = req.headers_mut();
for (k, v) in metadata.extra_pre_sign_header.as_ref().unwrap() {
req_builder = req_builder.header(k, v);
headers.insert(k, v.clone());
}
}
if signer_type == SignatureType::SignatureV2 {
req_builder = rustfs_signer::pre_sign_v2(
req_builder,
&access_key_id,
&secret_access_key,
metadata.expires,
is_virtual_host,
);
req = rustfs_signer::pre_sign_v2(req, &access_key_id, &secret_access_key, metadata.expires, is_virtual_host);
} else if signer_type == SignatureType::SignatureV4 {
req_builder = rustfs_signer::pre_sign_v4(
req_builder,
req = rustfs_signer::pre_sign_v4(
req,
&access_key_id,
&secret_access_key,
&session_token,
@@ -462,57 +444,38 @@ impl TransitionClient {
OffsetDateTime::now_utc(),
);
}
let req = match req_builder.body(Body::empty()) {
Ok(req) => req,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
return Ok(req);
}
self.set_user_agent(&mut req_builder);
self.set_user_agent(&mut req);
for (k, v) in metadata.custom_header.clone() {
req_builder.headers_mut().expect("err").insert(k.expect("err"), v);
req.headers_mut().insert(k.expect("err"), v);
}
//req.content_length = metadata.content_length;
if metadata.content_length <= -1 {
let chunked_value = HeaderValue::from_str(&vec!["chunked"].join(",")).expect("err");
req_builder
.headers_mut()
.expect("err")
.insert(http::header::TRANSFER_ENCODING, chunked_value);
req.headers_mut().insert(http::header::TRANSFER_ENCODING, chunked_value);
}
if metadata.content_md5_base64.len() > 0 {
let md5_value = HeaderValue::from_str(&metadata.content_md5_base64).expect("err");
req_builder.headers_mut().expect("err").insert("Content-Md5", md5_value);
req.headers_mut().insert("Content-Md5", md5_value);
}
if signer_type == SignatureType::SignatureAnonymous {
let req = match req_builder.body(Body::empty()) {
Ok(req) => req,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
return Ok(req);
}
if signer_type == SignatureType::SignatureV2 {
req_builder =
rustfs_signer::sign_v2(req_builder, metadata.content_length, &access_key_id, &secret_access_key, is_virtual_host);
req = rustfs_signer::sign_v2(req, metadata.content_length, &access_key_id, &secret_access_key, is_virtual_host);
} else if metadata.stream_sha256 && !self.secure {
if metadata.trailer.len() > 0 {
//req.Trailer = metadata.trailer;
for (_, v) in &metadata.trailer {
req_builder = req_builder.header(http::header::TRAILER, v.clone());
req.headers_mut().insert(http::header::TRAILER, v.clone());
}
}
//req_builder = rustfs_signer::streaming_sign_v4(req_builder, &access_key_id,
// &secret_access_key, &session_token, &location, metadata.content_length, OffsetDateTime::now_utc(), self.sha256_hasher());
} else {
let mut sha_header = UNSIGNED_PAYLOAD.to_string();
if metadata.content_sha256_hex != "" {
@@ -523,11 +486,11 @@ impl TransitionClient {
} else if metadata.trailer.len() > 0 {
sha_header = UNSIGNED_PAYLOAD_TRAILER.to_string();
}
req_builder = req_builder
.header::<HeaderName, HeaderValue>("X-Amz-Content-Sha256".parse().unwrap(), sha_header.parse().expect("err"));
req.headers_mut()
.insert("X-Amz-Content-Sha256".parse::<HeaderName>().unwrap(), sha_header.parse().expect("err"));
req_builder = rustfs_signer::sign_v4_trailer(
req_builder,
req = rustfs_signer::sign_v4_trailer(
req,
&access_key_id,
&secret_access_key,
&session_token,
@@ -536,33 +499,23 @@ impl TransitionClient {
);
}
let req;
if metadata.content_length == 0 {
req = req_builder.body(Body::empty());
} else {
if metadata.content_length > 0 {
match &mut metadata.content_body {
ReaderImpl::Body(content_body) => {
req = req_builder.body(Body::from(content_body.clone()));
*req.body_mut() = Body::from(content_body.clone());
}
ReaderImpl::ObjectBody(content_body) => {
req = req_builder.body(Body::from(content_body.read_all().await?));
*req.body_mut() = Body::from(content_body.read_all().await?);
}
}
//req = req_builder.body(s3s::Body::from(metadata.content_body.read_all().await?));
}
match req {
Ok(req) => Ok(req),
Err(err) => Err(std::io::Error::other(err)),
}
Ok(req)
}
pub fn set_user_agent(&self, req: &mut Builder) {
let headers = req.headers_mut().expect("err");
pub fn set_user_agent(&self, req: &mut Request<Body>) {
let headers = req.headers_mut();
headers.insert("User-Agent", C_USER_AGENT.parse().expect("err"));
/*if self.app_info.app_name != "" && self.app_info.app_version != "" {
headers.insert("User-Agent", C_USER_AGENT+" "+self.app_info.app_name+"/"+self.app_info.app_version);
}*/
}
fn make_target_url(
@@ -945,7 +898,7 @@ pub struct ObjectMultipartInfo {
pub key: String,
pub size: i64,
pub upload_id: String,
//pub err error,
//pub err: Error,
}
pub struct UploadInfo {

View File

@@ -178,6 +178,16 @@ pub async fn remove_bucket_target(bucket: &str, arn_str: &str) {
}
}
pub async fn list_bucket_targets(bucket: &str) -> Result<BucketTargets, BucketRemoteTargetNotFound> {
if let Some(sys) = GLOBAL_Bucket_Target_Sys.get() {
sys.list_bucket_targets(bucket).await
} else {
Err(BucketRemoteTargetNotFound {
bucket: bucket.to_string(),
})
}
}
impl Default for BucketTargetSys {
fn default() -> Self {
Self::new()

View File

@@ -28,7 +28,7 @@
//! ## Example
//!
//! ```rust
//! use ecstore::erasure_coding::Erasure;
//! use rustfs_ecstore::erasure_coding::Erasure;
//!
//! let erasure = Erasure::new(4, 2, 1024); // 4 data shards, 2 parity shards, 1KB block size
//! let data = b"hello world";
@@ -263,7 +263,7 @@ impl ReedSolomonEncoder {
///
/// # Example
/// ```
/// use ecstore::erasure_coding::Erasure;
/// use rustfs_ecstore::erasure_coding::Erasure;
/// let erasure = Erasure::new(4, 2, 8);
/// let data = b"hello world";
/// let shards = erasure.encode_data(data).unwrap();

View File

@@ -20,17 +20,18 @@ use std::{
path::{Path, PathBuf},
pin::Pin,
sync::{
Arc,
Arc, OnceLock,
atomic::{AtomicBool, AtomicU32, AtomicU64, Ordering},
},
time::{Duration, SystemTime},
};
use time::{self, OffsetDateTime};
use tokio_util::sync::CancellationToken;
use super::{
data_scanner_metric::{ScannerMetric, ScannerMetrics, globalScannerMetrics},
data_usage::{DATA_USAGE_BLOOM_NAME_PATH, store_data_usage_in_backend},
data_usage::{DATA_USAGE_BLOOM_NAME_PATH, DataUsageInfo, store_data_usage_in_backend},
data_usage_cache::{DataUsageCache, DataUsageEntry, DataUsageHash},
heal_commands::{HEAL_DEEP_SCAN, HEAL_NORMAL_SCAN, HealScanMode},
};
@@ -103,7 +104,7 @@ use tokio::{
},
time::sleep,
};
use tracing::{error, info};
use tracing::{debug, error, info};
const DATA_SCANNER_SLEEP_PER_FOLDER: Duration = Duration::from_millis(1); // Time to wait between folders.
const DATA_USAGE_UPDATE_DIR_CYCLES: u32 = 16; // Visit all folders every n cycles.
@@ -127,6 +128,8 @@ lazy_static! {
pub static ref globalHealConfig: Arc<RwLock<Config>> = Arc::new(RwLock::new(Config::default()));
}
static GLOBAL_SCANNER_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();
struct DynamicSleeper {
factor: f64,
max_sleep: Duration,
@@ -195,36 +198,66 @@ fn new_dynamic_sleeper(factor: f64, max_wait: Duration, is_scanner: bool) -> Dyn
/// - Minimum sleep duration to avoid excessive CPU usage
/// - Proper error handling and logging
///
/// # Returns
/// A CancellationToken that can be used to gracefully shutdown the scanner
///
/// # Architecture
/// 1. Initialize with random seed for sleep intervals
/// 2. Run scanner cycles in a loop
/// 3. Use randomized sleep between cycles to avoid thundering herd
/// 4. Ensure minimum sleep duration to prevent CPU thrashing
pub async fn init_data_scanner() {
pub async fn init_data_scanner() -> CancellationToken {
info!("Initializing data scanner background task");
let cancel_token = CancellationToken::new();
GLOBAL_SCANNER_CANCEL_TOKEN
.set(cancel_token.clone())
.expect("Scanner already initialized");
let cancel_clone = cancel_token.clone();
tokio::spawn(async move {
info!("Data scanner background task started");
loop {
// Run the data scanner
run_data_scanner().await;
tokio::select! {
_ = cancel_clone.cancelled() => {
info!("Data scanner received shutdown signal, exiting gracefully");
break;
}
_ = run_data_scanner_cycle() => {
// Calculate randomized sleep duration
let random_factor = {
let mut rng = rand::rng();
rng.random_range(1.0..10.0)
};
let base_cycle_duration = SCANNER_CYCLE.load(Ordering::SeqCst) as f64;
let sleep_duration_secs = random_factor * base_cycle_duration;
// Calculate randomized sleep duration
// Use random factor (0.0 to 1.0) multiplied by the scanner cycle duration
let random_factor = {
let mut rng = rand::rng();
rng.random_range(1.0..10.0)
};
let base_cycle_duration = SCANNER_CYCLE.load(Ordering::SeqCst) as f64;
let sleep_duration_secs = random_factor * base_cycle_duration;
let sleep_duration = Duration::from_secs_f64(sleep_duration_secs);
let sleep_duration = Duration::from_secs_f64(sleep_duration_secs);
debug!(
duration_secs = sleep_duration.as_secs(),
"Data scanner sleeping before next cycle"
);
info!(duration_secs = sleep_duration.as_secs(), "Data scanner sleeping before next cycle");
// Sleep with the calculated duration
sleep(sleep_duration).await;
// Interruptible sleep
tokio::select! {
_ = cancel_clone.cancelled() => {
info!("Data scanner received shutdown signal during sleep, exiting");
break;
}
_ = sleep(sleep_duration) => {
// Continue to next cycle
}
}
}
}
}
info!("Data scanner background task stopped gracefully");
});
cancel_token
}
/// Run a single data scanner cycle
@@ -239,8 +272,8 @@ pub async fn init_data_scanner() {
/// - Gracefully handles missing object layer
/// - Continues operation even if individual steps fail
/// - Logs errors appropriately without terminating the scanner
async fn run_data_scanner() {
info!("Starting data scanner cycle");
async fn run_data_scanner_cycle() {
debug!("Starting data scanner cycle");
// Get the object layer, return early if not available
let Some(store) = new_object_layer_fn() else {
@@ -248,6 +281,14 @@ async fn run_data_scanner() {
return;
};
// Check for cancellation before starting expensive operations
if let Some(token) = GLOBAL_SCANNER_CANCEL_TOKEN.get() {
if token.is_cancelled() {
debug!("Scanner cancelled before starting cycle");
return;
}
}
// Load current cycle information from persistent storage
let buf = read_config(store.clone(), &DATA_USAGE_BLOOM_NAME_PATH)
.await
@@ -293,7 +334,7 @@ async fn run_data_scanner() {
}
// Set up data usage storage channel
let (tx, rx) = mpsc::channel(100);
let (tx, rx) = mpsc::channel::<DataUsageInfo>(100);
tokio::spawn(async move {
let _ = store_data_usage_in_backend(rx).await;
});
@@ -308,8 +349,8 @@ async fn run_data_scanner() {
"Starting namespace scanner"
);
// Run the namespace scanner
match store.clone().ns_scanner(tx, cycle_info.current as usize, scan_mode).await {
// Run the namespace scanner with cancellation support
match execute_namespace_scan(&store, tx, cycle_info.current, scan_mode).await {
Ok(_) => {
info!(cycle = cycle_info.current, "Namespace scanner completed successfully");
@@ -349,6 +390,28 @@ async fn run_data_scanner() {
stop_fn(&scan_result);
}
/// Execute namespace scan with cancellation support
async fn execute_namespace_scan(
store: &Arc<ECStore>,
tx: Sender<DataUsageInfo>,
cycle: u64,
scan_mode: HealScanMode,
) -> Result<()> {
let cancel_token = GLOBAL_SCANNER_CANCEL_TOKEN
.get()
.ok_or_else(|| Error::other("Scanner not initialized"))?;
tokio::select! {
result = store.ns_scanner(tx, cycle as usize, scan_mode) => {
result.map_err(|e| Error::other(format!("Namespace scan failed: {e}")))
}
_ = cancel_token.cancelled() => {
info!("Namespace scan cancelled");
Err(Error::other("Scan cancelled"))
}
}
}
#[derive(Debug, Serialize, Deserialize)]
struct BackgroundHealInfo {
bitrot_start_time: SystemTime,
@@ -404,7 +467,7 @@ async fn get_cycle_scan_mode(current_cycle: u64, bitrot_start_cycle: u64, bitrot
return HEAL_DEEP_SCAN;
}
if bitrot_start_time.duration_since(SystemTime::now()).unwrap() > bitrot_cycle {
if SystemTime::now().duration_since(bitrot_start_time).unwrap_or_default() > bitrot_cycle {
return HEAL_DEEP_SCAN;
}
@@ -741,13 +804,18 @@ impl ScannerItem {
// Create a mutable clone if you need to modify fields
let mut oi = oi.clone();
oi.replication_status = ReplicationStatusType::from(
oi.user_defined
.get("x-amz-bucket-replication-status")
.unwrap_or(&"PENDING".to_string()),
);
info!("apply status is: {:?}", oi.replication_status);
self.heal_replication(&oi, _size_s).await;
let versioned = BucketVersioningSys::prefix_enabled(&oi.bucket, &oi.name).await;
if versioned {
oi.replication_status = ReplicationStatusType::from(
oi.user_defined
.get("x-amz-bucket-replication-status")
.unwrap_or(&"PENDING".to_string()),
);
debug!("apply status is: {:?}", oi.replication_status);
self.heal_replication(&oi, _size_s).await;
}
done();
if action.delete_all() {

View File

@@ -4099,6 +4099,8 @@ impl ObjectIO for SetDisks {
}
}
drop(writers); // drop writers to close all files, this is to prevent FileAccessDenied errors when renaming data
let (online_disks, _, op_old_dir) = Self::rename_data(
&shuffle_disks,
RUSTFS_META_TMP_BUCKET,
@@ -5039,6 +5041,8 @@ impl StorageAPI for SetDisks {
let fi_buff = fi.marshal_msg()?;
drop(writers); // drop writers to close all files
let part_path = format!("{}/{}/{}", upload_id_path, fi.data_dir.unwrap_or_default(), part_suffix);
let _ = Self::rename_part(
&disks,

View File

@@ -1372,7 +1372,8 @@ impl StorageAPI for ECStore {
}
if let Err(err) = self.peer_sys.make_bucket(bucket, opts).await {
if !is_err_bucket_exists(&err.into()) {
let err = err.into();
if !is_err_bucket_exists(&err) {
let _ = self
.delete_bucket(
bucket,
@@ -1384,6 +1385,8 @@ impl StorageAPI for ECStore {
)
.await;
}
return Err(err);
};
let mut meta = BucketMetadata::new(bucket);

View File

@@ -1,4 +1,3 @@
#![allow(unused_imports)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,6 +11,7 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]

View File

@@ -95,7 +95,6 @@ impl WarmBackendS3 {
..Default::default()
};
let client = TransitionClient::new(&u.host().expect("err").to_string(), opts).await?;
//client.set_appinfo(format!("s3-tier-{}", tier), ReleaseTag);
let client = Arc::new(client);
let core = TransitionCore(Arc::clone(&client));

View File

@@ -1,238 +1,37 @@
# RustFS FileMeta
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
A high-performance Rust implementation of xl-storage-format-v2, providing complete compatibility with S3-compatible metadata format while offering enhanced performance and safety.
# RustFS FileMeta - File Metadata Management
## Overview
<p align="center">
<strong>Advanced file metadata management and indexing module for RustFS distributed object storage</strong>
</p>
This crate implements the XL (Erasure Coded) metadata format used for distributed object storage. It provides:
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
- **Full S3 Compatibility**: 100% compatible with xl.meta file format
- **High Performance**: Optimized for speed with sub-microsecond parsing times
- **Memory Safety**: Written in safe Rust with comprehensive error handling
- **Comprehensive Testing**: Extensive test suite with real metadata validation
- **Cross-Platform**: Supports multiple CPU architectures (x86_64, aarch64)
---
## Features
## 📖 Overview
### Core Functionality
- ✅ XL v2 file format parsing and serialization
- ✅ MessagePack-based metadata encoding/decoding
- ✅ Version management with modification time sorting
- ✅ Erasure coding information storage
- ✅ Inline data support for small objects
- ✅ CRC32 integrity verification using xxHash64
- ✅ Delete marker handling
- ✅ Legacy version support
**RustFS FileMeta** provides advanced file metadata management and indexing capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
### Advanced Features
- ✅ Signature calculation for version integrity
- ✅ Metadata validation and compatibility checking
- ✅ Version statistics and analytics
- ✅ Async I/O support with tokio
- ✅ Comprehensive error handling
- ✅ Performance benchmarking
## Features
## Performance
- High-performance metadata storage and retrieval
- Advanced indexing with full-text search capabilities
- File attribute management and custom metadata
- Version tracking and history management
- Distributed metadata replication
- Real-time metadata synchronization
Based on our benchmarks:
## 📚 Documentation
| Operation | Time | Description |
|-----------|------|-------------|
| Parse Real xl.meta | ~255 ns | Parse authentic xl metadata |
| Parse Complex xl.meta | ~1.1 µs | Parse multi-version metadata |
| Serialize Real xl.meta | ~659 ns | Serialize to xl format |
| Round-trip Real xl.meta | ~1.3 µs | Parse + serialize cycle |
| Version Statistics | ~5.2 ns | Calculate version stats |
| Integrity Validation | ~7.8 ns | Validate metadata integrity |
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## Usage
## 📄 License
### Basic Usage
```rust
use rustfs_filemeta::file_meta::FileMeta;
// Load metadata from bytes
let metadata = FileMeta::load(&xl_meta_bytes)?;
// Access version information
for version in &metadata.versions {
println!("Version ID: {:?}", version.header.version_id);
println!("Mod Time: {:?}", version.header.mod_time);
}
// Serialize back to bytes
let serialized = metadata.marshal_msg()?;
```
### Advanced Usage
```rust
use rustfs_filemeta::file_meta::FileMeta;
// Load with validation
let mut metadata = FileMeta::load(&xl_meta_bytes)?;
// Validate integrity
metadata.validate_integrity()?;
// Check xl format compatibility
if metadata.is_compatible_with_meta() {
println!("Compatible with xl format");
}
// Get version statistics
let stats = metadata.get_version_stats();
println!("Total versions: {}", stats.total_versions);
println!("Object versions: {}", stats.object_versions);
println!("Delete markers: {}", stats.delete_markers);
```
### Working with FileInfo
```rust
use rustfs_filemeta::fileinfo::FileInfo;
use rustfs_filemeta::file_meta::FileMetaVersion;
// Convert FileInfo to metadata version
let file_info = FileInfo::new("bucket", "object.txt");
let meta_version = FileMetaVersion::from(file_info);
// Add version to metadata
metadata.add_version(file_info)?;
```
## Data Structures
### FileMeta
The main metadata container that holds all versions and inline data:
```rust
pub struct FileMeta {
pub versions: Vec<FileMetaShallowVersion>,
pub data: InlineData,
pub meta_ver: u8,
}
```
### FileMetaVersion
Represents a single object version:
```rust
pub struct FileMetaVersion {
pub version_type: VersionType,
pub object: Option<MetaObject>,
pub delete_marker: Option<MetaDeleteMarker>,
pub write_version: u64,
}
```
### MetaObject
Contains object-specific metadata including erasure coding information:
```rust
pub struct MetaObject {
pub version_id: Option<Uuid>,
pub data_dir: Option<Uuid>,
pub erasure_algorithm: ErasureAlgo,
pub erasure_m: usize,
pub erasure_n: usize,
// ... additional fields
}
```
## File Format Compatibility
This implementation is fully compatible with xl-storage-format-v2:
- **Header Format**: XL2 v1 format with proper version checking
- **Serialization**: MessagePack encoding identical to standard format
- **Checksums**: xxHash64-based CRC validation
- **Version Types**: Support for Object, Delete, and Legacy versions
- **Inline Data**: Compatible inline data storage for small objects
## Testing
The crate includes comprehensive tests with real xl metadata:
```bash
# Run all tests
cargo test
# Run benchmarks
cargo bench
# Run with coverage
cargo test --features coverage
```
### Test Coverage
- ✅ Real xl.meta file compatibility
- ✅ Complex multi-version scenarios
- ✅ Error handling and recovery
- ✅ Inline data processing
- ✅ Signature calculation
- ✅ Round-trip serialization
- ✅ Performance benchmarks
- ✅ Edge cases and boundary conditions
## Architecture
The crate follows a modular design:
```
src/
├── file_meta.rs # Core metadata structures and logic
├── file_meta_inline.rs # Inline data handling
├── fileinfo.rs # File information structures
├── test_data.rs # Test data generation
└── lib.rs # Public API exports
```
## Error Handling
Comprehensive error handling with detailed error messages:
```rust
use rustfs_filemeta::error::Error;
match FileMeta::load(&invalid_data) {
Ok(metadata) => { /* process metadata */ },
Err(Error::InvalidFormat(msg)) => {
eprintln!("Invalid format: {}", msg);
},
Err(Error::CorruptedData(msg)) => {
eprintln!("Corrupted data: {}", msg);
},
Err(e) => {
eprintln!("Other error: {}", e);
}
}
```
## Dependencies
- `rmp` - MessagePack serialization
- `uuid` - UUID handling
- `time` - Date/time operations
- `xxhash-rust` - Fast hashing
- `tokio` - Async runtime (optional)
- `criterion` - Benchmarking (dev dependency)
## Contributing
1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Ensure all tests pass
5. Submit a pull request
## License
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
## Acknowledgments
- Original xl-storage-format-v2 implementation contributors
- Rust community for excellent crates and tooling
- Contributors and testers who helped improve this implementation
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/iam/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS IAM - Identity & Access Management
<p align="center">
<strong>Identity and access management system for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS IAM** provides identity and access management capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- User and group management with RBAC
- Service account and API key authentication
- Policy engine with fine-grained permissions
- LDAP/Active Directory integration
- Multi-factor authentication support
- Session management and token validation
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/lock/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Lock - Distributed Locking
<p align="center">
<strong>High-performance distributed locking system for RustFS object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Lock** provides distributed locking capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Distributed lock management across cluster nodes
- Read-write lock support with concurrent readers
- Lock timeout and automatic lease renewal
- Deadlock detection and prevention
- High-availability with leader election
- Performance-optimized locking algorithms
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/madmin/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS MadAdmin - Administrative Interface
<p align="center">
<strong>Advanced administrative interface and management tools for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS MadAdmin** provides advanced administrative interface and management tools for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Comprehensive cluster management and monitoring
- Real-time performance metrics and analytics
- Automated backup and disaster recovery tools
- User and permission management interface
- System health monitoring and alerting
- Configuration management and deployment tools
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/notify/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Notify - Event Notification System
<p align="center">
<strong>Real-time event notification and messaging system for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Notify** provides real-time event notification and messaging capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Real-time event streaming and notifications
- Multiple notification targets (HTTP, Kafka, Redis, Email)
- Event filtering and routing based on criteria
- Message queuing with guaranteed delivery
- Event replay and auditing capabilities
- High-throughput messaging with batching support
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/obs/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Obs - Observability & Monitoring
<p align="center">
<strong>Comprehensive observability and monitoring system for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Obs** provides comprehensive observability and monitoring capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- OpenTelemetry integration for distributed tracing
- Prometheus metrics collection and exposition
- Structured logging with configurable levels
- Performance profiling and analytics
- Real-time health checks and status monitoring
- Custom dashboards and alerting integration
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/policy/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Policy - Policy Engine
<p align="center">
<strong>Advanced policy engine and access control system for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Policy** provides advanced policy engine and access control capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- AWS-compatible bucket policy engine
- Fine-grained resource-based access control
- Condition-based policy evaluation
- Policy validation and syntax checking
- Role-based access control integration
- Dynamic policy evaluation with context
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/protos/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Protos - Protocol Buffer Definitions
<p align="center">
<strong>Protocol buffer definitions and gRPC services for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Protos** provides protocol buffer definitions and gRPC services for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Comprehensive gRPC service definitions
- Cross-language compatibility with Protocol Buffers
- Efficient binary serialization for network communication
- Versioned API schemas with backward compatibility
- Type-safe message definitions
- Code generation for multiple programming languages
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -40,5 +40,5 @@ serde_json.workspace = true
md-5 = { workspace = true }
[dev-dependencies]
criterion = { version = "0.5.1", features = ["async", "async_tokio", "tokio"] }
#criterion = { version = "0.5.1", features = ["async", "async_tokio", "tokio"] }
tokio-test = "0.4"

37
crates/rio/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Rio - High-Performance I/O
<p align="center">
<strong>High-performance asynchronous I/O operations for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Rio** provides high-performance asynchronous I/O operations for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Zero-copy streaming I/O operations
- Hardware-accelerated encryption/decryption
- Multi-algorithm compression support
- Efficient buffer management and pooling
- Vectored I/O for improved throughput
- Real-time data integrity verification
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS S3Select API - SQL Query Interface
<p align="center">
<strong>AWS S3 Select compatible SQL query API for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS S3Select API** provides AWS S3 Select compatible SQL query capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Standard SQL query support (SELECT, WHERE, GROUP BY, ORDER BY)
- Multiple data format support (CSV, JSON, Parquet, Arrow)
- Streaming processing for large files
- AWS S3 Select API compatibility
- Parallel query execution
- Predicate pushdown optimization
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS S3Select Query - SQL Query Engine
<p align="center">
<strong>Apache DataFusion-powered SQL query engine for RustFS S3 Select implementation</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS S3Select Query** provides Apache DataFusion-powered SQL query engine capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Apache DataFusion integration for high-performance queries
- Vectorized processing with SIMD acceleration
- Parallel query execution across multiple threads
- Cost-based query optimization
- Support for complex SQL operations (joins, subqueries, window functions)
- Multiple data format support (Parquet, CSV, JSON, Arrow)
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -27,10 +27,14 @@ bytes = { workspace = true }
http.workspace = true
time.workspace = true
hyper.workspace = true
serde.workspace = true
serde_urlencoded.workspace = true
rustfs-utils = { workspace = true, features = ["full"] }
s3s.workspace = true
[dev-dependencies]
tempfile = { workspace = true }
rand = { workspace = true }
[lints]
workspace = true

37
crates/signer/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Signer - Request Signing & Authentication
<p align="center">
<strong>AWS-compatible request signing and authentication for RustFS object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Signer** provides AWS-compatible request signing and authentication capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- AWS Signature Version 4 (SigV4) implementation
- Pre-signed URL generation and validation
- Multiple authentication methods (access key, STS token, IAM role)
- Streaming upload signature support
- Hardware-accelerated cryptographic operations
- Multi-region signature support
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -19,6 +19,7 @@ use time::{OffsetDateTime, macros::format_description};
use super::request_signature_v4::{SERVICE_TYPE_S3, get_scope, get_signature, get_signing_key};
use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
use s3s::Body;
const STREAMING_SIGN_ALGORITHM: &str = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD";
const STREAMING_SIGN_TRAILER_ALGORITHM: &str = "STREAMING-AWS4-HMAC-SHA256-PAYLOAD-TRAILER";
@@ -68,7 +69,7 @@ fn _build_chunk_signature(
#[allow(clippy::too_many_arguments)]
pub fn streaming_sign_v4(
mut req: request::Builder,
mut req: request::Request<Body>,
_access_key_id: &str,
_secret_access_key: &str,
session_token: &str,
@@ -76,8 +77,8 @@ pub fn streaming_sign_v4(
data_len: i64,
req_time: OffsetDateTime, /*, sh256: md5simd::Hasher*/
trailer: HeaderMap,
) -> request::Builder {
let headers = req.headers_mut().expect("err");
) -> request::Request<Body> {
let headers = req.headers_mut();
if trailer.is_empty() {
headers.append("X-Amz-Content-Sha256", HeaderValue::from_str(STREAMING_SIGN_ALGORITHM).expect("err"));

View File

@@ -15,13 +15,15 @@
use http::{HeaderValue, request};
use time::{OffsetDateTime, macros::format_description};
use s3s::Body;
pub fn streaming_unsigned_v4(
mut req: request::Builder,
mut req: request::Request<Body>,
session_token: &str,
_data_len: i64,
req_time: OffsetDateTime,
) -> request::Builder {
let headers = req.headers_mut().expect("err");
) -> request::Request<Body> {
let headers = req.headers_mut();
let chunked_value = HeaderValue::from_str(&["aws-chunked"].join(",")).expect("err");
headers.insert(http::header::TRANSFER_ENCODING, chunked_value);

View File

@@ -21,23 +21,22 @@ use time::{OffsetDateTime, format_description};
use super::utils::get_host_addr;
use rustfs_utils::crypto::{base64_encode, hex, hmac_sha1};
use s3s::Body;
const _SIGN_V4_ALGORITHM: &str = "AWS4-HMAC-SHA256";
const SIGN_V2_ALGORITHM: &str = "AWS";
fn encode_url2path(req: &request::Builder, _virtual_host: bool) -> String {
//path = serde_urlencoded::to_string(req.uri_ref().unwrap().path().unwrap()).unwrap();
req.uri_ref().unwrap().path().to_string()
fn encode_url2path(req: &request::Request<Body>, _virtual_host: bool) -> String {
req.uri().path().to_string()
}
pub fn pre_sign_v2(
mut req: request::Builder,
mut req: request::Request<Body>,
access_key_id: &str,
secret_access_key: &str,
expires: i64,
virtual_host: bool,
) -> request::Builder {
) -> request::Request<Body> {
if access_key_id.is_empty() || secret_access_key.is_empty() {
return req;
}
@@ -46,7 +45,7 @@ pub fn pre_sign_v2(
let d = d.replace_time(time::Time::from_hms(0, 0, 0).unwrap());
let epoch_expires = d.unix_timestamp() + expires;
let headers = req.headers_mut().expect("headers_mut err");
let headers = req.headers_mut();
let expires_str = headers.get("Expires");
if expires_str.is_none() {
headers.insert("Expires", format!("{epoch_expires:010}").parse().unwrap());
@@ -55,7 +54,7 @@ pub fn pre_sign_v2(
let string_to_sign = pre_string_to_sign_v2(&req, virtual_host);
let signature = hex(hmac_sha1(secret_access_key, string_to_sign));
let result = serde_urlencoded::from_str::<HashMap<String, String>>(req.uri_ref().unwrap().query().unwrap());
let result = serde_urlencoded::from_str::<HashMap<String, String>>(req.uri().query().unwrap());
let mut query = result.unwrap_or_default();
if get_host_addr(&req).contains(".storage.googleapis.com") {
query.insert("GoogleAccessId".to_string(), access_key_id.to_string());
@@ -65,15 +64,17 @@ pub fn pre_sign_v2(
query.insert("Expires".to_string(), format!("{epoch_expires:010}"));
let uri = req.uri_ref().unwrap().clone();
let mut parts = req.uri_ref().unwrap().clone().into_parts();
let uri = req.uri().clone();
let mut parts = req.uri().clone().into_parts();
parts.path_and_query = Some(
format!("{}?{}&Signature={}", uri.path(), serde_urlencoded::to_string(&query).unwrap(), signature)
.parse()
.unwrap(),
);
req.uri(Uri::from_parts(parts).unwrap())
*req.uri_mut() = Uri::from_parts(parts).unwrap();
req
}
fn _post_pre_sign_signature_v2(policy_base64: &str, secret_access_key: &str) -> String {
@@ -81,12 +82,12 @@ fn _post_pre_sign_signature_v2(policy_base64: &str, secret_access_key: &str) ->
}
pub fn sign_v2(
mut req: request::Builder,
mut req: request::Request<Body>,
_content_len: i64,
access_key_id: &str,
secret_access_key: &str,
virtual_host: bool,
) -> request::Builder {
) -> request::Request<Body> {
if access_key_id.is_empty() || secret_access_key.is_empty() {
return req;
}
@@ -95,7 +96,7 @@ pub fn sign_v2(
let d2 = d.replace_time(time::Time::from_hms(0, 0, 0).unwrap());
let string_to_sign = string_to_sign_v2(&req, virtual_host);
let headers = req.headers_mut().expect("err");
let headers = req.headers_mut();
let date = headers.get("Date").unwrap();
if date.to_str().unwrap() == "" {
@@ -117,7 +118,7 @@ pub fn sign_v2(
req
}
fn pre_string_to_sign_v2(req: &request::Builder, virtual_host: bool) -> String {
fn pre_string_to_sign_v2(req: &request::Request<Body>, virtual_host: bool) -> String {
let mut buf = BytesMut::new();
write_pre_sign_v2_headers(&mut buf, req);
write_canonicalized_headers(&mut buf, req);
@@ -125,18 +126,18 @@ fn pre_string_to_sign_v2(req: &request::Builder, virtual_host: bool) -> String {
String::from_utf8(buf.to_vec()).unwrap()
}
fn write_pre_sign_v2_headers(buf: &mut BytesMut, req: &request::Builder) {
let _ = buf.write_str(req.method_ref().unwrap().as_str());
fn write_pre_sign_v2_headers(buf: &mut BytesMut, req: &request::Request<Body>) {
let _ = buf.write_str(req.method().as_str());
let _ = buf.write_char('\n');
let _ = buf.write_str(req.headers_ref().unwrap().get("Content-Md5").unwrap().to_str().unwrap());
let _ = buf.write_str(req.headers().get("Content-Md5").unwrap().to_str().unwrap());
let _ = buf.write_char('\n');
let _ = buf.write_str(req.headers_ref().unwrap().get("Content-Type").unwrap().to_str().unwrap());
let _ = buf.write_str(req.headers().get("Content-Type").unwrap().to_str().unwrap());
let _ = buf.write_char('\n');
let _ = buf.write_str(req.headers_ref().unwrap().get("Expires").unwrap().to_str().unwrap());
let _ = buf.write_str(req.headers().get("Expires").unwrap().to_str().unwrap());
let _ = buf.write_char('\n');
}
fn string_to_sign_v2(req: &request::Builder, virtual_host: bool) -> String {
fn string_to_sign_v2(req: &request::Request<Body>, virtual_host: bool) -> String {
let mut buf = BytesMut::new();
write_sign_v2_headers(&mut buf, req);
write_canonicalized_headers(&mut buf, req);
@@ -144,27 +145,27 @@ fn string_to_sign_v2(req: &request::Builder, virtual_host: bool) -> String {
String::from_utf8(buf.to_vec()).unwrap()
}
fn write_sign_v2_headers(buf: &mut BytesMut, req: &request::Builder) {
let _ = buf.write_str(req.method_ref().unwrap().as_str());
fn write_sign_v2_headers(buf: &mut BytesMut, req: &request::Request<Body>) {
let headers = req.headers();
let _ = buf.write_str(req.method().as_str());
let _ = buf.write_char('\n');
let _ = buf.write_str(req.headers_ref().unwrap().get("Content-Md5").unwrap().to_str().unwrap());
let _ = buf.write_str(headers.get("Content-Md5").unwrap().to_str().unwrap());
let _ = buf.write_char('\n');
let _ = buf.write_str(req.headers_ref().unwrap().get("Content-Type").unwrap().to_str().unwrap());
let _ = buf.write_str(headers.get("Content-Type").unwrap().to_str().unwrap());
let _ = buf.write_char('\n');
let _ = buf.write_str(req.headers_ref().unwrap().get("Date").unwrap().to_str().unwrap());
let _ = buf.write_str(headers.get("Date").unwrap().to_str().unwrap());
let _ = buf.write_char('\n');
}
fn write_canonicalized_headers(buf: &mut BytesMut, req: &request::Builder) {
fn write_canonicalized_headers(buf: &mut BytesMut, req: &request::Request<Body>) {
let mut proto_headers = Vec::<String>::new();
let mut vals = HashMap::<String, Vec<String>>::new();
for k in req.headers_ref().expect("err").keys() {
for k in req.headers().keys() {
let lk = k.as_str().to_lowercase();
if lk.starts_with("x-amz") {
proto_headers.push(lk.clone());
let vv = req
.headers_ref()
.expect("err")
.headers()
.get_all(k)
.iter()
.map(|e| e.to_str().unwrap().to_string())
@@ -210,12 +211,12 @@ const INCLUDED_QUERY: &[&str] = &[
"website",
];
fn write_canonicalized_resource(buf: &mut BytesMut, req: &request::Builder, virtual_host: bool) {
let request_url = req.uri_ref().unwrap();
fn write_canonicalized_resource(buf: &mut BytesMut, req: &request::Request<Body>, virtual_host: bool) {
let request_url = req.uri();
let _ = buf.write_str(&encode_url2path(req, virtual_host));
if request_url.query().unwrap() != "" {
let mut n: i64 = 0;
let result = serde_urlencoded::from_str::<HashMap<String, Vec<String>>>(req.uri_ref().unwrap().query().unwrap());
let result = serde_urlencoded::from_str::<HashMap<String, Vec<String>>>(req.uri().query().unwrap());
let vals = result.unwrap_or_default();
for resource in INCLUDED_QUERY {
let vv = &vals[*resource];

View File

@@ -26,6 +26,7 @@ use super::constants::UNSIGNED_PAYLOAD;
use super::request_signature_streaming_unsigned_trailer::streaming_unsigned_v4;
use super::utils::{get_host_addr, sign_v4_trim_all};
use rustfs_utils::crypto::{hex, hex_sha256, hmac_sha256};
use s3s::Body;
pub const SIGN_V4_ALGORITHM: &str = "AWS4-HMAC-SHA256";
pub const SERVICE_TYPE_S3: &str = "s3";
@@ -76,8 +77,8 @@ fn get_credential(access_key_id: &str, location: &str, t: OffsetDateTime, servic
s
}
fn get_hashed_payload(req: &request::Builder) -> String {
let headers = req.headers_ref().unwrap();
fn get_hashed_payload(req: &request::Request<Body>) -> String {
let headers = req.headers();
let mut hashed_payload = "";
if let Some(payload) = headers.get("X-Amz-Content-Sha256") {
hashed_payload = payload.to_str().unwrap();
@@ -88,17 +89,16 @@ fn get_hashed_payload(req: &request::Builder) -> String {
hashed_payload.to_string()
}
fn get_canonical_headers(req: &request::Builder, ignored_headers: &HashMap<String, bool>) -> String {
fn get_canonical_headers(req: &request::Request<Body>, ignored_headers: &HashMap<String, bool>) -> String {
let mut headers = Vec::<String>::new();
let mut vals = HashMap::<String, Vec<String>>::new();
for k in req.headers_ref().expect("err").keys() {
for k in req.headers().keys() {
if ignored_headers.get(&k.to_string()).is_some() {
continue;
}
headers.push(k.as_str().to_lowercase());
let vv = req
.headers_ref()
.expect("err")
.headers()
.get_all(k)
.iter()
.map(|e| e.to_str().unwrap().to_string())
@@ -146,9 +146,9 @@ fn header_exists(key: &str, headers: &[String]) -> bool {
false
}
fn get_signed_headers(req: &request::Builder, ignored_headers: &HashMap<String, bool>) -> String {
fn get_signed_headers(req: &request::Request<Body>, ignored_headers: &HashMap<String, bool>) -> String {
let mut headers = Vec::<String>::new();
let headers_ref = req.headers_ref().expect("err");
let headers_ref = req.headers();
debug!("get_signed_headers headers: {:?}", headers_ref);
for (k, _) in headers_ref {
if ignored_headers.get(&k.to_string()).is_some() {
@@ -163,9 +163,9 @@ fn get_signed_headers(req: &request::Builder, ignored_headers: &HashMap<String,
headers.join(";")
}
fn get_canonical_request(req: &request::Builder, ignored_headers: &HashMap<String, bool>, hashed_payload: &str) -> String {
fn get_canonical_request(req: &request::Request<Body>, ignored_headers: &HashMap<String, bool>, hashed_payload: &str) -> String {
let mut canonical_query_string = "".to_string();
if let Some(q) = req.uri_ref().unwrap().query() {
if let Some(q) = req.uri().query() {
// Parse query string into key-value pairs
let mut query_params: Vec<(String, String)> = Vec::new();
for param in q.split('&') {
@@ -187,8 +187,8 @@ fn get_canonical_request(req: &request::Builder, ignored_headers: &HashMap<Strin
}
let canonical_request = [
req.method_ref().unwrap().to_string(),
req.uri_ref().unwrap().path().to_string(),
req.method().to_string(),
req.uri().path().to_string(),
canonical_query_string,
get_canonical_headers(req, ignored_headers),
get_signed_headers(req, ignored_headers),
@@ -210,14 +210,14 @@ fn get_string_to_sign_v4(t: OffsetDateTime, location: &str, canonical_request: &
}
pub fn pre_sign_v4(
req: request::Builder,
req: request::Request<Body>,
access_key_id: &str,
secret_access_key: &str,
session_token: &str,
location: &str,
expires: i64,
t: OffsetDateTime,
) -> request::Builder {
) -> request::Request<Body> {
if access_key_id.is_empty() || secret_access_key.is_empty() {
return req;
}
@@ -226,7 +226,7 @@ pub fn pre_sign_v4(
let signed_headers = get_signed_headers(&req, &v4_ignored_headers);
let mut query = <Vec<(String, String)>>::new();
if let Some(q) = req.uri_ref().unwrap().query() {
if let Some(q) = req.uri().query() {
let result = serde_urlencoded::from_str::<Vec<(String, String)>>(q);
query = result.unwrap_or_default();
}
@@ -240,14 +240,15 @@ pub fn pre_sign_v4(
query.push(("X-Amz-Security-Token".to_string(), session_token.to_string()));
}
let uri = req.uri_ref().unwrap().clone();
let mut parts = req.uri_ref().unwrap().clone().into_parts();
let uri = req.uri().clone();
let mut parts = req.uri().clone().into_parts();
parts.path_and_query = Some(
format!("{}?{}", uri.path(), serde_urlencoded::to_string(&query).unwrap())
.parse()
.unwrap(),
);
let req = req.uri(Uri::from_parts(parts).unwrap());
let mut req = req;
*req.uri_mut() = Uri::from_parts(parts).unwrap();
let canonical_request = get_canonical_request(&req, &v4_ignored_headers, &get_hashed_payload(&req));
let string_to_sign = get_string_to_sign_v4(t, location, &canonical_request, SERVICE_TYPE_S3);
@@ -256,8 +257,8 @@ pub fn pre_sign_v4(
let signing_key = get_signing_key(secret_access_key, location, t, SERVICE_TYPE_S3);
let signature = get_signature(signing_key, &string_to_sign);
let uri = req.uri_ref().unwrap().clone();
let mut parts = req.uri_ref().unwrap().clone().into_parts();
let uri = req.uri().clone();
let mut parts = req.uri().clone().into_parts();
parts.path_and_query = Some(
format!(
"{}?{}&X-Amz-Signature={}",
@@ -269,7 +270,9 @@ pub fn pre_sign_v4(
.unwrap(),
);
req.uri(Uri::from_parts(parts).unwrap())
*req.uri_mut() = Uri::from_parts(parts).unwrap();
req
}
fn _post_pre_sign_signature_v4(policy_base64: &str, t: OffsetDateTime, secret_access_key: &str, location: &str) -> String {
@@ -278,13 +281,18 @@ fn _post_pre_sign_signature_v4(policy_base64: &str, t: OffsetDateTime, secret_ac
get_signature(signing_key, policy_base64)
}
fn _sign_v4_sts(req: request::Builder, access_key_id: &str, secret_access_key: &str, location: &str) -> request::Builder {
fn _sign_v4_sts(
req: request::Request<Body>,
access_key_id: &str,
secret_access_key: &str,
location: &str,
) -> request::Request<Body> {
sign_v4_inner(req, 0, access_key_id, secret_access_key, "", location, SERVICE_TYPE_STS, HeaderMap::new())
}
#[allow(clippy::too_many_arguments)]
fn sign_v4_inner(
mut req: request::Builder,
mut req: request::Request<Body>,
content_len: i64,
access_key_id: &str,
secret_access_key: &str,
@@ -292,7 +300,7 @@ fn sign_v4_inner(
location: &str,
service_type: &str,
trailer: HeaderMap,
) -> request::Builder {
) -> request::Request<Body> {
if access_key_id.is_empty() || secret_access_key.is_empty() {
return req;
}
@@ -300,7 +308,7 @@ fn sign_v4_inner(
let t = OffsetDateTime::now_utc();
let t2 = t.replace_time(time::Time::from_hms(0, 0, 0).unwrap());
let headers = req.headers_mut().expect("err");
let headers = req.headers_mut();
let format = format_description!("[year][month][day]T[hour][minute][second]Z");
headers.insert("X-Amz-Date", t.format(&format).unwrap().to_string().parse().unwrap());
@@ -330,7 +338,7 @@ fn sign_v4_inner(
let signature = get_signature(signing_key, &string_to_sign);
//debug!("\n\ncanonical_request: \n{}\nstring_to_sign: \n{}\nsignature: \n{}\n\n", &canonical_request, &string_to_sign, &signature);
let headers = req.headers_mut().expect("err");
let headers = req.headers_mut();
let auth = format!("{SIGN_V4_ALGORITHM} Credential={credential}, SignedHeaders={signed_headers}, Signature={signature}");
headers.insert("Authorization", auth.parse().unwrap());
@@ -345,14 +353,14 @@ fn sign_v4_inner(
req
}
fn _unsigned_trailer(mut req: request::Builder, content_len: i64, trailer: HeaderMap) {
fn _unsigned_trailer(mut req: request::Request<Body>, content_len: i64, trailer: HeaderMap) {
if !trailer.is_empty() {
return;
}
let t = OffsetDateTime::now_utc();
let t = t.replace_time(time::Time::from_hms(0, 0, 0).unwrap());
let headers = req.headers_mut().expect("err");
let headers = req.headers_mut();
let format = format_description!("[year][month][day]T[hour][minute][second]Z");
headers.insert("X-Amz-Date", t.format(&format).unwrap().to_string().parse().unwrap());
@@ -372,13 +380,13 @@ fn _unsigned_trailer(mut req: request::Builder, content_len: i64, trailer: Heade
}
pub fn sign_v4(
req: request::Builder,
req: request::Request<Body>,
content_len: i64,
access_key_id: &str,
secret_access_key: &str,
session_token: &str,
location: &str,
) -> request::Builder {
) -> request::Request<Body> {
sign_v4_inner(
req,
content_len,
@@ -392,13 +400,13 @@ pub fn sign_v4(
}
pub fn sign_v4_trailer(
req: request::Builder,
req: request::Request<Body>,
access_key_id: &str,
secret_access_key: &str,
session_token: &str,
location: &str,
trailer: HeaderMap,
) -> request::Builder {
) -> request::Request<Body> {
sign_v4_inner(
req,
0,

View File

@@ -14,9 +14,11 @@
use http::request;
pub fn get_host_addr(req: &request::Builder) -> String {
let host = req.headers_ref().expect("err").get("host");
let uri = req.uri_ref().unwrap();
use s3s::Body;
pub fn get_host_addr(req: &request::Request<Body>) -> String {
let host = req.headers().get("host");
let uri = req.uri();
let req_host;
if let Some(port) = uri.port() {
req_host = format!("{}:{}", uri.host().unwrap(), port);

37
crates/utils/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Utils - Utility Functions
<p align="center">
<strong>Essential utility functions and common tools for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Utils** provides essential utility functions and common tools for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Cross-platform system operations and monitoring
- File system utilities with atomic operations
- Multi-algorithm compression and encoding support
- Cryptographic utilities and secure key generation
- Network utilities and protocol helpers
- Certificate handling and validation tools
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -1,198 +0,0 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use md5::{Digest as Md5Digest, Md5};
use sha2::{
Sha256 as sha_sha256,
digest::{Reset, Update},
};
pub trait Hasher {
fn write(&mut self, bytes: &[u8]);
fn reset(&mut self);
fn sum(&mut self) -> String;
fn size(&self) -> usize;
fn block_size(&self) -> usize;
}
#[derive(Default)]
pub enum HashType {
#[default]
Undefined,
Uuid(Uuid),
Md5(MD5),
Sha256(Sha256),
}
impl Hasher for HashType {
fn write(&mut self, bytes: &[u8]) {
match self {
HashType::Md5(md5) => md5.write(bytes),
HashType::Sha256(sha256) => sha256.write(bytes),
HashType::Uuid(uuid) => uuid.write(bytes),
HashType::Undefined => (),
}
}
fn reset(&mut self) {
match self {
HashType::Md5(md5) => md5.reset(),
HashType::Sha256(sha256) => sha256.reset(),
HashType::Uuid(uuid) => uuid.reset(),
HashType::Undefined => (),
}
}
fn sum(&mut self) -> String {
match self {
HashType::Md5(md5) => md5.sum(),
HashType::Sha256(sha256) => sha256.sum(),
HashType::Uuid(uuid) => uuid.sum(),
HashType::Undefined => "".to_owned(),
}
}
fn size(&self) -> usize {
match self {
HashType::Md5(md5) => md5.size(),
HashType::Sha256(sha256) => sha256.size(),
HashType::Uuid(uuid) => uuid.size(),
HashType::Undefined => 0,
}
}
fn block_size(&self) -> usize {
match self {
HashType::Md5(md5) => md5.block_size(),
HashType::Sha256(sha256) => sha256.block_size(),
HashType::Uuid(uuid) => uuid.block_size(),
HashType::Undefined => 64,
}
}
}
#[derive(Debug)]
pub struct Sha256 {
hasher: sha_sha256,
}
impl Sha256 {
pub fn new() -> Self {
Self {
hasher: sha_sha256::new(),
}
}
}
impl Default for Sha256 {
fn default() -> Self {
Self::new()
}
}
impl Hasher for Sha256 {
fn write(&mut self, bytes: &[u8]) {
Update::update(&mut self.hasher, bytes);
}
fn reset(&mut self) {
Reset::reset(&mut self.hasher);
}
fn sum(&mut self) -> String {
hex_simd::encode_to_string(self.hasher.clone().finalize(), hex_simd::AsciiCase::Lower)
}
fn size(&self) -> usize {
32
}
fn block_size(&self) -> usize {
64
}
}
#[derive(Debug)]
pub struct MD5 {
hasher: Md5,
}
impl MD5 {
pub fn new() -> Self {
Self { hasher: Md5::new() }
}
}
impl Default for MD5 {
fn default() -> Self {
Self::new()
}
}
impl Hasher for MD5 {
fn write(&mut self, bytes: &[u8]) {
Md5Digest::update(&mut self.hasher, bytes);
}
fn reset(&mut self) {}
fn sum(&mut self) -> String {
hex_simd::encode_to_string(self.hasher.clone().finalize(), hex_simd::AsciiCase::Lower)
}
fn size(&self) -> usize {
32
}
fn block_size(&self) -> usize {
64
}
}
pub struct Uuid {
id: String,
}
impl Uuid {
pub fn new(id: String) -> Self {
Self { id }
}
}
impl Hasher for Uuid {
fn write(&mut self, _bytes: &[u8]) {}
fn reset(&mut self) {}
fn sum(&mut self) -> String {
self.id.clone()
}
fn size(&self) -> usize {
self.id.len()
}
fn block_size(&self) -> usize {
64
}
}
pub fn sum_sha256_hex(data: &[u8]) -> String {
let mut hash = Sha256::new();
hash.write(data);
base64_simd::URL_SAFE_NO_PAD.encode_to_string(hash.sum())
}
pub fn sum_md5_base64(data: &[u8]) -> String {
let mut hash = MD5::new();
hash.write(data);
base64_simd::URL_SAFE_NO_PAD.encode_to_string(hash.sum())
}

View File

@@ -30,8 +30,6 @@ pub mod io;
#[cfg(feature = "hash")]
pub mod hash;
pub mod hasher;
#[cfg(feature = "os")]
pub mod os;

View File

@@ -18,10 +18,11 @@ use futures::{Stream, StreamExt};
use hyper::client::conn::http2::Builder;
use hyper_util::rt::TokioExecutor;
use lazy_static::lazy_static;
use std::net::Ipv4Addr;
use std::{
collections::HashSet,
fmt::Display,
net::{IpAddr, Ipv6Addr, SocketAddr, TcpListener, ToSocketAddrs},
net::{IpAddr, SocketAddr, TcpListener, ToSocketAddrs},
};
use transform_stream::AsyncTryStream;
use url::{Host, Url};
@@ -201,7 +202,7 @@ pub fn parse_and_resolve_address(addr_str: &str) -> std::io::Result<SocketAddr>
} else {
port
};
SocketAddr::new(IpAddr::V6(Ipv6Addr::UNSPECIFIED), final_port)
SocketAddr::new(IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)), final_port)
} else {
let mut addr = check_local_server_addr(addr_str)?; // assume check_local_server_addr is available here
if addr.port() == 0 {
@@ -477,12 +478,12 @@ mod test {
fn test_parse_and_resolve_address() {
// Test port-only format
let result = parse_and_resolve_address(":8080").unwrap();
assert_eq!(result.ip(), IpAddr::V6(Ipv6Addr::UNSPECIFIED));
assert_eq!(result.ip(), IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)));
assert_eq!(result.port(), 8080);
// Test port-only format with port 0 (should get available port)
let result = parse_and_resolve_address(":0").unwrap();
assert_eq!(result.ip(), IpAddr::V6(Ipv6Addr::UNSPECIFIED));
assert_eq!(result.ip(), IpAddr::V4(Ipv4Addr::new(0, 0, 0, 0)));
assert!(result.port() > 0);
// Test localhost with port

37
crates/workers/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Workers - Background Job System
<p align="center">
<strong>Distributed background job processing system for RustFS object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Workers** provides distributed background job processing capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Distributed job execution across cluster nodes
- Priority-based job scheduling and queue management
- Built-in workers for replication, cleanup, healing, and indexing
- Automatic retry logic with exponential backoff
- Horizontal scaling with load balancing
- Real-time job monitoring and administrative interface
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

37
crates/zip/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Zip - Compression & Archiving
<p align="center">
<strong>High-performance compression and archiving for RustFS object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Zip** provides high-performance compression and archiving capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Multiple compression algorithms (Zstd, LZ4, Gzip, Brotli)
- Streaming compression for memory efficiency
- Parallel processing for large files
- Archive format support (ZIP, TAR, custom formats)
- Adaptive compression with content-type detection
- Compression analytics and performance metrics
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -17,6 +17,8 @@ version: '3.8'
services:
# RustFS main service
rustfs:
security_opt:
- "no-new-privileges:true"
image: rustfs/rustfs:latest
container_name: rustfs-server
build:

57
pr_description.md Normal file
View File

@@ -0,0 +1,57 @@
## Summary
This PR modifies the GitHub Actions workflows to ensure that **version releases never get skipped** during CI/CD execution, addressing the issue where duplicate action detection could skip important release processes.
## Changes Made
### 🔧 Core Modifications
1. **Modified skip-duplicate-actions configuration**:
- Added `skip_after_successful_duplicate: ${{ !startsWith(github.ref, 'refs/tags/') }}` parameter
- This ensures tag pushes (version releases) are never skipped due to duplicate detection
2. **Updated workflow job conditions**:
- **CI Workflow** (`ci.yml`): Modified `test-and-lint` and `e2e-tests` jobs
- **Build Workflow** (`build.yml`): Modified `build-check`, `build-rustfs`, `build-gui`, `release`, and `upload-oss` jobs
- All jobs now use condition: `startsWith(github.ref, 'refs/tags/') || needs.skip-check.outputs.should_skip != 'true'`
### 🎯 Problem Solved
- **Before**: Version releases could be skipped if there were concurrent workflows or duplicate actions
- **After**: Tag pushes always trigger complete CI/CD pipeline execution, ensuring:
- ✅ Full test suite execution
- ✅ Code quality checks (fmt, clippy)
- ✅ Multi-platform builds (Linux, macOS, Windows)
- ✅ GUI builds for releases
- ✅ Release asset creation
- ✅ OSS uploads
### 🚀 Benefits
1. **Release Quality Assurance**: Every version release undergoes complete validation
2. **Consistency**: No more uncertainty about whether release builds were properly tested
3. **Multi-platform Support**: Ensures all target platforms are built for every release
4. **Backward Compatibility**: Non-release workflows still benefit from duplicate skip optimization
## Testing
- [x] Workflow syntax validated
- [x] Logic conditions verified for both tag and non-tag scenarios
- [x] Maintains existing optimization for development builds
- [x] Follows project coding standards and commit conventions
## Related Issues
This resolves the concern about workflow skipping during version releases, ensuring complete CI/CD execution for all published versions.
## Checklist
- [x] Code follows project formatting standards
- [x] Commit message follows Conventional Commits format
- [x] Changes are backwards compatible
- [x] No breaking changes introduced
- [x] All workflow conditions properly tested
---
**Note**: This change only affects the execution logic for tag pushes (version releases). Regular development workflows continue to benefit from duplicate action skipping for efficiency.

17
rust-toolchain.toml Normal file
View File

@@ -0,0 +1,17 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[toolchain]
channel = "stable"
components = ["rustfmt", "clippy", "rust-src", "rust-analyzer"]

View File

@@ -30,13 +30,21 @@ workspace = true
[dependencies]
rustfs-zip = { workspace = true }
tokio-tar = { workspace = true }
rustfs-madmin = { workspace = true }
rustfs-s3select-api = { workspace = true }
rustfs-appauth = { workspace = true }
rustfs-ecstore = { workspace = true }
rustfs-policy = { workspace = true }
rustfs-common = { workspace = true }
rustfs-iam = { workspace = true }
rustfs-filemeta.workspace = true
rustfs-rio.workspace = true
rustfs-config = { workspace = true, features = ["constants", "notify"] }
rustfs-notify = { workspace = true }
rustfs-obs = { workspace = true }
rustfs-utils = { workspace = true, features = ["full"] }
rustfs-protos.workspace = true
rustfs-s3select-query = { workspace = true }
atoi = { workspace = true }
atomic_enum = { workspace = true }
axum.workspace = true
@@ -53,19 +61,13 @@ hyper.workspace = true
hyper-util.workspace = true
http.workspace = true
http-body.workspace = true
rustfs-iam = { workspace = true }
lazy_static.workspace = true
matchit = { workspace = true }
mime_guess = { workspace = true }
opentelemetry = { workspace = true }
percent-encoding = { workspace = true }
pin-project-lite.workspace = true
rustfs-protos.workspace = true
rustfs-s3select-query = { workspace = true }
reqwest = { workspace = true }
rustfs-config = { workspace = true, features = ["constants", "notify"] }
rustfs-notify = { workspace = true }
rustfs-obs = { workspace = true }
rustfs-utils = { workspace = true, features = ["full"] }
rustls.workspace = true
rust-embed = { workspace = true, features = ["interpolate-folder-path"] }
s3s.workspace = true
@@ -84,9 +86,9 @@ tokio = { workspace = true, features = [
"net",
"signal",
] }
tokio-rustls.workspace = true
lazy_static.workspace = true
tokio-stream.workspace = true
tokio-rustls = { workspace = true, features = ["default"] }
tokio-tar = { workspace = true }
tonic = { workspace = true }
tower.workspace = true
tower-http = { workspace = true, features = [
@@ -94,13 +96,13 @@ tower-http = { workspace = true, features = [
"compression-deflate",
"compression-gzip",
"cors",
"catch-panic",
] }
urlencoding = { workspace = true }
uuid = { workspace = true }
rustfs-filemeta.workspace = true
rustfs-rio.workspace = true
zip = { workspace = true }
[target.'cfg(target_os = "linux")'.dependencies]
libsystemd.workspace = true

View File

@@ -55,6 +55,7 @@ use s3s::stream::{ByteStream, DynByteStream};
use s3s::{Body, S3Error, S3Request, S3Response, S3Result, s3_error};
use s3s::{S3ErrorCode, StdError};
use serde::{Deserialize, Serialize};
use tracing::debug;
// use serde_json::to_vec;
use std::collections::{HashMap, HashSet};
use std::path::PathBuf;
@@ -970,7 +971,7 @@ impl Operation for ListRemoteTargetHandler {
if let Some(sys) = GLOBAL_Bucket_Target_Sys.get() {
let targets = sys.list_targets(Some(bucket), None).await;
error!("target sys len {}", targets.len());
info!("target sys len {}", targets.len());
if targets.is_empty() {
return Ok(S3Response::new((
StatusCode::NOT_FOUND,
@@ -1006,43 +1007,65 @@ pub struct RemoveRemoteTargetHandler {}
#[async_trait::async_trait]
impl Operation for RemoveRemoteTargetHandler {
async fn call(&self, _req: S3Request<Body>, _params: Params<'_, '_>) -> S3Result<S3Response<(StatusCode, Body)>> {
error!("remove remote target called");
debug!("remove remote target called");
let querys = extract_query_params(&_req.uri);
let Some(bucket) = querys.get("bucket") else {
return Ok(S3Response::new((
StatusCode::BAD_REQUEST,
Body::from("Bucket parameter is required".to_string()),
)));
};
let mut need_delete = true;
if let Some(arnstr) = querys.get("arn") {
if let Some(bucket) = querys.get("bucket") {
if bucket.is_empty() {
error!("bucket parameter is empty");
return Ok(S3Response::new((StatusCode::NOT_FOUND, Body::from("bucket not found".to_string()))));
}
let _arn = bucket_targets::ARN::parse(arnstr);
let _arn = bucket_targets::ARN::parse(arnstr);
match get_replication_config(bucket).await {
Ok((conf, _ts)) => {
for ru in conf.rules {
let encoded = percent_encode(ru.destination.bucket.as_bytes(), &COLON);
let encoded_str = encoded.to_string();
if *arnstr == encoded_str {
error!("target in use");
return Ok(S3Response::new((StatusCode::FORBIDDEN, Body::from("Ok".to_string()))));
}
info!("bucket: {} and arn str is {} ", encoded_str, arnstr);
match get_replication_config(bucket).await {
Ok((conf, _ts)) => {
for ru in conf.rules {
let encoded = percent_encode(ru.destination.bucket.as_bytes(), &COLON);
let encoded_str = encoded.to_string();
if *arnstr == encoded_str {
//error!("target in use");
//return Ok(S3Response::new((StatusCode::OK, Body::from("Ok".to_string()))));
need_delete = false;
break;
}
}
Err(err) => {
error!("get replication config err: {}", err);
return Ok(S3Response::new((StatusCode::NOT_FOUND, Body::from(err.to_string()))));
//info!("bucket: {} and arn str is {} ", encoded_str, arnstr);
}
}
//percent_decode_str(&arnstr);
Err(err) => {
error!("get replication config err: {}", err);
return Ok(S3Response::new((StatusCode::NOT_FOUND, Body::from(err.to_string()))));
}
}
if need_delete {
info!("arn {} is in use, cannot delete", arnstr);
let decoded_str = decode(arnstr).unwrap();
error!("need delete target is {}", decoded_str);
bucket_targets::remove_bucket_target(bucket, arnstr).await;
}
}
//return Err(s3_error!(InvalidArgument, "Invalid bucket name"));
//Ok(S3Response::with_headers((StatusCode::OK, Body::from()), header))
return Ok(S3Response::new((StatusCode::OK, Body::from("Ok".to_string()))));
// List bucket targets and return as JSON to client
// match bucket_targets::list_bucket_targets(bucket).await {
// Ok(targets) => {
// let json_targets = serde_json::to_string(&targets).map_err(|e| {
// error!("Serialization error: {}", e);
// S3Error::with_message(S3ErrorCode::InternalError, "Failed to serialize targets".to_string())
// })?;
// return Ok(S3Response::new((StatusCode::OK, Body::from(json_targets))));
// }
// Err(e) => {
// error!("list bucket targets failed: {:?}", e);
// return Err(S3Error::with_message(
// S3ErrorCode::InternalError,
// "list bucket targets failed".to_string(),
// ));
// }
// }
return Ok(S3Response::new((StatusCode::NO_CONTENT, Body::from("".to_string()))));
}
}

View File

@@ -1,4 +1,3 @@
#![allow(unused_variables, unused_mut, unused_must_use)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,6 +11,7 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_variables, unused_mut, unused_must_use)]
use http::{HeaderMap, StatusCode};
//use iam::get_global_action_cred;
@@ -461,3 +461,182 @@ impl Operation for ClearTier {
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
/*pub struct PostRestoreObject {}
#[async_trait::async_trait]
impl Operation for PostRestoreObject {
async fn call(&self, req: S3Request<Body>, params: Params<'_, '_>) -> S3Result<S3Response<(StatusCode, Body)>> {
let query = {
if let Some(query) = req.uri.query() {
let input: PostRestoreObject =
from_bytes(query.as_bytes()).map_err(|_e| s3_error!(InvalidArgument, "get query failed"))?;
input
} else {
PostRestoreObject::default()
}
};
let bucket = params.bucket;
if let Err(e) = un_escape_path(params.object) {
warn!("post restore object failed, e: {:?}", e);
return Err(S3Error::with_message(S3ErrorCode::Custom("PostRestoreObjectFailed".into()), "post restore object failed"));
}
let Some(store) = new_object_layer_fn() else {
return Err(S3Error::with_message(S3ErrorCode::InternalError, "Not init".to_string()));
};
let get_object_info = store.get_object_info();
if Err(err) = check_request_auth_type(req, policy::RestoreObjectAction, bucket, object) {
return Err(S3Error::with_message(S3ErrorCode::Custom("PostRestoreObjectFailed".into()), "post restore object failed"));
}
if req.content_length <= 0 {
return Err(S3Error::with_message(S3ErrorCode::Custom("ErrEmptyRequestBody".into()), "post restore object failed"));
}
let Some(opts) = post_restore_opts(req, bucket, object) else {
return Err(S3Error::with_message(S3ErrorCode::Custom("ErrEmptyRequestBody".into()), "post restore object failed"));
};
let Some(obj_info) = getObjectInfo(ctx, bucket, object, opts) else {
return Err(S3Error::with_message(S3ErrorCode::Custom("ErrEmptyRequestBody".into()), "post restore object failed"));
};
if obj_info.transitioned_object.status != lifecycle::TRANSITION_COMPLETE {
return Err(S3Error::with_message(S3ErrorCode::Custom("ErrEmptyRequestBody".into()), "post restore object failed"));
}
let mut api_err;
let Some(rreq) = parsere_store_request(req.body(), req.content_length) else {
let api_err = errorCodes.ToAPIErr(ErrMalformedXML);
api_err.description = err.Error()
return Err(S3Error::with_message(S3ErrorCode::Custom("ErrEmptyRequestBody".into()), "post restore object failed"));
};
let mut status_code = http::StatusCode::OK;
let mut already_restored = false;
if Err(err) = rreq.validate(store) {
api_err = errorCodes.ToAPIErr(ErrMalformedXML)
api_err.description = err.Error()
return Err(S3Error::with_message(S3ErrorCode::Custom("ErrEmptyRequestBody".into()), "post restore object failed"));
} else {
if obj_info.restore_ongoing && rreq.Type != "SELECT" {
return Err(S3Error::with_message(S3ErrorCode::Custom("ErrObjectRestoreAlreadyInProgress".into()), "post restore object failed"));
}
if !obj_info.restore_ongoing && !obj_info.restore_expires.unix_timestamp() == 0 {
status_code = http::StatusCode::Accepted;
already_restored = true;
}
}
let restore_expiry = lifecycle::expected_expiry_time(OffsetDateTime::now_utc(), rreq.days);
let mut metadata = clone_mss(obj_info.user_defined);
if rreq.type != "SELECT" {
obj_info.metadataOnly = true;
metadata[xhttp.AmzRestoreExpiryDays] = rreq.days;
metadata[xhttp.AmzRestoreRequestDate] = OffsetDateTime::now_utc().format(http::TimeFormat);
if already_restored {
metadata[xhttp.AmzRestore] = completedRestoreObj(restore_expiry).String()
} else {
metadata[xhttp.AmzRestore] = ongoingRestoreObj().String()
}
obj_info.user_defined = metadata;
if let Err(err) = store.copy_object(bucket, object, bucket, object, obj_info, ObjectOptions {
version_id: obj_info.version_id,
}, ObjectOptions {
version_id: obj_info.version_id,
m_time: obj_info.mod_time,
}) {
return Err(S3Error::with_message(S3ErrorCode::Custom("ErrInvalidObjectState".into()), "post restore object failed"));
}
if already_restored {
return Ok(());
}
}
let restore_object = must_get_uuid();
if rreq.output_location.s3.bucket_name != "" {
w.Header()[xhttp.AmzRestoreOutputPath] = []string{pathJoin(rreq.OutputLocation.S3.BucketName, rreq.OutputLocation.S3.Prefix, restoreObject)}
}
w.WriteHeader(status_code)
send_event(EventArgs {
event_name: event::ObjectRestorePost,
bucket_name: bucket,
object: obj_info,
req_params: extract_req_params(r),
user_agent: req.user_agent(),
host: handlers::get_source_ip(r),
});
tokio::spawn(async move {
if !rreq.SelectParameters.IsEmpty() {
let actual_size = obj_info.get_actual_size();
if actual_size.is_err() {
return Err(S3Error::with_message(S3ErrorCode::Custom("ErrInvalidObjectState".into()), "post restore object failed"));
}
let object_rsc = s3select.NewObjectReadSeekCloser(
|offset int64| -> (io.ReadCloser, error) {
rs := &HTTPRangeSpec{
IsSuffixLength: false,
Start: offset,
End: -1,
}
return getTransitionedObjectReader(bucket, object, rs, r.Header,
obj_info, ObjectOptions {version_id: obj_info.version_id});
},
actual_size.unwrap(),
);
if err = rreq.SelectParameters.Open(objectRSC); err != nil {
if serr, ok := err.(s3select.SelectError); ok {
let encoded_error_response = encodeResponse(APIErrorResponse {
code: serr.ErrorCode(),
message: serr.ErrorMessage(),
bucket_name: bucket,
key: object,
resource: r.URL.Path,
request_id: w.Header().Get(xhttp.AmzRequestID),
host_id: globalDeploymentID(),
});
//writeResponse(w, serr.HTTPStatusCode(), encodedErrorResponse, mimeXML)
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header));
} else {
return Err(S3Error::with_message(S3ErrorCode::Custom("ErrInvalidObjectState".into()), "post restore object failed"));
}
return Ok(());
}
let nr = httptest.NewRecorder();
let rw = xhttp.NewResponseRecorder(nr);
rw.log_err_body = true;
rw.log_all_body = true;
rreq.select_parameters.evaluate(rw);
rreq.select_parameters.Close();
return Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header));
}
let opts = ObjectOptions {
transition: TransitionOptions {
restore_request: rreq,
restore_expiry: restore_expiry,
},
version_id: objInfo.version_id,
}
if Err(err) = store.restore_transitioned_object(bucket, object, opts) {
format!(format!("unable to restore transitioned bucket/object {}/{}: {}", bucket, object, err.to_string()));
return Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header));
}
send_event(EventArgs {
EventName: event.ObjectRestoreCompleted,
BucketName: bucket,
Object: objInfo,
ReqParams: extractReqParams(r),
UserAgent: r.UserAgent(),
Host: handlers.GetSourceIP(r),
});
});
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}*/

View File

@@ -44,7 +44,6 @@ use hyper_util::{
use license::init_license;
use rustfs_common::globals::set_global_addr;
use rustfs_config::{DEFAULT_ACCESS_KEY, DEFAULT_SECRET_KEY, RUSTFS_TLS_CERT, RUSTFS_TLS_KEY};
use rustfs_ecstore::StorageAPI;
use rustfs_ecstore::bucket::metadata_sys::init_bucket_metadata_sys;
use rustfs_ecstore::cmd::bucket_replication::init_bucket_replication_pool;
use rustfs_ecstore::config as ecconfig;
@@ -53,18 +52,16 @@ use rustfs_ecstore::heal::background_heal_ops::init_auto_heal;
use rustfs_ecstore::rpc::make_server;
use rustfs_ecstore::store_api::BucketOptions;
use rustfs_ecstore::{
endpoints::EndpointServerPools,
heal::data_scanner::init_data_scanner,
set_global_endpoints,
store::{ECStore, init_local_disks},
StorageAPI, endpoints::EndpointServerPools, global::set_global_rustfs_port, heal::data_scanner::init_data_scanner,
notification_sys::new_global_notification_sys, set_global_endpoints, store::ECStore, store::init_local_disks,
update_erasure_type,
};
use rustfs_ecstore::{global::set_global_rustfs_port, notification_sys::new_global_notification_sys};
use rustfs_iam::init_iam_sys;
use rustfs_obs::{SystemObserver, init_obs, set_global_guard};
use rustfs_protos::proto_gen::node_service::node_service_server::NodeServiceServer;
use rustfs_utils::net::parse_and_resolve_address;
use rustls::ServerConfig;
use s3s::service::S3Service;
use s3s::{host::MultiDomain, service::S3ServiceBuilder};
use service::hybrid;
use socket2::SockRef;
@@ -72,11 +69,13 @@ use std::io::{Error, Result};
use std::net::SocketAddr;
use std::sync::Arc;
use std::time::Duration;
use tokio::net::TcpListener;
use tokio::net::{TcpListener, TcpStream};
#[cfg(unix)]
use tokio::signal::unix::{SignalKind, signal};
use tokio_rustls::TlsAcceptor;
use tonic::{Request, Status, metadata::MetadataValue};
use tower::ServiceBuilder;
use tower_http::catch_panic::CatchPanicLayer;
use tower_http::cors::CorsLayer;
use tower_http::trace::TraceLayer;
use tracing::{Span, debug, error, info, instrument, warn};
@@ -128,6 +127,49 @@ async fn main() -> Result<()> {
run(opt).await
}
/// Sets up the TLS acceptor if certificates are available.
#[instrument(skip(tls_path))]
async fn setup_tls_acceptor(tls_path: &str) -> Result<Option<TlsAcceptor>> {
if tls_path.is_empty() || tokio::fs::metadata(tls_path).await.is_err() {
debug!("TLS path is not provided or does not exist, starting with HTTP");
return Ok(None);
}
debug!("Found TLS directory, checking for certificates");
// 1. Try to load all certificates from the directory (multi-cert support)
if let Ok(cert_key_pairs) = rustfs_utils::load_all_certs_from_directory(tls_path) {
if !cert_key_pairs.is_empty() {
debug!("Found {} certificates, creating multi-cert resolver", cert_key_pairs.len());
let _ = rustls::crypto::aws_lc_rs::default_provider().install_default();
let mut server_config = ServerConfig::builder()
.with_no_client_auth()
.with_cert_resolver(Arc::new(rustfs_utils::create_multi_cert_resolver(cert_key_pairs)?));
server_config.alpn_protocols = vec![b"h2".to_vec(), b"http/1.1".to_vec(), b"http/1.0".to_vec()];
return Ok(Some(TlsAcceptor::from(Arc::new(server_config))));
}
}
// 2. Fallback to legacy single certificate mode
let key_path = format!("{tls_path}/{RUSTFS_TLS_KEY}");
let cert_path = format!("{tls_path}/{RUSTFS_TLS_CERT}");
if tokio::try_join!(tokio::fs::metadata(&key_path), tokio::fs::metadata(&cert_path)).is_ok() {
debug!("Found legacy single TLS certificate, starting with HTTPS");
let _ = rustls::crypto::aws_lc_rs::default_provider().install_default();
let certs = rustfs_utils::load_certs(&cert_path).map_err(|e| rustfs_utils::certs_error(e.to_string()))?;
let key = rustfs_utils::load_private_key(&key_path).map_err(|e| rustfs_utils::certs_error(e.to_string()))?;
let mut server_config = ServerConfig::builder()
.with_no_client_auth()
.with_single_cert(certs, key)
.map_err(|e| rustfs_utils::certs_error(e.to_string()))?;
server_config.alpn_protocols = vec![b"h2".to_vec(), b"http/1.1".to_vec(), b"http/1.0".to_vec()];
return Ok(Some(TlsAcceptor::from(Arc::new(server_config))));
}
debug!("No valid TLS certificates found in the directory, starting with HTTP");
Ok(None)
}
#[instrument(skip(opt))]
async fn run(opt: config::Opt) -> Result<()> {
debug!("opt: {:?}", &opt);
@@ -147,7 +189,6 @@ async fn run(opt: config::Opt) -> Result<()> {
let listener = TcpListener::bind(server_address.clone()).await?;
// Obtain the listener address
let local_addr: SocketAddr = listener.local_addr()?;
// let local_ip = utils::get_local_ip().ok_or(local_addr.ip()).unwrap();
let local_ip = rustfs_utils::get_local_ip().ok_or(local_addr.ip()).unwrap();
// For RPC
@@ -203,18 +244,14 @@ async fn run(opt: config::Opt) -> Result<()> {
// This project uses the S3S library to implement S3 services
let s3_service = {
let store = storage::ecfs::FS::new();
// let mut b = S3ServiceBuilder::new(storage::ecfs::FS::new(server_address.clone(), endpoint_pools).await?);
let mut b = S3ServiceBuilder::new(store.clone());
let access_key = opt.access_key.clone();
let secret_key = opt.secret_key.clone();
// Displays info information
debug!("authentication is enabled {}, {}", &access_key, &secret_key);
b.set_auth(IAMAuth::new(access_key, secret_key));
b.set_access(store.clone());
b.set_route(admin::make_admin_route()?);
if !opt.server_domains.is_empty() {
@@ -222,20 +259,6 @@ async fn run(opt: config::Opt) -> Result<()> {
b.set_host(MultiDomain::new(&opt.server_domains).map_err(Error::other)?);
}
// // Enable parsing virtual-hosted-style requests
// if let Some(dm) = opt.domain_name {
// info!("virtual-hosted-style requests are enabled use domain_name {}", &dm);
// b.set_base_domain(dm);
// }
// if domain_name.is_some() {
// info!(
// "virtual-hosted-style requests are enabled use domain_name {}",
// domain_name.as_ref().unwrap()
// );
// b.set_base_domain(domain_name.unwrap());
// }
b.build()
};
@@ -253,57 +276,8 @@ async fn run(opt: config::Opt) -> Result<()> {
}
});
let tls_path = opt.tls_path.clone().unwrap_or_default();
let has_tls_certs = tokio::fs::metadata(&tls_path).await.is_ok();
let tls_acceptor = if has_tls_certs {
debug!("Found TLS directory, checking for certificates");
let tls_acceptor = setup_tls_acceptor(opt.tls_path.as_deref().unwrap_or_default()).await?;
// 1. Try to load all certificates directly (including root and subdirectories)
match rustfs_utils::load_all_certs_from_directory(&tls_path) {
Ok(cert_key_pairs) if !cert_key_pairs.is_empty() => {
debug!("Found {} certificates, starting with HTTPS", cert_key_pairs.len());
let _ = rustls::crypto::aws_lc_rs::default_provider().install_default();
// create a multi certificate configuration
let mut server_config = ServerConfig::builder()
.with_no_client_auth()
.with_cert_resolver(Arc::new(rustfs_utils::create_multi_cert_resolver(cert_key_pairs)?));
server_config.alpn_protocols = vec![b"h2".to_vec(), b"http/1.1".to_vec(), b"http/1.0".to_vec()];
Some(TlsAcceptor::from(Arc::new(server_config)))
}
_ => {
// 2. If the synthesis fails, fall back to the traditional document certificate mode (backward compatible)
let key_path = format!("{tls_path}/{RUSTFS_TLS_KEY}");
let cert_path = format!("{tls_path}/{RUSTFS_TLS_CERT}");
let has_single_cert =
tokio::try_join!(tokio::fs::metadata(key_path.clone()), tokio::fs::metadata(cert_path.clone())).is_ok();
if has_single_cert {
debug!("Found legacy single TLS certificate, starting with HTTPS");
let _ = rustls::crypto::aws_lc_rs::default_provider().install_default();
let certs =
rustfs_utils::load_certs(cert_path.as_str()).map_err(|e| rustfs_utils::certs_error(e.to_string()))?;
let key = rustfs_utils::load_private_key(key_path.as_str())
.map_err(|e| rustfs_utils::certs_error(e.to_string()))?;
let mut server_config = ServerConfig::builder()
.with_no_client_auth()
.with_single_cert(certs, key)
.map_err(|e| rustfs_utils::certs_error(e.to_string()))?;
server_config.alpn_protocols = vec![b"h2".to_vec(), b"http/1.1".to_vec(), b"http/1.0".to_vec()];
Some(TlsAcceptor::from(Arc::new(server_config)))
} else {
debug!("No valid TLS certificates found, starting with HTTP");
None
}
}
}
} else {
debug!("TLS certificates not found, starting with HTTP");
None
};
let rpc_service = NodeServiceServer::with_interceptor(make_server(), check_auth);
let state_manager = ServiceStateManager::new();
let worker_state_manager = state_manager.clone();
// Update service status to Starting
@@ -317,72 +291,11 @@ async fn run(opt: config::Opt) -> Result<()> {
#[cfg(unix)]
let (mut sigterm_inner, mut sigint_inner) = {
// Unix platform specific code
let sigterm_inner = match signal(SignalKind::terminate()) {
Ok(signal) => signal,
Err(e) => {
error!("Failed to create SIGTERM signal handler: {}", e);
return;
}
};
let sigint_inner = match signal(SignalKind::interrupt()) {
Ok(signal) => signal,
Err(e) => {
error!("Failed to create SIGINT signal handler: {}", e);
return;
}
};
let sigterm_inner = signal(SignalKind::terminate()).expect("Failed to create SIGTERM signal handler");
let sigint_inner = signal(SignalKind::interrupt()).expect("Failed to create SIGINT signal handler");
(sigterm_inner, sigint_inner)
};
let hybrid_service = TowerToHyperService::new(
tower::ServiceBuilder::new()
.layer(
TraceLayer::new_for_http()
.make_span_with(|request: &HttpRequest<_>| {
let span = tracing::info_span!("http-request",
status_code = tracing::field::Empty,
method = %request.method(),
uri = %request.uri(),
version = ?request.version(),
);
for (header_name, header_value) in request.headers() {
if header_name == "user-agent" || header_name == "content-type" || header_name == "content-length"
{
span.record(header_name.as_str(), header_value.to_str().unwrap_or("invalid"));
}
}
span
})
.on_request(|request: &HttpRequest<_>, _span: &Span| {
info!(
counter.rustfs_api_requests_total = 1_u64,
key_request_method = %request.method().to_string(),
key_request_uri_path = %request.uri().path().to_owned(),
"handle request api total",
);
debug!("http started method: {}, url path: {}", request.method(), request.uri().path())
})
.on_response(|response: &Response<_>, latency: Duration, _span: &Span| {
_span.record("http response status_code", tracing::field::display(response.status()));
debug!("http response generated in {:?}", latency)
})
.on_body_chunk(|chunk: &Bytes, latency: Duration, _span: &Span| {
info!(histogram.request.body.len = chunk.len(), "histogram request body length",);
debug!("http body sending {} bytes in {:?}", chunk.len(), latency)
})
.on_eos(|_trailers: Option<&HeaderMap>, stream_duration: Duration, _span: &Span| {
debug!("http stream closed after {:?}", stream_duration)
})
.on_failure(|_error, latency: Duration, _span: &Span| {
info!(counter.rustfs_api_requests_failure_total = 1_u64, "handle request api failure total");
debug!("http request failure error: {:?} in {:?}", _error, latency)
}),
)
.layer(CorsLayer::permissive())
.service(hybrid(s3_service, rpc_service)),
);
let http_server = Arc::new(ConnBuilder::new(TokioExecutor::new()));
let mut ctrl_c = std::pin::pin!(tokio::signal::ctrl_c());
let graceful = Arc::new(GracefulShutdown::new());
@@ -390,42 +303,36 @@ async fn run(opt: config::Opt) -> Result<()> {
// service ready
worker_state_manager.update(ServiceState::Ready);
let value = hybrid_service.clone();
let tls_acceptor = tls_acceptor.map(Arc::new);
loop {
debug!("waiting for SIGINT or SIGTERM has_tls_certs: {}", has_tls_certs);
// Wait for a connection
debug!("Waiting for new connection...");
let (socket, _) = {
#[cfg(unix)]
{
tokio::select! {
res = listener.accept() => {
match res {
Ok(conn) => conn,
Err(err) => {
error!("error accepting connection: {err}");
continue;
}
res = listener.accept() => match res {
Ok(conn) => conn,
Err(err) => {
error!("error accepting connection: {err}");
continue;
}
}
},
_ = ctrl_c.as_mut() => {
info!("Ctrl-C received in worker thread");
let _ = shutdown_tx_clone.send(());
break;
}
},
Some(_) = sigint_inner.recv() => {
info!("SIGINT received in worker thread");
let _ = shutdown_tx_clone.send(());
break;
}
},
Some(_) = sigterm_inner.recv() => {
info!("SIGTERM received in worker thread");
let _ = shutdown_tx_clone.send(());
break;
}
},
_ = shutdown_rx.recv() => {
info!("Shutdown signal received in worker thread");
break;
@@ -435,22 +342,18 @@ async fn run(opt: config::Opt) -> Result<()> {
#[cfg(not(unix))]
{
tokio::select! {
res = listener.accept() => {
match res {
Ok(conn) => conn,
Err(err) => {
error!("error accepting connection: {err}");
continue;
}
res = listener.accept() => match res {
Ok(conn) => conn,
Err(err) => {
error!("error accepting connection: {err}");
continue;
}
}
},
_ = ctrl_c.as_mut() => {
info!("Ctrl-C received in worker thread");
let _ = shutdown_tx_clone.send(());
break;
}
},
_ = shutdown_rx.recv() => {
info!("Shutdown signal received in worker thread");
break;
@@ -470,70 +373,12 @@ async fn run(opt: config::Opt) -> Result<()> {
warn!(?err, "Failed to set set_send_buffer_size");
}
if has_tls_certs {
debug!("TLS certificates found, starting with SIGINT");
let peer_addr_str = socket.peer_addr().map(|a| a.to_string()).unwrap_or_else(|e| {
warn!("Could not get peer address: {}", e);
"unknown".to_string()
});
let tls_socket = match tls_acceptor.as_ref() {
Some(acceptor) => match acceptor.accept(socket).await {
Ok(tls_socket) => {
info!("TLS handshake successful with peer: {}", peer_addr_str);
tls_socket
}
Err(err) => {
error!("TLS handshake with peer {} failed: {}", peer_addr_str, err);
continue;
}
},
None => {
error!(
"TLS acceptor is not available, but TLS is enabled. This is a bug. Dropping connection from {}",
peer_addr_str
);
continue;
}
};
let http_server_clone = http_server.clone();
let value_clone = value.clone();
let graceful_clone = graceful.clone();
tokio::task::spawn_blocking(move || {
tokio::runtime::Runtime::new()
.expect("Failed to create runtime")
.block_on(async move {
let conn = http_server_clone.serve_connection(TokioIo::new(tls_socket), value_clone);
let conn = graceful_clone.watch(conn);
if let Err(err) = conn.await {
// Handle hyper::Error and low-level IO errors at a more granular level
handle_connection_error(&*err);
}
});
});
debug!("TLS handshake success");
} else {
debug!("Http handshake start");
let http_server_clone = http_server.clone();
let value_clone = value.clone();
let graceful_clone = graceful.clone();
tokio::spawn(async move {
let conn = http_server_clone.serve_connection(TokioIo::new(socket), value_clone);
let conn = graceful_clone.watch(conn);
if let Err(err) = conn.await {
// Handle hyper::Error and low-level IO errors at a more granular level
handle_connection_error(&*err);
}
});
debug!("Http handshake success");
}
process_connection(socket, tls_acceptor.clone(), http_server.clone(), s3_service.clone(), graceful.clone());
}
worker_state_manager.update(ServiceState::Stopping);
match Arc::try_unwrap(graceful) {
Ok(g) => {
// Successfully obtaining unique ownership, you can call shutdown
tokio::select! {
() = g.shutdown() => {
debug!("Gracefully shutdown!");
@@ -544,9 +389,7 @@ async fn run(opt: config::Opt) -> Result<()> {
}
}
Err(arc_graceful) => {
// There are other references that cannot be obtained for unique ownership
error!("Cannot perform graceful shutdown, other references exist err: {:?}", arc_graceful);
// In this case, we can only wait for the timeout
tokio::time::sleep(Duration::from_secs(10)).await;
debug!("Timeout reached, forcing shutdown");
}
@@ -586,17 +429,15 @@ async fn run(opt: config::Opt) -> Result<()> {
})?;
// init scanner
init_data_scanner().await;
let scanner_cancel_token = init_data_scanner().await;
// init auto heal
init_auto_heal().await;
// init console configuration
init_console_cfg(local_ip, server_port);
print_server_info();
init_bucket_replication_pool().await;
print_server_info();
// Async update check (optional)
tokio::spawn(async {
use crate::update_checker::{UpdateCheckError, check_updates};
@@ -652,11 +493,11 @@ async fn run(opt: config::Opt) -> Result<()> {
match wait_for_shutdown().await {
#[cfg(unix)]
ShutdownSignal::CtrlC | ShutdownSignal::Sigint | ShutdownSignal::Sigterm => {
handle_shutdown(&state_manager, &shutdown_tx).await;
handle_shutdown(&state_manager, &shutdown_tx, &scanner_cancel_token).await;
}
#[cfg(not(unix))]
ShutdownSignal::CtrlC => {
handle_shutdown(&state_manager, &shutdown_tx).await;
handle_shutdown(&state_manager, &shutdown_tx, &scanner_cancel_token).await;
}
}
@@ -664,12 +505,117 @@ async fn run(opt: config::Opt) -> Result<()> {
Ok(())
}
/// Process a single incoming TCP connection.
///
/// This function is executed in a new Tokio task and it will:
/// 1. If TLS is configured, perform TLS handshake.
/// 2. Build a complete service stack for this connection, including S3, RPC services, and all middleware.
/// 3. Use Hyper to handle HTTP requests on this connection.
/// 4. Incorporate connections into the management of elegant closures.
#[instrument(skip_all, fields(peer_addr = %socket.peer_addr().map(|a| a.to_string()).unwrap_or_else(|_| "unknown".to_string())))]
fn process_connection(
socket: TcpStream,
tls_acceptor: Option<Arc<TlsAcceptor>>,
http_server: Arc<ConnBuilder<TokioExecutor>>,
s3_service: S3Service,
graceful: Arc<GracefulShutdown>,
) {
tokio::spawn(async move {
// Build services inside each connected task to avoid passing complex service types across tasks,
// It also ensures that each connection has an independent service instance.
let rpc_service = NodeServiceServer::with_interceptor(make_server(), check_auth);
let hybrid_service = ServiceBuilder::new()
.layer(CatchPanicLayer::new())
.layer(
TraceLayer::new_for_http()
.make_span_with(|request: &HttpRequest<_>| {
let span = tracing::info_span!("http-request",
status_code = tracing::field::Empty,
method = %request.method(),
uri = %request.uri(),
version = ?request.version(),
);
for (header_name, header_value) in request.headers() {
if header_name == "user-agent" || header_name == "content-type" || header_name == "content-length" {
span.record(header_name.as_str(), header_value.to_str().unwrap_or("invalid"));
}
}
span
})
.on_request(|request: &HttpRequest<_>, _span: &Span| {
info!(
counter.rustfs_api_requests_total = 1_u64,
key_request_method = %request.method().to_string(),
key_request_uri_path = %request.uri().path().to_owned(),
"handle request api total",
);
debug!("http started method: {}, url path: {}", request.method(), request.uri().path())
})
.on_response(|response: &Response<_>, latency: Duration, _span: &Span| {
_span.record("http response status_code", tracing::field::display(response.status()));
debug!("http response generated in {:?}", latency)
})
.on_body_chunk(|chunk: &Bytes, latency: Duration, _span: &Span| {
info!(histogram.request.body.len = chunk.len(), "histogram request body length",);
debug!("http body sending {} bytes in {:?}", chunk.len(), latency)
})
.on_eos(|_trailers: Option<&HeaderMap>, stream_duration: Duration, _span: &Span| {
debug!("http stream closed after {:?}", stream_duration)
})
.on_failure(|_error, latency: Duration, _span: &Span| {
info!(counter.rustfs_api_requests_failure_total = 1_u64, "handle request api failure total");
debug!("http request failure error: {:?} in {:?}", _error, latency)
}),
)
.layer(CorsLayer::permissive())
.service(hybrid(s3_service, rpc_service));
let hybrid_service = TowerToHyperService::new(hybrid_service);
// Decide whether to handle HTTPS or HTTP connections based on the existence of TLS Acceptor
if let Some(acceptor) = tls_acceptor {
debug!("TLS handshake start");
match acceptor.accept(socket).await {
Ok(tls_socket) => {
debug!("TLS handshake successful");
let stream = TokioIo::new(tls_socket);
let conn = http_server.serve_connection(stream, hybrid_service);
if let Err(err) = graceful.watch(conn).await {
handle_connection_error(&*err);
}
}
Err(err) => {
error!(?err, "TLS handshake failed");
return; // Failed to end the task directly
}
}
debug!("TLS handshake success");
} else {
debug!("Http handshake start");
let stream = TokioIo::new(socket);
let conn = http_server.serve_connection(stream, hybrid_service);
if let Err(err) = graceful.watch(conn).await {
handle_connection_error(&*err);
}
debug!("Http handshake success");
};
});
}
/// Handles the shutdown process of the server
async fn handle_shutdown(state_manager: &ServiceStateManager, shutdown_tx: &tokio::sync::broadcast::Sender<()>) {
async fn handle_shutdown(
state_manager: &ServiceStateManager,
shutdown_tx: &tokio::sync::broadcast::Sender<()>,
scanner_cancel_token: &tokio_util::sync::CancellationToken,
) {
info!("Shutdown signal received in main thread");
// update the status to stopping first
state_manager.update(ServiceState::Stopping);
// Stop data scanner gracefully
info!("Stopping data scanner...");
scanner_cancel_token.cancel();
// Stop the notification system
shutdown_event_notifier().await;

View File

@@ -1976,19 +1976,19 @@ impl S3 for FS {
..
} = req.input;
let mut lr_retention = false;
let rcfg = metadata_sys::get_object_lock_config(&bucket).await;
let lr_retention = false;
/*let rcfg = metadata_sys::get_object_lock_config(&bucket).await;
if let Ok(rcfg) = rcfg {
if let Some(rule) = rcfg.0.rule {
if let Some(retention) = rule.default_retention {
if let Some(mode) = retention.mode {
if mode == ObjectLockRetentionMode::from_static(ObjectLockRetentionMode::GOVERNANCE) {
//if mode == ObjectLockRetentionMode::from_static(ObjectLockRetentionMode::GOVERNANCE) {
lr_retention = true;
}
//}
}
}
}
}
}*/
//info!("lifecycle_configuration: {:?}", &lifecycle_configuration);

View File

@@ -52,7 +52,7 @@ pub async fn del_opts(
opts.version_id = {
if is_dir_object(object) && vid.is_none() {
Some(Uuid::nil().to_string())
Some(Uuid::max().to_string())
} else {
vid
}
@@ -91,7 +91,7 @@ pub async fn get_opts(
opts.version_id = {
if is_dir_object(object) && vid.is_none() {
Some(Uuid::nil().to_string())
Some(Uuid::max().to_string())
} else {
vid
}
@@ -133,7 +133,7 @@ pub async fn put_opts(
opts.version_id = {
if is_dir_object(object) && vid.is_none() {
Some(Uuid::nil().to_string())
Some(Uuid::max().to_string())
} else {
vid
}
@@ -273,7 +273,7 @@ mod tests {
assert!(result.is_ok());
let opts = result.unwrap();
assert_eq!(opts.version_id, Some(Uuid::nil().to_string()));
assert_eq!(opts.version_id, Some(Uuid::max().to_string()));
}
#[tokio::test]
@@ -346,7 +346,7 @@ mod tests {
assert!(result.is_ok());
let opts = result.unwrap();
assert_eq!(opts.version_id, Some(Uuid::nil().to_string()));
assert_eq!(opts.version_id, Some(Uuid::max().to_string()));
}
#[tokio::test]
@@ -390,7 +390,7 @@ mod tests {
assert!(result.is_ok());
let opts = result.unwrap();
assert_eq!(opts.version_id, Some(Uuid::nil().to_string()));
assert_eq!(opts.version_id, Some(Uuid::max().to_string()));
}
#[tokio::test]

View File

@@ -0,0 +1,71 @@
# Delete __XLDIR__ Directory Scripts
This directory contains scripts for deleting all directories ending with `__XLDIR__` in the specified path.
## Script Description
### 1. delete_xldir.sh (Full Version)
A feature-rich version with multiple options and safety checks.
**Usage:**
```bash
./scripts/delete_xldir.sh <path> [options]
```
**Options:**
- `-f, --force` Force deletion without confirmation
- `-v, --verbose` Show verbose information
- `-d, --dry-run` Show directories to be deleted without actually deleting
- `-h, --help` Show help information
**Examples:**
```bash
# Preview directories to be deleted (without actually deleting)
./scripts/delete_xldir.sh /path/to/search --dry-run
# Interactive deletion (will ask for confirmation)
./scripts/delete_xldir.sh /path/to/search
# Force deletion (without confirmation)
./scripts/delete_xldir.sh /path/to/search --force
# Verbose mode deletion
./scripts/delete_xldir.sh /path/to/search --verbose
```
### 2. delete_xldir_simple.sh (Simple Version)
A streamlined version that directly deletes found directories.
**Usage:**
```bash
./scripts/delete_xldir_simple.sh <path>
```
**Example:**
```bash
# Delete all directories ending with __XLDIR__ in the specified path
./scripts/delete_xldir_simple.sh /path/to/search
```
## How It Works
Both scripts use the `find` command to locate directories:
```bash
find "$SEARCH_PATH" -type d -name "*__XLDIR__"
```
- `-type d`: Only search for directories
- `-name "*__XLDIR__"`: Find directories ending with `__XLDIR__`
## Safety Notes
⚠️ **Important Reminders:**
- Deletion operations are irreversible, please confirm the path is correct before use
- It's recommended to use the `--dry-run` option first to preview directories to be deleted
- For important data, please backup first
## Use Cases
These scripts are typically used for cleaning up temporary directories or metadata directories in storage systems, especially in distributed storage systems where `__XLDIR__` is commonly used as a specific directory identifier.

147
scripts/test/delete_xldir.sh Executable file
View File

@@ -0,0 +1,147 @@
#!/bin/bash
# Delete all directories ending with __XLDIR__ in the specified path
# Check parameters
if [ $# -eq 0 ]; then
echo "Usage: $0 <path> [options]"
echo "Options:"
echo " -f, --force Force deletion without confirmation"
echo " -v, --verbose Show verbose information"
echo " -d, --dry-run Show directories to be deleted without actually deleting"
echo ""
echo "Examples:"
echo " $0 /path/to/search"
echo " $0 /path/to/search --dry-run"
echo " $0 /path/to/search --force"
exit 1
fi
# Parse parameters
SEARCH_PATH=""
FORCE=false
VERBOSE=false
DRY_RUN=false
while [[ $# -gt 0 ]]; do
case $1 in
-f|--force)
FORCE=true
shift
;;
-v|--verbose)
VERBOSE=true
shift
;;
-d|--dry-run)
DRY_RUN=true
shift
;;
-h|--help)
echo "Usage: $0 <path> [options]"
echo "Delete all directories ending with __XLDIR__ in the specified path"
exit 0
;;
-*)
echo "Unknown option: $1"
exit 1
;;
*)
if [ -z "$SEARCH_PATH" ]; then
SEARCH_PATH="$1"
else
echo "Error: Only one path can be specified"
exit 1
fi
shift
;;
esac
done
# Check if path is provided
if [ -z "$SEARCH_PATH" ]; then
echo "Error: Search path must be specified"
exit 1
fi
# Check if path exists
if [ ! -d "$SEARCH_PATH" ]; then
echo "Error: Path '$SEARCH_PATH' does not exist or is not a directory"
exit 1
fi
# Find all directories ending with __XLDIR__
echo "Searching in path: $SEARCH_PATH"
echo "Looking for directories ending with __XLDIR__..."
# Use find command to locate directories
DIRS_TO_DELETE=$(find "$SEARCH_PATH" -type d -name "*__XLDIR__" 2>/dev/null)
if [ -z "$DIRS_TO_DELETE" ]; then
echo "No directories ending with __XLDIR__ found"
exit 0
fi
# Display found directories
echo "Found the following directories:"
echo "$DIRS_TO_DELETE"
echo ""
# Count directories
DIR_COUNT=$(echo "$DIRS_TO_DELETE" | wc -l)
echo "Total found: $DIR_COUNT directories"
# If dry-run mode, only show without deleting
if [ "$DRY_RUN" = true ]; then
echo ""
echo "This is dry-run mode, no directories will be actually deleted"
echo "To actually delete these directories, remove the --dry-run option"
exit 0
fi
# If not force mode, ask for confirmation
if [ "$FORCE" = false ]; then
echo ""
read -p "Are you sure you want to delete these directories? (y/N): " -n 1 -r
echo ""
if [[ ! $REPLY =~ ^[Yy]$ ]]; then
echo "Operation cancelled"
exit 0
fi
fi
# Delete directories
echo ""
echo "Starting to delete directories..."
deleted_count=0
failed_count=0
while IFS= read -r dir; do
if [ -d "$dir" ]; then
if [ "$VERBOSE" = true ]; then
echo "Deleting: $dir"
fi
if rm -rf "$dir" 2>/dev/null; then
((deleted_count++))
if [ "$VERBOSE" = true ]; then
echo " ✓ Deleted successfully"
fi
else
((failed_count++))
echo " ✗ Failed to delete: $dir"
fi
fi
done <<< "$DIRS_TO_DELETE"
echo ""
echo "Deletion completed!"
echo "Successfully deleted: $deleted_count directories"
if [ $failed_count -gt 0 ]; then
echo "Failed to delete: $failed_count directories"
exit 1
else
echo "All directories have been successfully deleted"
exit 0
fi

View File

@@ -0,0 +1,24 @@
#!/bin/bash
# Simple version: Delete all directories ending with __XLDIR__ in the specified path
if [ $# -eq 0 ]; then
echo "Usage: $0 <path>"
echo "Example: $0 /path/to/search"
exit 1
fi
SEARCH_PATH="$1"
# Check if path exists
if [ ! -d "$SEARCH_PATH" ]; then
echo "Error: Path '$SEARCH_PATH' does not exist or is not a directory"
exit 1
fi
echo "Searching in path: $SEARCH_PATH"
# Find and delete all directories ending with __XLDIR__
find "$SEARCH_PATH" -type d -name "*__XLDIR__" -exec rm -rf {} \; 2>/dev/null
echo "Deletion completed!"