Compare commits


85 Commits

Author SHA1 Message Date
安正超
7d5fc87002 fix: extract release notes template to external file to resolve YAML syntax error (#143) 2025-07-09 23:07:10 +08:00
安正超
13130e9dd4 fix: add missing OSSUTIL_BIN variable in linux case branch (#141)
* fix: improve ossutil install logic in GitHub Actions workflow

* wip

* wip

* fix: add missing OSSUTIL_BIN variable in linux case branch
2025-07-09 22:36:37 +08:00
安正超
1061ce11a3 fix: improve ossutil install logic in GitHub Actions workflow (#139)
* fix: improve ossutil install logic in GitHub Actions workflow

* wip

* wip
2025-07-09 21:37:38 +08:00
loverustfs
9f9a74000d Fix dockerfile link error (#138)
* fix unzip error

* fix url change error

2025-07-09 21:04:10 +08:00
shiro.lee
d1863018df Merge pull request #137 from shiroleeee/windows_start
fix: troubleshooting startup failure in Windows System
2025-07-09 20:51:42 +08:00
shiro
166080aac8 fix: troubleshooting startup failure in Windows System 2025-07-09 20:32:20 +08:00
loverustfs
78b2487639 Delete GUI build workflow 2025-07-09 19:57:01 +08:00
loverustfs
79f4e81fea disable ubuntu & macos GUI
2025-07-09 19:50:42 +08:00
loverustfs
28da78d544 Add image fix build error 2025-07-09 19:03:01 +08:00
loverustfs
df2eb9bc6a docs: add status warning 2025-07-09 08:23:40 +00:00
loverustfs
7c20d92fe5 wip: fix ossutil 2025-07-09 08:18:31 +00:00
houseme
b4c316c662 fix logger format (#134)
* fix logger format

* fmt
2025-07-09 16:05:22 +08:00
loverustfs
411b511937 fix: oss utils 2025-07-09 07:48:23 +00:00
loverustfs
c902475443 fix: oss utils 2025-07-09 07:21:23 +00:00
weisd
00d8008a89 Feat/region (#132)
* add region config
2025-07-09 14:48:51 +08:00
houseme
36acb5bce9 feat(console): Enhance network address handling for WebUI (#129)
* add crates homepage,description,keywords,categories,documentation

* add readme

* modify version 0.0.3

* cargo fmt

* fix: yaml.docker-compose.security.no-new-privileges.no-new-privileges-docker-compose.yml (#63)

* Feature up/ilm (#61)

* fix delete-marker expiration. add api_restore.

* remove target return 204

* log level

* fix: make lint build and clippy happy (#71)

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make ci and local use the same toolchain (#72)

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* feat: optimize GitHub Actions workflows with performance improvements (#77)

* feat: optimize GitHub Actions workflows with performance improvements

- Rename workflows with more descriptive names
- Add unified setup action for consistent environment setup
- Optimize caching strategy with Swatinem/rust-cache@v2
- Implement skip-check mechanism to avoid duplicate builds
- Simplify matrix builds with better include/exclude logic
- Add intelligent build strategy checks
- Optimize Docker multi-arch builds
- Improve artifact naming and retention
- Add performance testing with benchmark support
- Enhance security audit with dependency scanning
- Change Chinese comments to English for better maintainability

Performance improvements:
- CI testing: ~35 min (42% faster)
- Build release: ~60 min (50% faster)
- Docker builds: ~45 min (50% faster)
- Security audit: ~8 min (47% faster)

* fix: correct secrets context usage in GitHub Actions workflow

- Move environment variables to job level to fix secrets access issue
- Fix unrecognized named-value 'secrets' error in if condition
- Ensure OSS upload step can properly check for required secrets

* fix: resolve GitHub API rate limit by adding authentication token

- Add github-token input to setup action to authenticate GitHub API requests
- Pass GITHUB_TOKEN to all setup action usages to avoid rate limiting
- Fix arduino/setup-protoc@v3 API access issues in CI/CD workflows
- Ensure protoc installation can successfully access GitHub releases API

* fix:make bucket err (#85)

* Rename DEVELOPMENT.md to CONTRIBUTING.md

* Create issue-translator.yml (#89)

Enable Issues Translator

* fix(dockerfile): correct env variable names for access/secret key and improve compatibility (#90)

* fix: restore Zig and cargo-zigbuild caching in GitHub Actions setup action (#92)

* fix: restore Zig and cargo-zigbuild caching in GitHub Actions setup action

Use mlugg/setup-zig and taiki-e/cache-cargo-install-action to speed up cross-compilation tool installation and avoid repeated downloads. All comments and code are in English.

* fix: use correct taiki-e/install-action for cargo-zigbuild

Use taiki-e/install-action@cargo-zigbuild instead of taiki-e/cache-cargo-install-action@v2 to match the original implementation from PR #77.

* refactor: remove explicit Zig version to use latest stable

* Create CODE_OF_CONDUCT.md

* Create SECURITY.md

* Update issue templates

* Create CLA.md

* docs: update PR template to English version

* fix: improve data scanner random sleep calculation

- Fix random number generation API usage
- Adjust sleep calculation to follow MinIO pattern
- Ensure proper random range for scanner cycles

Signed-off-by: junxiang Mu <1948535941@qq.com>

* fix: support ipv6

* improve log

* add client ip log

* Update rustfs/src/console.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* improve code

* feat: unify package format to zip for all platforms

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: kira-offgrid <kira@offgridsec.com>
Co-authored-by: likewu <likewu@126.com>
Co-authored-by: laoliu <lygn128@163.com>
Co-authored-by: yihong <zouzou0208@gmail.com>
Co-authored-by: 安正超 <anzhengchao@gmail.com>
Co-authored-by: weisd <im@weisd.in>
Co-authored-by: Yone <zhiyu@live.cn>
Co-authored-by: loverustfs <155562731+loverustfs@users.noreply.github.com>
Co-authored-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-09 14:39:40 +08:00
overtrue
e033b019f6 feat: align GUI artifact retention with build-rustfs 2025-07-09 13:19:34 +08:00
overtrue
259b80777e feat: align build-gui condition with build-rustfs 2025-07-09 13:19:11 +08:00
overtrue
abdfad8521 feat: unify package format to zip for all platforms 2025-07-09 12:56:39 +08:00
lihaixing
c498fbcb27 fix: drop writers to close all files, this is to prevent FileAccessDenied errors when renaming data 2025-07-09 11:09:22 +08:00
loverustfs
874d486b1e fix workflow 2025-07-09 10:28:53 +08:00
weisd
21516251b0 fix:ci (#124) 2025-07-09 09:49:27 +08:00
neo
a2f83b0d2d doc: Add links to translated README versions (#119)
Added language selection links to the README for easier access to translated versions: German, Spanish, French, Japanese, Korean, Portuguese, and Russian.
2025-07-09 09:34:43 +08:00
overtrue
aa65766312 fix: api rate limit 2025-07-09 09:16:04 +08:00
overtrue
660f004cfd fix: api rate limit 2025-07-09 09:11:46 +08:00
loverustfs
6d2c420f54 fix unzip error (#117) 2025-07-09 01:19:12 +08:00
安正超
5f0b9a5fa8 chore: remove skip-duplicate and skip-check jobs from workflows (#116) 2025-07-08 23:55:21 +08:00
安正超
8378e308e0 fix: prevent overwriting existing release content in build workflow (#115) 2025-07-08 23:29:45 +08:00
overtrue
b9f54519fd fix: prevent overwriting existing release content in build workflow 2025-07-08 23:27:13 +08:00
overtrue
4108a9649f refactor: optimize performance workflow trigger conditions
- Replace paths-ignore with paths for more precise control
- Only trigger on Rust source files, Cargo files, and workflow itself
- Improve efficiency by avoiding unnecessary performance tests
- Follow best practices for targeted workflow execution
2025-07-08 23:24:50 +08:00
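The trigger narrowing this commit describes can be sketched roughly as follows (the exact file list is an assumption based on the commit message, not a copy of performance.yml):

```yaml
# Sketch of a narrowed trigger for performance.yml (paths are illustrative)
on:
  push:
    branches: [main]
    # 'paths' runs the workflow ONLY when one of these files changes;
    # the replaced 'paths-ignore' would run it on everything EXCEPT its list.
    paths:
      - '**/*.rs'
      - '**/Cargo.toml'
      - '**/Cargo.lock'
      - '.github/workflows/performance.yml'
```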
安正超
6244e23451 refactor: simplify workflow skip logic using do_not_skip parameter (#114)
* feat: ensure workflows never skip execution during version releases

- Modified skip-duplicate-actions to never skip when pushing tags
- Updated all workflow jobs to force execution for tag pushes (version releases)
- Ensures complete CI/CD pipeline execution for releases including:
  - All tests and lint checks
  - Multi-platform builds
  - GUI builds
  - Release asset creation
  - OSS uploads

This guarantees that version releases always undergo full validation
and build processes, maintaining release quality and consistency.

* refactor: simplify workflow skip logic using do_not_skip parameter

- Replace complex conditional expressions with do_not_skip: ['release', 'push']
- Add skip-duplicate-actions to docker.yml workflow
- Ensure all workflows use consistent skip mechanism
- Maintain release and tag push execution guarantee
- Simplify job conditions by removing redundant tag checks

This change makes workflows more maintainable and follows
official skip-duplicate-actions best practices.
2025-07-08 23:08:45 +08:00
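The `do_not_skip` mechanism described above can be sketched like this (job names are illustrative; `fkirc/skip-duplicate-actions` is assumed to be the action in use, as its option names match the commit message):

```yaml
# Illustrative skip-duplicate setup: release and push events are never
# skipped, which guarantees tag pushes always run the full pipeline.
jobs:
  pre-check:
    runs-on: ubuntu-latest
    outputs:
      should_skip: ${{ steps.skip.outputs.should_skip }}
    steps:
      - id: skip
        uses: fkirc/skip-duplicate-actions@v5
        with:
          do_not_skip: '["release", "push"]'

  build:
    needs: pre-check
    if: needs.pre-check.outputs.should_skip != 'true'
    runs-on: ubuntu-latest
    steps:
      - run: echo "build steps go here"
```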
安正超
713b322f99 feat: enhance build and release workflow with multi-platform support (#113)
* feat: enhance build and release workflow with multi-platform support

- Add Windows support (x86_64 and ARM64) to build matrix
- Add macOS Intel x86_64 support alongside Apple Silicon
- Improve cross-platform builds with proper toolchain selection
- Use GitHub CLI (gh) for release management instead of GitHub Actions
- Add automatic checksum generation (SHA256/SHA512) for all binaries
- Support different archive formats per platform (zip for Windows, tar.gz for Unix)
- Add comprehensive release notes with installation guides
- Enhanced error handling for console assets download
- Platform-specific build information in packages
- Support both binary and GUI application releases
- Update OSS upload to handle multiple file formats

This brings RustFS builds up to enterprise-grade standards with:
- 6 binary targets (Linux x86_64/ARM64, macOS x86_64/ARM64, Windows x86_64/ARM64)
- Professional release management with checksums
- User-friendly installation instructions
- Multi-platform GUI applications

* feat: add core development principles to cursor rules

- Add precision-first development principle: 每次改动都要精准,没把握就别改
- Add GitHub CLI priority rule: GitHub PR 创建优先使用 gh 命令
- Emphasize careful analysis before making changes
- Promote use of gh commands for better automation and integration

* refactor: translate cursor rules to English

- Translate core development principles from Chinese to English
- Maintain consistency with project's English-first policy
- Update 'Every change must be precise' principle
- Update 'GitHub PR creation prioritizes gh command usage' rule
- Ensure all cursor rules are in English for better accessibility

* fix: prevent workflow changes from triggering CI/CD pipelines

- Add .github/** to paths-ignore in build.yml workflow
- Add .github/** to paths-ignore in docker.yml workflow
- Update skip-duplicate paths_ignore to include .github files
- Workflow changes should not trigger performance, build, or docker workflows
- Saves unnecessary CI/CD resource usage when updating workflow configurations
- Consistent with performance.yml which already ignores .github/**
2025-07-08 22:49:35 +08:00
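The checksum generation and `gh`-based release steps this commit describes could look roughly like this (the artifacts directory, archive names, and notes file are assumptions):

```yaml
- name: Generate SHA256/SHA512 checksums
  shell: bash
  run: |
    cd artifacts
    for f in *.zip *.tar.gz; do
      [ -e "$f" ] || continue
      sha256sum "$f" > "$f.sha256sum"
      sha512sum "$f" > "$f.sha512sum"
    done

- name: Create release with GitHub CLI
  env:
    GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  run: |
    # gh resolves the repository from the checkout;
    # GITHUB_REF_NAME is the pushed tag name
    gh release create "$GITHUB_REF_NAME" artifacts/* \
      --title "RustFS $GITHUB_REF_NAME" \
      --notes-file release-notes.md
```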
安正超
e1a5a195c3 feat: enhance build and release workflow with multi-platform support (#112)
* feat: enhance build and release workflow with multi-platform support

- Add Windows support (x86_64 and ARM64) to build matrix
- Add macOS Intel x86_64 support alongside Apple Silicon
- Improve cross-platform builds with proper toolchain selection
- Use GitHub CLI (gh) for release management instead of GitHub Actions
- Add automatic checksum generation (SHA256/SHA512) for all binaries
- Support different archive formats per platform (zip for Windows, tar.gz for Unix)
- Add comprehensive release notes with installation guides
- Enhanced error handling for console assets download
- Platform-specific build information in packages
- Support both binary and GUI application releases
- Update OSS upload to handle multiple file formats

This brings RustFS builds up to enterprise-grade standards with:
- 6 binary targets (Linux x86_64/ARM64, macOS x86_64/ARM64, Windows x86_64/ARM64)
- Professional release management with checksums
- User-friendly installation instructions
- Multi-platform GUI applications

* feat: add core development principles to cursor rules

- Add precision-first development principle: every change must be precise; don't modify unless you're confident
- Add GitHub CLI priority rule: prefer the `gh` command when creating GitHub PRs
- Emphasize careful analysis before making changes
- Promote use of gh commands for better automation and integration

* refactor: translate cursor rules to English

- Translate core development principles from Chinese to English
- Maintain consistency with project's English-first policy
- Update 'Every change must be precise' principle
- Update 'GitHub PR creation prioritizes gh command usage' rule
- Ensure all cursor rules are in English for better accessibility
2025-07-08 22:39:41 +08:00
安正超
bc37417d6c ci: fix workflows triggering on documentation-only changes (#111)
- Fix performance.yml: now ignores *.md, README*, and docs/**
- Fix build.yml: now ignores documentation files and images
- Fix docker.yml: prevent Docker builds on README changes
- Replace 'paths:' with 'paths-ignore:' to properly exclude docs
- Reduces unnecessary CI runs for documentation-only PRs

This resolves the issue where README changes triggered expensive
CI pipelines including Performance Testing and Docker builds.
2025-07-08 21:20:18 +08:00
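The `paths-ignore` form adopted here is the complement of a `paths` filter: the workflow runs unless the push touches only the listed files. A minimal sketch, using the file patterns named in the commit message:

```yaml
on:
  push:
    branches: [main]
    paths-ignore:
      - '**/*.md'
      - 'README*'
      - 'docs/**'
```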
安正超
3dbcaaa221 docs: simplify crates README files and enforce PR-only workflow (#110)
* docs: simplify all crates README files

- Remove extensive code examples and detailed documentation
- Convert to minimal module introductions with core feature lists
- Direct users to main RustFS repository for comprehensive docs
- Updated 20 crate README files for consistency and brevity

Files updated:
- crates/rio/README.md (415→15 lines)
- crates/s3select-api/README.md (592→15 lines)
- crates/s3select-query/README.md (658→15 lines)
- crates/signer/README.md (407→15 lines)
- crates/utils/README.md (395→15 lines)
- crates/workers/README.md (463→15 lines)
- crates/zip/README.md (408→15 lines)

* docs: restore original headers in crates README files

- Add back RustFS logo image and CI badges
- Restore formatted headers and structured layout
- Keep simplified content with module introductions
- Maintain consistent documentation structure across all crates

All 20 crate README files now have proper headers while keeping
the simplified content that directs users to the main repository.

* rules: enforce PR-only workflow for main branch

- Strengthen rule that ALL changes must go through pull requests
- Explicitly forbid direct commits to main branch under any circumstances
- Add comprehensive PR requirements and enforcement guidelines
- Clarify that PRs are the ONLY way to merge to main branch
- Add requirement for PR approval before merging
- Include enforcement mechanisms for branch protection
2025-07-08 21:10:07 +08:00
overtrue
49f480d346 fix: resolve GitHub Actions build failures and optimize cross-compilation
- Remove invalid github-token parameter from arduino/setup-protoc action
- Fix cross-compilation RUSTFLAGS issue by conditionally setting target-cpu=native
- Update workflow tag triggers from v* to * for non-v prefixed tags
- Optimize Zig and cargo-zigbuild installation using official actions

This resolves build failures in aarch64-unknown-linux-musl target where
zig was receiving invalid x86_64 CPU flags during cross-compilation.
2025-07-08 20:21:11 +08:00
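The conditional `target-cpu=native` fix can be sketched like this (the `matrix.cross` flag is an assumed matrix variable, not the workflow's actual name for it):

```yaml
# Only native builds get target-cpu=native; cross builds (e.g. zig targeting
# aarch64-unknown-linux-musl) must not inherit host x86_64 CPU flags.
- name: Configure RUSTFLAGS
  shell: bash
  run: |
    if [ "${{ matrix.cross }}" = "true" ]; then
      echo "RUSTFLAGS=" >> "$GITHUB_ENV"
    else
      echo "RUSTFLAGS=-C target-cpu=native" >> "$GITHUB_ENV"
    fi
```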
安正超
055a99ba25 fix: github flow (#107) 2025-07-08 20:16:18 +08:00
weisd
2bd11d476e fix: delete empty dir (#100)
* fix: delete empty dir
2025-07-08 15:08:20 +08:00
guojidan
297004c259 Merge pull request #96 from guojidan/scanner
fix: improve data scanner random sleep calculation
2025-07-08 11:36:36 +08:00
junxiang Mu
4e2c4d8dba fix: improve data scanner random sleep calculation
- Fix random number generation API usage
- Adjust sleep calculation to follow MinIO pattern
- Ensure proper random range for scanner cycles

Signed-off-by: junxiang Mu <1948535941@qq.com>
2025-07-08 11:15:06 +08:00
loverustfs
0626099c3b docs: update PR template to English version 2025-07-08 01:46:36 +00:00
loverustfs
107ddcf394 Create CLA.md 2025-07-08 09:27:06 +08:00
安正超
8893ffc10f Update issue templates 2025-07-08 09:06:11 +08:00
安正超
f23e855d23 Create SECURITY.md 2025-07-08 09:05:28 +08:00
安正超
8366413970 Create CODE_OF_CONDUCT.md 2025-07-08 09:04:37 +08:00
安正超
9862677fcf fix: restore Zig and cargo-zigbuild caching in GitHub Actions setup action (#92)
* fix: restore Zig and cargo-zigbuild caching in GitHub Actions setup action

Use mlugg/setup-zig and taiki-e/cache-cargo-install-action to speed up cross-compilation tool installation and avoid repeated downloads. All comments and code are in English.

* fix: use correct taiki-e/install-action for cargo-zigbuild

Use taiki-e/install-action@cargo-zigbuild instead of taiki-e/cache-cargo-install-action@v2 to match the original implementation from PR #77.

* refactor: remove explicit Zig version to use latest stable
2025-07-07 23:15:40 +08:00
安正超
e50bc4c60c fix(dockerfile): correct env variable names for access/secret key and improve compatibility (#90) 2025-07-07 23:05:23 +08:00
Yone
5f6104731d Create issue-translator.yml (#89)
Enable Issues Translator
2025-07-07 23:00:05 +08:00
安正超
6a6866c337 Rename DEVELOPMENT.md to CONTRIBUTING.md 2025-07-07 22:59:38 +08:00
weisd
ce2ce4b16e fix:make bucket err (#85) 2025-07-07 18:07:18 +08:00
安正超
1ecd5a87d9 feat: optimize GitHub Actions workflows with performance improvements (#77)
* feat: optimize GitHub Actions workflows with performance improvements

- Rename workflows with more descriptive names
- Add unified setup action for consistent environment setup
- Optimize caching strategy with Swatinem/rust-cache@v2
- Implement skip-check mechanism to avoid duplicate builds
- Simplify matrix builds with better include/exclude logic
- Add intelligent build strategy checks
- Optimize Docker multi-arch builds
- Improve artifact naming and retention
- Add performance testing with benchmark support
- Enhance security audit with dependency scanning
- Change Chinese comments to English for better maintainability

Performance improvements:
- CI testing: ~35 min (42% faster)
- Build release: ~60 min (50% faster)
- Docker builds: ~45 min (50% faster)
- Security audit: ~8 min (47% faster)

* fix: correct secrets context usage in GitHub Actions workflow

- Move environment variables to job level to fix secrets access issue
- Fix unrecognized named-value 'secrets' error in if condition
- Ensure OSS upload step can properly check for required secrets

* fix: resolve GitHub API rate limit by adding authentication token

- Add github-token input to setup action to authenticate GitHub API requests
- Pass GITHUB_TOKEN to all setup action usages to avoid rate limiting
- Fix arduino/setup-protoc@v3 API access issues in CI/CD workflows
- Ensure protoc installation can successfully access GitHub releases API
2025-07-07 12:38:17 +08:00
yihong
72aead5466 fix: make ci and local use the same toolchain (#72)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-07-07 10:40:53 +08:00
yihong
abd5dff9b5 fix: make lint build and clippy happy (#71)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-07-07 09:55:53 +08:00
laoliu
040b05c318 Merge pull request #68 from rustfs/bucket-replication
change some log level
2025-07-06 20:27:49 +08:00
laoliu
ce470c95c4 log level 2025-07-06 12:26:24 +00:00
laoliu
32e531bc61 Merge pull request #67 from rustfs/bucket-replication
remove target return 204
2025-07-06 15:42:24 +08:00
laoliu
dcf25e46af remove target return 204 2025-07-06 07:39:09 +00:00
likewu
2b079ae065 Feature up/ilm (#61)
* fix delete-marker expiration. add api_restore.
2025-07-06 12:31:08 +08:00
kira-offgrid
d41ccc1551 fix: yaml.docker-compose.security.no-new-privileges.no-new-privileges-docker-compose.yml (#63) 2025-07-06 12:28:44 +08:00
安正超
fa17f7b1e3 feat: add comprehensive README documentation for all RustFS submodules (#48) 2025-07-04 23:02:13 +08:00
loverustfs
c41299a29f Merge pull request #47 from rustfs/feature-up/ilm
Feature up/ilm
2025-07-04 22:50:35 +08:00
likewu
79156d2d82 fix 2025-07-04 21:57:51 +08:00
likewu
26542b741e request::Builder -> request::Request<Body> 2025-07-04 16:59:15 +08:00
loverustfs
8b2b4a0146 Add default username and password 2025-07-04 11:17:06 +08:00
houseme
5cf9087113 modify version 0.0.3 2025-07-04 09:17:48 +08:00
Nugine
dd12250987 build: upgrade s3s (#42) 2025-07-04 08:39:56 +08:00
loverustfs
e172b277f2 Merge pull request #41 from rustfs/feature/tls
Refactor(server): Encapsulate service creation within connection handler
2025-07-04 08:15:01 +08:00
houseme
086331b8e7 fix 2025-07-04 01:48:35 +08:00
houseme
96d22c3276 Refactor(server): Encapsulate service creation within connection handler
Move the construction of the hybrid service stack, including all middleware and the RPC service, from the main `run` function into the `process_connection` function.

This change ensures that each incoming connection gets its own isolated service instance. This improves modularity by making the connection handling logic more self-contained and simplifies the main server loop.

Key changes:
- The `hybrid_service` and `rpc_service` are now created inside `process_connection`.
- The `run` function's responsibility is reduced to accepting TCP connections and spawning tasks for `process_connection`.
2025-07-04 01:33:16 +08:00
houseme
caa3564439 Merge branch 'main' of github.com:rustfs/rustfs into feature/tls
* 'main' of github.com:rustfs/rustfs:
  Modify quickstart
  fix Dockerfile
  fix Dockerfile
2025-07-03 20:14:40 +08:00
loverustfs
18933fdb58 Modify quickstart 2025-07-03 19:11:44 +08:00
loverustfs
65a731a243 fix Dockerfile 2025-07-03 18:59:42 +08:00
loverustfs
89035d3b3b fix Dockerfile 2025-07-03 18:35:44 +08:00
houseme
c6527643a3 merge 2025-07-03 17:35:02 +08:00
loverustfs
b9157d5e9d Modify Dockerfile 2025-07-03 17:32:32 +08:00
loverustfs
20be2d9859 Fix the error of anonymous users viewing pictures 2025-07-03 16:36:45 +08:00
weisd
855541678e fix(ecstore): doc test (#38) 2025-07-03 16:23:36 +08:00
weisd
73d3d8ab5c refactor: simplify hash algorithm API and remove custom hasher implementation (#37)
- Remove custom hasher.rs module and Hasher trait
- Replace with HashAlgorithm enum for better type safety
- Simplify hash calculation from write()+sum() to hash_encode()
- Remove stateful hasher operations (reset, write, sum)
- Update all hash usage in ecstore client modules
- Maintain compatibility with existing checksum functionality
2025-07-03 15:53:00 +08:00
weisd
6983a3ffce feat: change default listen to IPv4 and add panic recovery (#36) 2025-07-03 13:51:38 +08:00
loverustfs
d6653f1258 Delete TODO.md 2025-07-03 08:55:58 +08:00
安正超
7ab53a6d7d Update README_ZH.md 2025-07-03 08:53:52 +08:00
安正超
85ee9811d8 Update README.md 2025-07-03 08:53:38 +08:00
安正超
61bd76f77e Update README_ZH.md 2025-07-03 08:52:55 +08:00
安正超
8cf611426b Update README.md 2025-07-03 08:52:38 +08:00
安正超
b0ac977a3d feat: restrict build triggers and add GitHub release automation (#34)
- Only execute builds on tag push, scheduled runs, or commit message contains --build
- Add latest.json version tracking to rustfs-version OSS bucket
- Create GitHub Release with all build artifacts automatically
- Update comments to English for consistency
- Reduce unnecessary CI resource usage while maintaining automation
2025-07-02 23:31:17 +08:00
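The trigger policy described above (tag push, schedule, or a `--build` marker in the commit message) can be sketched as a job-level gate; this is a hypothetical condition matching the commit's description, not the actual workflow:

```yaml
jobs:
  build:
    if: >-
      startsWith(github.ref, 'refs/tags/') ||
      github.event_name == 'schedule' ||
      contains(github.event.head_commit.message, '--build')
    runs-on: ubuntu-latest
    steps:
      - run: echo "build runs only for tags, schedules, or opt-in commits"
```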
132 changed files with 4463 additions and 2272 deletions


@@ -5,15 +5,18 @@
### 🚨 NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH 🚨
- **This is the most important rule - NEVER modify code directly on main or master branch**
- **ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO EXCEPTIONS**
- **Always work on feature branches and use pull requests for all changes**
- **Any direct commits to master/main branch are strictly forbidden**
- **Pull requests are the ONLY way to merge code to main branch**
- Before starting any development, always:
1. `git checkout main` (switch to main branch)
2. `git pull` (get latest changes)
3. `git checkout -b feat/your-feature-name` (create and switch to feature branch)
4. Make your changes on the feature branch
5. Commit and push to the feature branch
6. Create a pull request for review
6. **Create a pull request for review - THIS IS MANDATORY**
7. **Wait for PR approval and merge through GitHub interface only**
## Project Overview
@@ -817,6 +820,7 @@ These rules should serve as guiding principles when developing the RustFS projec
- **🚨 CRITICAL: NEVER modify code directly on main or master branch - THIS IS ABSOLUTELY FORBIDDEN 🚨**
- **⚠️ ANY DIRECT COMMITS TO MASTER/MAIN WILL BE REJECTED AND MUST BE REVERTED IMMEDIATELY ⚠️**
- **🔒 ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO DIRECT COMMITS TO MAIN UNDER ANY CIRCUMSTANCES 🔒**
- **Always work on feature branches - NO EXCEPTIONS**
- Always check the .cursorrules file before starting to ensure you understand the project guidelines
- **MANDATORY workflow for ALL changes:**
@@ -826,13 +830,39 @@ These rules should serve as guiding principles when developing the RustFS projec
4. Make your changes ONLY on the feature branch
5. Test thoroughly before committing
6. Commit and push to the feature branch
7. Create a pull request for code review
7. **Create a pull request for code review - THIS IS THE ONLY WAY TO MERGE TO MAIN**
8. **Wait for PR approval before merging - NEVER merge your own PRs without review**
- Use descriptive branch names following the pattern: `feat/feature-name`, `fix/issue-name`, `refactor/component-name`, etc.
- **Double-check current branch before ANY commit: `git branch` to ensure you're NOT on main/master**
- Ensure all changes are made on feature branches and merged through pull requests
- **Pull Request Requirements:**
- All changes must be submitted via PR regardless of size or urgency
- PRs must include comprehensive description and testing information
- PRs must pass all CI/CD checks before merging
- PRs require at least one approval from code reviewers
- Even hotfixes and emergency changes must go through PR process
- **Enforcement:**
- Main branch should be protected with branch protection rules
- Direct pushes to main should be blocked by repository settings
- Any accidental direct commits to main must be immediately reverted via PR
#### Development Workflow
## 🎯 **Core Development Principles**
- **🔴 Every change must be precise - don't modify unless you're confident**
- Carefully analyze code logic and ensure complete understanding before making changes
- When uncertain, prefer asking users or consulting documentation over blind modifications
- Use small iterative steps, modify only necessary parts at a time
- Evaluate impact scope before changes to ensure no new issues are introduced
- **🚀 GitHub PR creation prioritizes gh command usage**
- Prefer using `gh pr create` command to create Pull Requests
- Avoid having users manually create PRs through web interface
- Provide clear and professional PR titles and descriptions
- Using `gh` commands ensures better integration and automation
## 📝 **Code Quality Requirements**
- Use English for all code comments, documentation, and variable names
- Write meaningful and descriptive names for variables, functions, and methods
- Avoid meaningless test content like "debug 111" or placeholder values

.github/ISSUE_TEMPLATE/bug_report.md (new file, 38 lines)

@@ -0,0 +1,38 @@
---
name: Bug report
about: Create a report to help us improve
title: ''
labels: ''
assignees: ''
---
**Describe the bug**
A clear and concise description of what the bug is.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.


@@ -0,0 +1,20 @@
---
name: Feature request
about: Suggest an idea for this project
title: ''
labels: ''
assignees: ''
---
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.


@@ -12,56 +12,96 @@
# See the License for the specific language governing permissions and
# limitations under the License.
name: "setup"
description: "setup environment for rustfs"
name: "Setup Rust Environment"
description: "Setup Rust development environment with caching for RustFS"
inputs:
rust-version:
required: true
description: "Rust version to install"
required: false
default: "stable"
description: "Rust version to use"
cache-shared-key:
required: true
default: ""
description: "Cache key for shared cache"
description: "Shared cache key for Rust dependencies"
required: false
default: "rustfs-deps"
cache-save-if:
required: true
default: ${{ github.ref == 'refs/heads/main' }}
description: "Cache save condition"
runs-on:
required: true
default: "ubuntu-latest"
description: "Running system"
description: "Condition for saving cache"
required: false
default: "true"
install-cross-tools:
description: "Install cross-compilation tools"
required: false
default: "false"
target:
description: "Target architecture to add"
required: false
default: ""
github-token:
description: "GitHub token for API access"
required: false
default: ""
runs:
using: "composite"
steps:
- name: Install system dependencies
if: inputs.runs-on == 'ubuntu-latest'
- name: Install system dependencies (Ubuntu)
if: runner.os == 'Linux'
shell: bash
run: |
sudo apt update
sudo apt install -y musl-tools build-essential lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev
sudo apt-get update
sudo apt-get install -y \
musl-tools \
build-essential \
lld \
libdbus-1-dev \
libwayland-dev \
libwebkit2gtk-4.1-dev \
libxdo-dev \
pkg-config \
libssl-dev
- uses: arduino/setup-protoc@v3
- name: Cache protoc binary
id: cache-protoc
uses: actions/cache@v4
with:
path: ~/.local/bin/protoc
key: protoc-31.1-${{ runner.os }}-${{ runner.arch }}
- name: Install protoc
if: steps.cache-protoc.outputs.cache-hit != 'true'
uses: arduino/setup-protoc@v3
with:
version: "31.1"
repo-token: ${{ inputs.github-token }}
- uses: Nugine/setup-flatc@v1
- name: Install flatc
uses: Nugine/setup-flatc@v1
with:
version: "25.2.10"
- uses: dtolnay/rust-toolchain@master
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@stable
with:
toolchain: ${{ inputs.rust-version }}
targets: ${{ inputs.target }}
components: rustfmt, clippy
- uses: Swatinem/rust-cache@v2
- name: Install Zig
if: inputs.install-cross-tools == 'true'
uses: mlugg/setup-zig@v2
- name: Install cargo-zigbuild
if: inputs.install-cross-tools == 'true'
uses: taiki-e/install-action@cargo-zigbuild
- name: Setup Rust cache
uses: Swatinem/rust-cache@v2
with:
cache-all-crates: true
cache-on-failure: true
shared-key: ${{ inputs.cache-shared-key }}
save-if: ${{ inputs.cache-save-if }}
- uses: mlugg/setup-zig@v2
- uses: taiki-e/install-action@cargo-zigbuild
# Cache workspace dependencies
workspaces: |
. -> target
cli/rustfs-gui -> cli/rustfs-gui/target

.github/pull_request_template.md (new file, 39 lines)

@@ -0,0 +1,39 @@
<!--
Pull Request Template for RustFS
-->
## Type of Change
- [ ] New Feature
- [ ] Bug Fix
- [ ] Documentation
- [ ] Performance Improvement
- [ ] Test/CI
- [ ] Refactor
- [ ] Other:
## Related Issues
<!-- List related Issue numbers, e.g. #123 -->
## Summary of Changes
<!-- Briefly describe the main changes and motivation for this PR -->
## Checklist
- [ ] I have read and followed the [CONTRIBUTING.md](CONTRIBUTING.md) guidelines
- [ ] Code is formatted with `cargo fmt --all`
- [ ] Passed `cargo clippy --all-targets --all-features -- -D warnings`
- [ ] Passed `cargo check --all-targets`
- [ ] Added/updated necessary tests
- [ ] Documentation updated (if needed)
- [ ] CI/CD passed (if applicable)
## Impact
- [ ] Breaking change (compatibility)
- [ ] Requires doc/config/deployment update
- [ ] Other impact:
## Additional Notes
<!-- Any extra information for reviewers -->
---
Thank you for your contribution! Please ensure your PR follows the community standards ([CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md)) and sign the CLA if this is your first contribution.

.github/workflows/audit.yml

@@ -12,28 +12,67 @@
# See the License for the specific language governing permissions and
# limitations under the License.
name: Security Audit
on:
push:
branches: [main]
paths:
- '**/Cargo.toml'
- '**/Cargo.lock'
- '.github/workflows/audit.yml'
pull_request:
branches: [main]
paths:
- '**/Cargo.toml'
- '**/Cargo.lock'
- '.github/workflows/audit.yml'
schedule:
- cron: '0 0 * * 0' # Weekly on Sunday at midnight UTC
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
jobs:
security-audit:
name: Security Audit
runs-on: ubuntu-latest
timeout-minutes: 15
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Install cargo-audit
uses: taiki-e/install-action@v2
with:
tool: cargo-audit
- name: Run security audit
run: |
cargo audit -D warnings --json | tee audit-results.json
- name: Upload audit results
if: always()
uses: actions/upload-artifact@v4
with:
name: security-audit-results-${{ github.run_number }}
path: audit-results.json
retention-days: 30
dependency-review:
name: Dependency Review
runs-on: ubuntu-latest
if: github.event_name == 'pull_request'
permissions:
contents: read
pull-requests: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Dependency Review
uses: actions/dependency-review-action@v4
with:
fail-on-severity: moderate
comment-summary-in-pr: true

.github/workflows/build.yml

@@ -12,572 +12,406 @@
# See the License for the specific language governing permissions and
# limitations under the License.
name: Build and Release
on:
push:
tags: ["*"]
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
pull_request:
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
schedule:
- cron: "0 0 * * 0" # Weekly on Sunday at midnight UTC
workflow_dispatch:
inputs:
force_build:
description: "Force build even without changes"
required: false
default: false
type: boolean
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
# Optimize build performance
CARGO_INCREMENTAL: 0
jobs:
# Business logic layer: checks that determine the build strategy
build-check:
name: Build Strategy Check
runs-on: ubuntu-latest
outputs:
should_build: ${{ steps.check.outputs.should_build }}
build_type: ${{ steps.check.outputs.build_type }}
steps:
- name: Determine build strategy
id: check
run: |
should_build=false
build_type="none"
# Business logic: when we need to build
if [[ "${{ github.event_name }}" == "schedule" ]] || \
[[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ github.event.inputs.force_build }}" == "true" ]] || \
[[ "${{ contains(github.event.head_commit.message, '--build') }}" == "true" ]]; then
should_build=true
build_type="development"
fi
# Always build for tag pushes (version releases)
if [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
should_build=true
build_type="release"
echo "🏷️ Tag detected: forcing release build"
fi
echo "should_build=$should_build" >> $GITHUB_OUTPUT
echo "build_type=$build_type" >> $GITHUB_OUTPUT
echo "Build needed: $should_build (type: $build_type)"
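The decision logic in the step above can be exercised outside the runner. A minimal sketch, assuming example values for `GITHUB_REF` and the event name (the workflow itself reads these from the `github` context):

```shell
# Sketch of the build-strategy decision; GITHUB_REF and EVENT_NAME are
# example stand-ins for values the runner injects.
GITHUB_REF="refs/tags/v1.0.0"
EVENT_NAME="push"

should_build=false
build_type="none"
if [ "$EVENT_NAME" = "schedule" ] || [ "$EVENT_NAME" = "workflow_dispatch" ]; then
  should_build=true
  build_type="development"
fi
# Tag pushes always force a release build, overriding the development type.
case "$GITHUB_REF" in
  refs/tags/*)
    should_build=true
    build_type="release"
    ;;
esac
echo "should_build=$should_build build_type=$build_type"
```

Because the tag check runs last, a tag push is classified as a release build even when it also matches an earlier condition.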
# Build RustFS binaries
build-rustfs:
name: Build RustFS
needs: [build-check]
if: needs.build-check.outputs.should_build == 'true'
runs-on: ${{ matrix.os }}
timeout-minutes: 60
env:
RUSTFLAGS: ${{ matrix.cross == 'false' && '-C target-cpu=native' || '' }}
strategy:
fail-fast: false
matrix:
include:
# Linux builds
- os: ubuntu-latest
target: x86_64-unknown-linux-musl
cross: false
platform: linux
- os: ubuntu-latest
target: aarch64-unknown-linux-musl
cross: true
platform: linux
# macOS builds
- os: macos-latest
target: aarch64-apple-darwin
cross: false
platform: macos
- os: macos-latest
target: x86_64-apple-darwin
cross: false
platform: macos
# Windows builds (temporarily disabled)
# - os: windows-latest
#   target: x86_64-pc-windows-msvc
#   cross: false
#   platform: windows
# - os: windows-latest
#   target: aarch64-pc-windows-msvc
#   cross: true
#   platform: windows
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: stable
target: ${{ matrix.target }}
cache-shared-key: build-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
github-token: ${{ secrets.GITHUB_TOKEN }}
cache-save-if: ${{ github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/') }}
install-cross-tools: ${{ matrix.cross }}
- name: Download static console assets
shell: bash
run: |
mkdir -p ./rustfs/static
if [[ "${{ matrix.platform }}" == "windows" ]]; then
curl.exe -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" -o console.zip --retry 3 --retry-delay 5 --max-time 300
if [[ $? -eq 0 ]]; then
unzip -o console.zip -d ./rustfs/static
rm console.zip
else
echo "Warning: Failed to download console assets, continuing without them"
echo "// Static assets not available" > ./rustfs/static/empty.txt
fi
else
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
-o console.zip --retry 3 --retry-delay 5 --max-time 300
if [[ $? -eq 0 ]]; then
unzip -o console.zip -d ./rustfs/static
rm console.zip
else
echo "Warning: Failed to download console assets, continuing without them"
echo "// Static assets not available" > ./rustfs/static/empty.txt
fi
fi
ls -la ./rustfs/static
- name: Build RustFS
shell: bash
run: |
# Force rebuild by touching build.rs
touch rustfs/build.rs
if [[ "${{ matrix.cross }}" == "true" ]]; then
if [[ "${{ matrix.platform }}" == "windows" ]]; then
# Use cross for Windows ARM64
cargo install cross --git https://github.com/cross-rs/cross
cross build --release --target ${{ matrix.target }} -p rustfs --bins
else
# Use zigbuild for Linux ARM64
cargo zigbuild --release --target ${{ matrix.target }} -p rustfs --bins
fi
else
cargo build --release --target ${{ matrix.target }} -p rustfs --bins
fi
- name: Create release package
id: package
shell: bash
run: |
PACKAGE_NAME="rustfs-${{ matrix.target }}"
# Create zip packages for all platforms
# Ensure zip is available
if ! command -v zip &> /dev/null; then
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then
sudo apt-get update && sudo apt-get install -y zip
fi
fi
cd target/${{ matrix.target }}/release
zip "../../../${PACKAGE_NAME}.zip" rustfs
cd ../../..
echo "package_name=${PACKAGE_NAME}" >> $GITHUB_OUTPUT
echo "package_file=${PACKAGE_NAME}.zip" >> $GITHUB_OUTPUT
echo "Package created: ${PACKAGE_NAME}.zip"
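The `>> $GITHUB_OUTPUT` lines above are what make `steps.package.outputs.*` visible to later steps. A self-contained sketch of the mechanism, using `mktemp` in place of the file the runner normally provides:

```shell
# GITHUB_OUTPUT is normally set by the runner; emulate it with a temp file.
GITHUB_OUTPUT="$(mktemp)"
PACKAGE_NAME="rustfs-x86_64-unknown-linux-musl"
# The runner parses key=value lines from this file into step outputs.
echo "package_name=${PACKAGE_NAME}" >> "$GITHUB_OUTPUT"
echo "package_file=${PACKAGE_NAME}.zip" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"
```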
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: ${{ steps.package.outputs.package_name }}
path: ${{ steps.package.outputs.package_file }}
retention-days: ${{ startsWith(github.ref, 'refs/tags/') && 30 || 7 }}
- name: Upload to Aliyun OSS
if: needs.build-check.outputs.build_type == 'release' && env.OSS_ACCESS_KEY_ID != ''
shell: bash
env:
OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
OSS_REGION: cn-beijing
OSS_ENDPOINT: https://oss-cn-beijing.aliyuncs.com
run: |
# Install ossutil (platform-specific)
OSSUTIL_VERSION="2.1.1"
case "${{ matrix.platform }}" in
linux)
if [[ "$(uname -m)" == "arm64" ]]; then
ARCH="arm64"
else
ARCH="amd64"
fi
OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-linux-${ARCH}.zip"
OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-linux-${ARCH}"
curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
unzip "$OSSUTIL_ZIP"
mv "${OSSUTIL_DIR}/ossutil" /usr/local/bin/
rm -rf "$OSSUTIL_DIR" "$OSSUTIL_ZIP"
chmod +x /usr/local/bin/ossutil
OSSUTIL_BIN=ossutil
;;
macos)
if [[ "$(uname -m)" == "arm64" ]]; then
ARCH="arm64"
else
ARCH="amd64"
fi
OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-mac-${ARCH}.zip"
OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-mac-${ARCH}"
curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
unzip "$OSSUTIL_ZIP"
mv "${OSSUTIL_DIR}/ossutil" /usr/local/bin/
rm -rf "$OSSUTIL_DIR" "$OSSUTIL_ZIP"
chmod +x /usr/local/bin/ossutil
OSSUTIL_BIN=ossutil
;;
esac
# Upload the package file directly to OSS
echo "Uploading ${{ steps.package.outputs.package_file }} to OSS..."
$OSSUTIL_BIN cp "${{ steps.package.outputs.package_file }}" oss://rustfs-artifacts/artifacts/rustfs/ --force
# Create latest.json (only for the first Linux build to avoid duplication)
if [[ "${{ matrix.target }}" == "x86_64-unknown-linux-musl" ]]; then
VERSION="${GITHUB_REF#refs/tags/v}"
echo "{\"version\":\"${VERSION}\",\"release_date\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}" > latest.json
$OSSUTIL_BIN cp latest.json oss://rustfs-version/latest.json --force
fi
# Release management
release:
name: GitHub Release
needs: [build-check, build-rustfs]
if: always() && needs.build-check.outputs.build_type == 'release'
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Download all artifacts
uses: actions/download-artifact@v4
with:
path: ./release-artifacts
- name: Prepare release assets
id: release_prep
run: |
VERSION="${GITHUB_REF#refs/tags/}"
VERSION_CLEAN="${VERSION#v}"
echo "version=${VERSION}" >> $GITHUB_OUTPUT
echo "version_clean=${VERSION_CLEAN}" >> $GITHUB_OUTPUT
# Organize artifacts
mkdir -p ./release-files
# Copy all artifacts (.zip files)
find ./release-artifacts -name "*.zip" -exec cp {} ./release-files/ \;
# Generate checksums for all files
cd ./release-files
if ls *.zip >/dev/null 2>&1; then
sha256sum *.zip >> SHA256SUMS
sha512sum *.zip >> SHA512SUMS
fi
cd ..
# Display what we're releasing
echo "=== Release Files ==="
ls -la ./release-files/
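The checksum generation above can be reproduced locally; a sketch with a stand-in artifact (the file name is illustrative, not one the release produces):

```shell
# Generate and verify SHA256SUMS the same way the release step does.
mkdir -p release-files
echo "demo payload" > release-files/rustfs-demo.zip   # stand-in artifact
(
  cd release-files
  sha256sum *.zip > SHA256SUMS
  sha256sum -c SHA256SUMS   # each listed file should report "OK"
)
```

Users verify a downloaded archive the same way: place it next to `SHA256SUMS` and run `sha256sum -c SHA256SUMS`.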
- name: Create GitHub Release
env:
GH_TOKEN: ${{ github.token }}
run: |
VERSION="${{ steps.release_prep.outputs.version }}"
VERSION_CLEAN="${{ steps.release_prep.outputs.version_clean }}"
# Check if release already exists
if gh release view "$VERSION" >/dev/null 2>&1; then
echo "Release $VERSION already exists, skipping creation"
else
# Get release notes from tag message
RELEASE_NOTES=$(git tag -l --format='%(contents)' "${VERSION}")
if [[ -z "$RELEASE_NOTES" || "$RELEASE_NOTES" =~ ^[[:space:]]*$ ]]; then
RELEASE_NOTES="Release ${VERSION_CLEAN}"
fi
# Determine if this is a prerelease
PRERELEASE_FLAG=""
if [[ "$VERSION" == *"alpha"* ]] || [[ "$VERSION" == *"beta"* ]] || [[ "$VERSION" == *"rc"* ]]; then
PRERELEASE_FLAG="--prerelease"
fi
# Create the release only if it doesn't exist
gh release create "$VERSION" \
--title "RustFS $VERSION_CLEAN" \
--notes "$RELEASE_NOTES" \
$PRERELEASE_FLAG
fi
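The tag parsing and prerelease detection above rely on shell parameter expansion; a standalone sketch (the tag value is an example):

```shell
# Strip the ref prefix and leading "v", then classify the version.
GITHUB_REF="refs/tags/v1.2.3-rc.1"   # example tag ref
VERSION="${GITHUB_REF#refs/tags/}"   # -> v1.2.3-rc.1
VERSION_CLEAN="${VERSION#v}"         # -> 1.2.3-rc.1
PRERELEASE_FLAG=""
case "$VERSION" in
  *alpha*|*beta*|*rc*) PRERELEASE_FLAG="--prerelease" ;;
esac
echo "$VERSION_CLEAN $PRERELEASE_FLAG"
```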
- name: Upload release assets
env:
GH_TOKEN: ${{ github.token }}
run: |
VERSION="${{ steps.release_prep.outputs.version }}"
cd ./release-files
# Upload all binary files
for file in *.zip; do
if [[ -f "$file" ]]; then
echo "Uploading $file..."
gh release upload "$VERSION" "$file" --clobber
fi
done
# Upload checksum files
if [[ -f "SHA256SUMS" ]]; then
echo "Uploading SHA256SUMS..."
gh release upload "$VERSION" "SHA256SUMS" --clobber
fi
if [[ -f "SHA512SUMS" ]]; then
echo "Uploading SHA512SUMS..."
gh release upload "$VERSION" "SHA512SUMS" --clobber
fi
- name: Update release notes
env:
GH_TOKEN: ${{ github.token }}
run: |
VERSION="${{ steps.release_prep.outputs.version }}"
VERSION_CLEAN="${{ steps.release_prep.outputs.version_clean }}"
# Check if release already has custom notes (not auto-generated)
EXISTING_NOTES=$(gh release view "$VERSION" --json body --jq '.body' 2>/dev/null || echo "")
# Only update if release notes are empty or auto-generated
if [[ -z "$EXISTING_NOTES" ]] || [[ "$EXISTING_NOTES" == *"Release ${VERSION_CLEAN}"* ]]; then
echo "Updating release notes for $VERSION"
# Get original release notes from tag
ORIGINAL_NOTES=$(git tag -l --format='%(contents)' "${VERSION}")
if [[ -z "$ORIGINAL_NOTES" || "$ORIGINAL_NOTES" =~ ^[[:space:]]*$ ]]; then
ORIGINAL_NOTES="Release ${VERSION_CLEAN}"
fi
# Use external template file and substitute variables
sed -e "s/\${VERSION}/$VERSION/g" \
-e "s/\${VERSION_CLEAN}/$VERSION_CLEAN/g" \
-e "s/\${ORIGINAL_NOTES}/$(echo "$ORIGINAL_NOTES" | sed 's/[[\.*^$()+?{|]/\\&/g')/g" \
.github/workflows/release-notes-template.md > enhanced_notes.md
# Update the release with enhanced notes
gh release edit "$VERSION" --notes-file enhanced_notes.md
else
echo "Release $VERSION already has custom notes, skipping update to preserve manual edits"
fi


@@ -12,12 +12,11 @@
# See the License for the specific language governing permissions and
# limitations under the License.
name: Continuous Integration
on:
push:
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
@@ -35,10 +34,9 @@ on:
- ".github/workflows/build.yml"
- ".github/workflows/docker.yml"
- ".github/workflows/audit.yml"
- ".github/workflows/performance.yml"
pull_request:
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
@@ -56,13 +54,18 @@ on:
- ".github/workflows/build.yml"
- ".github/workflows/docker.yml"
- ".github/workflows/audit.yml"
- ".github/workflows/performance.yml"
schedule:
- cron: "0 0 * * 0" # Weekly on Sunday at midnight UTC
workflow_dispatch:
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
jobs:
skip-check:
name: Skip Duplicate Actions
permissions:
actions: write
contents: read
@@ -70,59 +73,82 @@ jobs:
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
steps:
- name: Skip duplicate actions
id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
concurrent_skipping: "same_content_newer"
cancel_others: true
paths_ignore: '["*.md", "docs/**", "deploy/**"]'
# Never skip release events and tag pushes
do_not_skip: '["release", "push"]'
test-and-lint:
name: Test and Lint
needs: skip-check
if: needs.skip-check.outputs.should_skip != 'true'
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: stable
cache-shared-key: ci-test-${{ hashFiles('**/Cargo.lock') }}
github-token: ${{ secrets.GITHUB_TOKEN }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Run tests
run: cargo test --all --exclude e2e_test
- name: Check code formatting
run: cargo fmt --all --check
- name: Run clippy lints
run: cargo clippy --all-targets --all-features -- -D warnings
e2e-tests:
name: End-to-End Tests
needs: skip-check
if: needs.skip-check.outputs.should_skip != 'true'
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: stable
cache-shared-key: ci-e2e-${{ hashFiles('**/Cargo.lock') }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install s3s-e2e test tool
uses: taiki-e/cache-cargo-install-action@v2
with:
tool: s3s-e2e
git: https://github.com/Nugine/s3s.git
rev: b7714bfaa17ddfa9b23ea01774a1e7bbdbfc2ca3
- name: Build debug
- name: Build debug binary
run: |
touch rustfs/build.rs
cargo build -p rustfs --bins
- name: Run s3s-e2e
- name: Run end-to-end tests
run: |
s3s-e2e --version
./scripts/e2e-run.sh ./target/debug/rustfs /tmp/rustfs
- uses: actions/upload-artifact@v4
- name: Upload test logs
if: failure()
uses: actions/upload-artifact@v4
with:
name: s3s-e2e.logs
name: e2e-test-logs-${{ github.run_number }}
path: /tmp/rustfs.log
retention-days: 3


@@ -12,155 +12,112 @@
# See the License for the specific language governing permissions and
# limitations under the License.
name: Build and Push Docker Images
name: Docker Images
on:
push:
tags:
- "v*"
branches:
- main
tags: ["*"]
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
pull_request:
branches:
- main
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
workflow_dispatch:
inputs:
push_to_registry:
description: "Push images to registry"
push_images:
description: "Push images to registries"
required: false
default: true
type: boolean
env:
REGISTRY_IMAGE_DOCKERHUB: rustfs/rustfs
REGISTRY_IMAGE_GHCR: ghcr.io/${{ github.repository }}
CARGO_TERM_COLOR: always
REGISTRY_DOCKERHUB: rustfs/rustfs
REGISTRY_GHCR: ghcr.io/${{ github.repository }}
jobs:
# Skip duplicate job runs
skip-check:
permissions:
actions: write
contents: read
# Check if we should build
build-check:
name: Build Check
runs-on: ubuntu-latest
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
should_build: ${{ steps.check.outputs.should_build }}
should_push: ${{ steps.check.outputs.should_push }}
steps:
- id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
concurrent_skipping: "same_content_newer"
cancel_others: true
paths_ignore: '["*.md", "docs/**"]'
# Build RustFS binary for different platforms
build-binary:
needs: skip-check
    # Run only when: 1) a tag is pushed, 2) the commit message contains --build, 3) the workflow is dispatched manually, or 4) the event is a pull request
if: needs.skip-check.outputs.should_skip != 'true' && (startsWith(github.ref, 'refs/tags/') || contains(github.event.head_commit.message, '--build') || github.event_name == 'workflow_dispatch' || github.event_name == 'pull_request')
strategy:
matrix:
include:
- target: x86_64-unknown-linux-musl
os: ubuntu-latest
arch: amd64
use_cross: false
- target: aarch64-unknown-linux-gnu
os: ubuntu-latest
arch: arm64
use_cross: true
runs-on: ${{ matrix.os }}
timeout-minutes: 120
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Rust toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
target: ${{ matrix.target }}
components: rustfmt, clippy
- name: Install cross-compilation dependencies (native build)
if: matrix.use_cross == false
- name: Check build conditions
id: check
run: |
sudo apt-get update
sudo apt-get install -y musl-tools
should_build=false
should_push=false
- name: Install cross tool (cross compilation)
if: matrix.use_cross == true
uses: taiki-e/install-action@v2
with:
tool: cross
# Always build on workflow_dispatch or when changes detected
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ github.event_name }}" == "push" ]] || \
[[ "${{ github.event_name }}" == "pull_request" ]]; then
should_build=true
fi
- name: Install protoc
uses: arduino/setup-protoc@v3
with:
version: "31.1"
repo-token: ${{ secrets.GITHUB_TOKEN }}
# Push only on main branch, tags, or manual trigger
if [[ "${{ github.ref }}" == "refs/heads/main" ]] || \
[[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]] || \
[[ "${{ github.event.inputs.push_images }}" == "true" ]]; then
should_push=true
fi
- name: Install flatc
uses: Nugine/setup-flatc@v1
with:
version: "25.2.10"
echo "should_build=$should_build" >> $GITHUB_OUTPUT
echo "should_push=$should_push" >> $GITHUB_OUTPUT
echo "Build: $should_build, Push: $should_push"
- name: Cache cargo dependencies
uses: actions/cache@v3
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
restore-keys: |
${{ runner.os }}-cargo-${{ matrix.target }}-
${{ runner.os }}-cargo-
- name: Generate protobuf code
run: cargo run --bin gproto
- name: Build RustFS binary (native)
if: matrix.use_cross == false
run: |
cargo build --release --target ${{ matrix.target }} --bin rustfs
- name: Build RustFS binary (cross)
if: matrix.use_cross == true
run: |
cross build --release --target ${{ matrix.target }} --bin rustfs
- name: Upload binary artifact
uses: actions/upload-artifact@v4
with:
name: rustfs-${{ matrix.arch }}
path: target/${{ matrix.target }}/release/rustfs
retention-days: 1
# Build and push multi-arch Docker images
build-images:
needs: [skip-check, build-binary]
if: needs.skip-check.outputs.should_skip != 'true'
# Build multi-arch Docker images
build-docker:
name: Build Docker Images
needs: build-check
if: needs.build-check.outputs.should_build == 'true'
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
image-type: [production, ubuntu, rockylinux, devenv]
variant:
- name: production
dockerfile: Dockerfile
platforms: linux/amd64,linux/arm64
- name: ubuntu
dockerfile: .docker/Dockerfile.ubuntu22.04
platforms: linux/amd64,linux/arm64
- name: alpine
dockerfile: .docker/Dockerfile.alpine
platforms: linux/amd64,linux/arm64
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download binary artifacts
uses: actions/download-artifact@v4
with:
path: ./artifacts
- name: Setup binary files
run: |
mkdir -p target/x86_64-unknown-linux-musl/release
mkdir -p target/aarch64-unknown-linux-gnu/release
cp artifacts/rustfs-amd64/rustfs target/x86_64-unknown-linux-musl/release/
cp artifacts/rustfs-arm64/rustfs target/aarch64-unknown-linux-gnu/release/
chmod +x target/*/release/rustfs
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
@@ -168,75 +125,86 @@ jobs:
uses: docker/setup-qemu-action@v3
- name: Login to Docker Hub
if: github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
if: needs.build-check.outputs.should_push == 'true' && secrets.DOCKERHUB_USERNAME != ''
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
if: github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))
if: needs.build-check.outputs.should_push == 'true'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Set Dockerfile and context
id: dockerfile
run: |
case "${{ matrix.image-type }}" in
production)
echo "dockerfile=Dockerfile" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=" >> $GITHUB_OUTPUT
;;
ubuntu)
echo "dockerfile=.docker/Dockerfile.ubuntu22.04" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=-ubuntu22.04" >> $GITHUB_OUTPUT
;;
rockylinux)
echo "dockerfile=.docker/Dockerfile.rockylinux9.3" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=-rockylinux9.3" >> $GITHUB_OUTPUT
;;
devenv)
echo "dockerfile=.docker/Dockerfile.devenv" >> $GITHUB_OUTPUT
echo "context=." >> $GITHUB_OUTPUT
echo "suffix=-devenv" >> $GITHUB_OUTPUT
;;
esac
- name: Extract metadata
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.REGISTRY_IMAGE_DOCKERHUB }}
${{ env.REGISTRY_IMAGE_GHCR }}
${{ env.REGISTRY_DOCKERHUB }}
${{ env.REGISTRY_GHCR }}
tags: |
type=ref,event=branch,suffix=${{ steps.dockerfile.outputs.suffix }}
type=ref,event=pr,suffix=${{ steps.dockerfile.outputs.suffix }}
type=semver,pattern={{version}},suffix=${{ steps.dockerfile.outputs.suffix }}
type=semver,pattern={{major}}.{{minor}},suffix=${{ steps.dockerfile.outputs.suffix }}
type=semver,pattern={{major}},suffix=${{ steps.dockerfile.outputs.suffix }}
type=raw,value=latest,suffix=${{ steps.dockerfile.outputs.suffix }},enable={{is_default_branch}}
type=ref,event=branch,suffix=-${{ matrix.variant.name }}
type=ref,event=pr,suffix=-${{ matrix.variant.name }}
type=semver,pattern={{version}},suffix=-${{ matrix.variant.name }}
type=semver,pattern={{major}}.{{minor}},suffix=-${{ matrix.variant.name }}
type=raw,value=latest,suffix=-${{ matrix.variant.name }},enable={{is_default_branch}}
flavor: |
latest=false
- name: Build and push multi-arch Docker image
- name: Build and push Docker image
uses: docker/build-push-action@v5
with:
context: ${{ steps.dockerfile.outputs.context }}
file: ${{ steps.dockerfile.outputs.dockerfile }}
platforms: linux/amd64,linux/arm64
push: ${{ (github.event_name != 'pull_request' && (github.ref == 'refs/heads/main' || startsWith(github.ref, 'refs/tags/'))) || github.event.inputs.push_to_registry == 'true' }}
context: .
file: ${{ matrix.variant.dockerfile }}
platforms: ${{ matrix.variant.platforms }}
push: ${{ needs.build-check.outputs.should_push == 'true' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha,scope=${{ matrix.image-type }}
cache-to: type=gha,mode=max,scope=${{ matrix.image-type }}
cache-from: type=gha,scope=docker-${{ matrix.variant.name }}
cache-to: type=gha,mode=max,scope=docker-${{ matrix.variant.name }}
build-args: |
BUILDTIME=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.created'] }}
VERSION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.version'] }}
REVISION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
# Create manifest for main production image
create-manifest:
name: Create Manifest
needs: [build-check, build-docker]
if: needs.build-check.outputs.should_push == 'true' && startsWith(github.ref, 'refs/tags/')
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
if: secrets.DOCKERHUB_USERNAME != ''
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create and push manifest
run: |
VERSION=${GITHUB_REF#refs/tags/}
# Create main image tag (without variant suffix)
if [[ -n "${{ secrets.DOCKERHUB_USERNAME }}" ]]; then
docker buildx imagetools create \
-t ${{ env.REGISTRY_DOCKERHUB }}:${VERSION} \
-t ${{ env.REGISTRY_DOCKERHUB }}:latest \
${{ env.REGISTRY_DOCKERHUB }}:${VERSION}-production
fi
docker buildx imagetools create \
-t ${{ env.REGISTRY_GHCR }}:${VERSION} \
-t ${{ env.REGISTRY_GHCR }}:latest \
${{ env.REGISTRY_GHCR }}:${VERSION}-production
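The `should_build`/`should_push` gating in the build-check step above can be approximated as a standalone shell function for local reasoning; the event names, refs, and the `push_images` input below mirror the workflow's `github.*` expressions and are assumptions about how the sketch is invoked, not part of the workflow itself.

```shell
#!/bin/sh
# Local sketch of the build-check gating logic (not the workflow itself).
# Arguments mirror github.event_name, github.ref, and the push_images input.
decide() {
  event=$1 ref=$2 push_input=$3
  should_build=false should_push=false
  # Build on manual dispatch, pushes, and pull requests
  case "$event" in
    workflow_dispatch|push|pull_request) should_build=true ;;
  esac
  # Push only for main, tags, or an explicit push_images=true input
  case "$ref" in
    refs/heads/main|refs/tags/*) should_push=true ;;
  esac
  [ "$push_input" = "true" ] && should_push=true
  echo "$should_build $should_push"
}

decide pull_request refs/pull/42/merge false   # builds, does not push
decide push refs/tags/v1.0.0 false             # builds and pushes
```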

.github/workflows/issue-translator.yml

@@ -0,0 +1,18 @@
name: 'issue-translator'
on:
issue_comment:
types: [created]
issues:
types: [opened]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: usthe/issues-translate-action@v2.7
with:
IS_MODIFY_TITLE: false
# not required, default false. Decide whether to modify the issue title
# if true, the robot account @Issues-translate-bot must have modification permissions, invite @Issues-translate-bot to your project or use your custom bot.
CUSTOM_BOT_NOTE: Bot detected that the issue body's language is not English; it has been translated automatically.
# not required. Customize the translation bot's prefix message.

.github/workflows/performance.yml

@@ -0,0 +1,140 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: Performance Testing
on:
push:
branches: [main]
paths:
- '**/*.rs'
- '**/Cargo.toml'
- '**/Cargo.lock'
- '.github/workflows/performance.yml'
workflow_dispatch:
inputs:
profile_duration:
description: "Profiling duration in seconds"
required: false
default: "120"
type: string
env:
CARGO_TERM_COLOR: always
RUST_BACKTRACE: 1
jobs:
performance-profile:
name: Performance Profiling
runs-on: ubuntu-latest
timeout-minutes: 30
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: nightly
cache-shared-key: perf-${{ hashFiles('**/Cargo.lock') }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install additional nightly components
run: rustup component add llvm-tools-preview
- name: Install samply profiler
uses: taiki-e/cache-cargo-install-action@v2
with:
tool: samply
- name: Configure kernel for profiling
run: echo '1' | sudo tee /proc/sys/kernel/perf_event_paranoid
- name: Prepare test environment
run: |
# Create test volumes
for i in {0..4}; do
mkdir -p ./target/volume/test$i
done
# Set environment variables
echo "RUSTFS_VOLUMES=./target/volume/test{0...4}" >> $GITHUB_ENV
echo "RUST_LOG=rustfs=info,ecstore=info,s3s=info,iam=info,rustfs-obs=info" >> $GITHUB_ENV
- name: Download static files
run: |
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
-o tempfile.zip --retry 3 --retry-delay 5
unzip -o tempfile.zip -d ./rustfs/static
rm tempfile.zip
- name: Build with profiling optimizations
run: |
RUSTFLAGS="-C force-frame-pointers=yes -C debug-assertions=off" \
cargo +nightly build --profile profiling -p rustfs --bins
- name: Run performance profiling
id: profiling
run: |
DURATION="${{ github.event.inputs.profile_duration || '120' }}"
echo "Running profiling for ${DURATION} seconds..."
timeout "${DURATION}s" samply record \
--output samply-profile.json \
./target/profiling/rustfs ${RUSTFS_VOLUMES} || true
if [ -f "samply-profile.json" ]; then
echo "profile_generated=true" >> $GITHUB_OUTPUT
echo "Profile generated successfully"
else
echo "profile_generated=false" >> $GITHUB_OUTPUT
echo "::warning::Profile data not generated"
fi
- name: Upload profile data
if: steps.profiling.outputs.profile_generated == 'true'
uses: actions/upload-artifact@v4
with:
name: performance-profile-${{ github.run_number }}
path: samply-profile.json
retention-days: 30
benchmark:
name: Benchmark Tests
runs-on: ubuntu-latest
timeout-minutes: 45
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: stable
cache-shared-key: bench-${{ hashFiles('**/Cargo.lock') }}
github-token: ${{ secrets.GITHUB_TOKEN }}
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Run benchmarks
run: |
cargo bench --package ecstore --bench comparison_benchmark -- --output-format json | \
tee benchmark-results.json
- name: Upload benchmark results
uses: actions/upload-artifact@v4
with:
name: benchmark-results-${{ github.run_number }}
path: benchmark-results.json
retention-days: 7
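The profiling step above relies on `timeout "${DURATION}s" … || true`: `timeout` exits with code 124 when it kills the command, which would otherwise fail the step even though a usable profile was written. A minimal sketch of that pattern, with placeholder commands:

```shell
#!/bin/sh
# Sketch of the deadline pattern from the profiling step: swallow the
# kill-induced exit status and let a later artifact check decide success.
run_with_deadline() {
  timeout "$1" sh -c "$2" || true   # timeout exits 124 on a deadline kill
}

run_with_deadline 1s 'sleep 5; echo never-printed'  # killed at the deadline
run_with_deadline 5s 'echo done'                    # finishes normally
```

This is why the workflow checks for `samply-profile.json` afterwards instead of trusting the command's exit status.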


@@ -0,0 +1,78 @@
## RustFS ${VERSION_CLEAN}
${ORIGINAL_NOTES}
---
### 🚀 Quick Download
**Linux (Static Binaries - No Dependencies):**
```bash
# x86_64 (Intel/AMD)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-x86_64-unknown-linux-musl.zip
unzip rustfs-x86_64-unknown-linux-musl.zip
sudo mv rustfs /usr/local/bin/
# ARM64 (Graviton, Apple Silicon VMs)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-aarch64-unknown-linux-musl.zip
unzip rustfs-aarch64-unknown-linux-musl.zip
sudo mv rustfs /usr/local/bin/
```
**macOS:**
```bash
# Apple Silicon (M1/M2/M3)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-aarch64-apple-darwin.zip
unzip rustfs-aarch64-apple-darwin.zip
sudo mv rustfs /usr/local/bin/
# Intel
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-x86_64-apple-darwin.zip
unzip rustfs-x86_64-apple-darwin.zip
sudo mv rustfs /usr/local/bin/
```
### 📁 Available Downloads
| Platform | Architecture | File | Description |
|----------|-------------|------|-------------|
| Linux | x86_64 | `rustfs-x86_64-unknown-linux-musl.zip` | Static binary, no dependencies |
| Linux | ARM64 | `rustfs-aarch64-unknown-linux-musl.zip` | Static binary, no dependencies |
| macOS | Apple Silicon | `rustfs-aarch64-apple-darwin.zip` | Native binary, ZIP archive |
| macOS | Intel | `rustfs-x86_64-apple-darwin.zip` | Native binary, ZIP archive |
### 🔐 Verification
Download checksums and verify your download:
```bash
# Download checksums
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/SHA256SUMS
# Verify (Linux)
sha256sum -c SHA256SUMS --ignore-missing
# Verify (macOS)
shasum -a 256 -c SHA256SUMS --ignore-missing
```
### 🛠️ System Requirements
- **Linux**: Any distribution with glibc 2.17+ (CentOS 7+, Ubuntu 16.04+)
- **macOS**: 10.15+ (Catalina or later)
- **Windows**: Windows 10 version 1809 or later
### 📚 Documentation
- [Installation Guide](https://github.com/rustfs/rustfs#installation)
- [Quick Start](https://github.com/rustfs/rustfs#quick-start)
- [Configuration](https://github.com/rustfs/rustfs/blob/main/docs/)
- [API Documentation](https://docs.rs/rustfs)
### 🆘 Support
- 🐛 [Report Issues](https://github.com/rustfs/rustfs/issues)
- 💬 [Community Discussions](https://github.com/rustfs/rustfs/discussions)
- 📖 [Documentation](https://github.com/rustfs/rustfs/tree/main/docs)
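The verification commands in the template can be wrapped into a small helper that checks a single downloaded archive against `SHA256SUMS`; the file names below are placeholders, and the Linux `sha256sum` form is shown (macOS would use `shasum -a 256` as in the notes above).

```shell
#!/bin/sh
# Hypothetical helper around the verification commands in the template:
# verify one downloaded archive against a SHA256SUMS file.
verify_download() {
  archive=$1 sums=$2
  # Keep only the checksum line for this archive, then recompute and compare
  grep " $archive\$" "$sums" | sha256sum -c -
}

printf 'demo payload\n' > rustfs-demo.zip   # stand-in for a real download
sha256sum rustfs-demo.zip > SHA256SUMS
verify_download rustfs-demo.zip SHA256SUMS  # prints "rustfs-demo.zip: OK"
```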


@@ -1,82 +0,0 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: Profile with Samply
on:
push:
branches: [ main ]
workflow_dispatch:
jobs:
profile:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4.2.2
- uses: dtolnay/rust-toolchain@nightly
with:
components: llvm-tools-preview
- uses: actions/cache@v4
with:
path: |
~/.cargo/registry
~/.cargo/git
target
key: ${{ runner.os }}-cargo-${{ hashFiles('**/Cargo.lock') }}
- name: Install samply
uses: taiki-e/cache-cargo-install-action@v2
with:
tool: samply
- name: Configure kernel for profiling
run: echo '1' | sudo tee /proc/sys/kernel/perf_event_paranoid
- name: Create test volumes
run: |
for i in {0..4}; do
mkdir -p ./target/volume/test$i
done
- name: Set environment variables
run: |
echo "RUSTFS_VOLUMES=./target/volume/test{0...4}" >> $GITHUB_ENV
echo "RUST_LOG=rustfs=info,ecstore=info,s3s=info,iam=info,rustfs-obs=info" >> $GITHUB_ENV
- name: Download static files
run: |
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" -o tempfile.zip && unzip -o tempfile.zip -d ./rustfs/static && rm tempfile.zip
- name: Build with profiling
run: |
RUSTFLAGS="-C force-frame-pointers=yes" cargo +nightly build --profile profiling -p rustfs --bins
- name: Run samply with timeout
id: samply_record
run: |
timeout 120s samply record --output samply.json ./target/profiling/rustfs ${RUSTFS_VOLUMES}
if [ -f "samply.json" ]; then
echo "profile_generated=true" >> $GITHUB_OUTPUT
else
echo "profile_generated=false" >> $GITHUB_OUTPUT
echo "::error::Failed to generate profile data"
fi
- name: Upload profile data
if: steps.samply_record.outputs.profile_generated == 'true'
uses: actions/upload-artifact@v4
with:
name: samply-profile-${{ github.run_number }}
path: samply.json
retention-days: 7

CLA.md

@@ -0,0 +1,39 @@
RustFS Individual Contributor License Agreement
Thank you for your interest in contributing documentation and related software code to a project hosted or managed by RustFS. In order to clarify the intellectual property license granted with Contributions from any person or entity, RustFS must have a Contributor License Agreement (“CLA”) on file that has been signed by each Contributor, indicating agreement to the license terms below. This version of the Contributor License Agreement allows an individual to submit Contributions to the applicable project. If you are making a submission on behalf of a legal entity, then you should sign the separate Corporate Contributor License Agreement.
You accept and agree to the following terms and conditions for Your present and future Contributions submitted to RustFS. You hereby irrevocably assign and transfer to RustFS all right, title, and interest in and to Your Contributions, including all copyrights and other intellectual property rights therein.
Definitions
“You” (or “Your”) shall mean the copyright owner or legal entity authorized by the copyright owner that is making this Agreement with RustFS. For legal entities, the entity making a Contribution and all other entities that control, are controlled by, or are under common control with that entity are considered to be a single Contributor. For the purposes of this definition, “control” means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.
“Contribution” shall mean any original work of authorship, including any modifications or additions to an existing work, that is intentionally submitted by You to RustFS for inclusion in, or documentation of, any of the products or projects owned or managed by RustFS (the “Work”), including without limitation any Work described in Schedule A. For the purposes of this definition, “submitted” means any form of electronic or written communication sent to RustFS or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, RustFS for the purpose of discussing and improving the Work.
Assignment of Copyright
Subject to the terms and conditions of this Agreement, You hereby irrevocably assign and transfer to RustFS all right, title, and interest in and to Your Contributions, including all copyrights and other intellectual property rights therein, for the entire term of such rights, including all renewals and extensions. You agree to execute all documents and take all actions as may be reasonably necessary to vest in RustFS the ownership of Your Contributions and to assist RustFS in perfecting, maintaining, and enforcing its rights in Your Contributions.
Grant of Patent License
Subject to the terms and conditions of this Agreement, You hereby grant to RustFS and to recipients of documentation and software distributed by RustFS a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by You that are necessarily infringed by Your Contribution(s) alone or by combination of Your Contribution(s) with the Work to which such Contribution(s) was submitted. If any entity institutes patent litigation against You or any other entity (including a cross-claim or counterclaim in a lawsuit) alleging that your Contribution, or the Work to which you have contributed, constitutes direct or contributory patent infringement, then any patent licenses granted to that entity under this Agreement for that Contribution or Work shall terminate as of the date such litigation is filed.
You represent that you are legally entitled to grant the above assignment and license.
You represent that each of Your Contributions is Your original creation (see section 7 for submissions on behalf of others). You represent that Your Contribution submissions include complete details of any third-party license or other restriction (including, but not limited to, related patents and trademarks) of which you are personally aware and which are associated with any part of Your Contributions.
You are not expected to provide support for Your Contributions, except to the extent You desire to provide support. You may provide support for free, for a fee, or not at all. Unless required by applicable law or agreed to in writing, You provide Your Contributions on an “AS IS” BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE.
Should You wish to submit work that is not Your original creation, You may submit it to RustFS separately from any Contribution, identifying the complete details of its source and of any license or other restriction (including, but not limited to, related patents, trademarks, and license agreements) of which you are personally aware, and conspicuously marking the work as “Submitted on behalf of a third-party: [named here]”.
You agree to notify RustFS of any facts or circumstances of which you become aware that would make these representations inaccurate in any respect.
Modification of CLA
RustFS reserves the right to update or modify this CLA in the future. Any updates or modifications to this CLA shall apply only to Contributions made after the effective date of the revised CLA. Contributions made prior to the update shall remain governed by the version of the CLA that was in effect at the time of submission. It is not necessary for all Contributors to re-sign the CLA when the CLA is updated or modified.
Governing Law and Dispute Resolution
This Agreement will be governed by and construed in accordance with the laws of the People’s Republic of China, excluding that body of laws known as conflict of laws. The parties expressly agree that the United Nations Convention on Contracts for the International Sale of Goods will not apply. Any legal action or proceeding arising under this Agreement will be brought exclusively in the courts located in Beijing, China, and the parties hereby irrevocably consent to the personal jurisdiction and venue therein.
For your reading convenience, this Agreement is written in parallel English and Chinese sections. To the extent there is a conflict between the English and Chinese sections, the English sections shall govern.

CODE_OF_CONDUCT.md

@@ -0,0 +1,128 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
hello@rustfs.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series
of actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or
permanent ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within
the community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.
Community Impact Guidelines were inspired by [Mozilla's code of conduct
enforcement ladder](https://github.com/mozilla/diversity).
[homepage]: https://www.contributor-covenant.org
For answers to common questions about this code of conduct, see the FAQ at
https://www.contributor-covenant.org/faq. Translations are available at
https://www.contributor-covenant.org/translations.


@@ -11,21 +11,25 @@
Before every commit, you **MUST**:
1. **Format your code**:
```bash
cargo fmt --all
```
2. **Verify formatting**:
```bash
cargo fmt --all --check
```
3. **Pass clippy checks**:
```bash
cargo clippy --all-targets --all-features -- -D warnings
```
4. **Ensure compilation**:
```bash
cargo check --all-targets
```
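The four mandatory checks above can also run automatically before each commit via a local git pre-commit hook. A minimal sketch using only the commands already listed (the hook path is standard git; adapt as needed):

```bash
#!/bin/sh
# .git/hooks/pre-commit — abort the commit if any mandatory check fails.
# Install with: chmod +x .git/hooks/pre-commit
set -e

cargo fmt --all --check                                    # formatting must be clean
cargo clippy --all-targets --all-features -- -D warnings   # no clippy warnings allowed
cargo check --all-targets                                  # everything must compile
```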
@@ -136,6 +140,7 @@ Install the `rust-analyzer` extension and add to your `settings.json`:
#### Other IDEs
Configure your IDE to:
- Use the project's `rustfmt.toml` configuration
- Format on save
- Run clippy checks

Cargo.lock (generated)

@@ -472,9 +472,9 @@ dependencies = [
[[package]]
name = "async-channel"
version = "2.3.1"
version = "2.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "89b47800b0be77592da0afd425cc03468052844aff33b84e33cc696f64e77b6a"
checksum = "16c74e56284d2188cabb6ad99603d1ace887a5d7e7b695d01b728155ed9ed427"
dependencies = [
"concurrent-queue",
"event-listener-strategy",
@@ -733,9 +733,9 @@ dependencies = [
[[package]]
name = "aws-sdk-s3"
version = "1.95.0"
version = "1.96.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a316e3c4c38837084dfbf87c0fc6ea016b3dc3e1f867d9d7f5eddfe47e5cae37"
checksum = "6e25d24de44b34dcdd5182ac4e4c6f07bcec2661c505acef94c0d293b65505fe"
dependencies = [
"aws-credential-types",
"aws-runtime",
@@ -1171,7 +1171,7 @@ dependencies = [
"bitflags 2.9.1",
"cexpr",
"clang-sys",
"itertools 0.12.1",
"itertools 0.11.0",
"lazy_static",
"lazycell",
"log",
@@ -2058,7 +2058,6 @@ dependencies = [
"ciborium",
"clap",
"criterion-plot",
"futures",
"is-terminal",
"itertools 0.10.5",
"num-traits",
@@ -2071,7 +2070,6 @@ dependencies = [
"serde_derive",
"serde_json",
"tinytemplate",
"tokio",
"walkdir",
]
@@ -3471,7 +3469,7 @@ checksum = "92773504d58c093f6de2459af4af33faa518c13451eb8f2b5698ed3d36e7c813"
[[package]]
name = "e2e_test"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"bytes",
"flatbuffers 25.2.10",
@@ -4948,6 +4946,17 @@ version = "3.0.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8bb03732005da905c88227371639bf1ad885cc712789c011c31c5fb3ab3ccf02"
[[package]]
name = "io-uring"
version = "0.7.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b86e202f00093dcba4275d4636b93ef9dd75d025ae560d2521b45ea28ab49013"
dependencies = [
"bitflags 2.9.1",
"cfg-if",
"libc",
]
[[package]]
name = "ipnet"
version = "2.11.0"
@@ -5014,15 +5023,6 @@ dependencies = [
"either",
]
[[package]]
name = "itertools"
version = "0.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ba291022dbbd398a455acf126c1e341954079855bc60dfdda641363bd6922569"
dependencies = [
"either",
]
[[package]]
name = "itertools"
version = "0.13.0"
@@ -5332,7 +5332,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "07033963ba89ebaf1584d767badaa2e8fcec21aedea6b8c0346d487d49c28667"
dependencies = [
"cfg-if",
"windows-targets 0.53.0",
"windows-targets 0.52.6",
]
[[package]]
@@ -5625,9 +5625,9 @@ checksum = "490cc448043f947bae3cbee9c203358d62dbee0db12107a74be5c30ccfd09771"
[[package]]
name = "memchr"
version = "2.7.4"
version = "2.7.5"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "78ca9ab1a0babb1e7d5695e3530886289c18cf2f87ec19a575a0abdce112e3a3"
checksum = "32a282da65faaf38286cf3be983213fcf1d2e2a58700e808f83f4ea9a4804bc0"
[[package]]
name = "memoffset"
@@ -7830,7 +7830,7 @@ dependencies = [
[[package]]
name = "rustfs"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-trait",
"atoi",
@@ -7899,7 +7899,7 @@ dependencies = [
[[package]]
name = "rustfs-appauth"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"base64-simd",
"rsa",
@@ -7909,7 +7909,7 @@ dependencies = [
[[package]]
name = "rustfs-common"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"lazy_static",
"tokio",
@@ -7918,7 +7918,7 @@ dependencies = [
[[package]]
name = "rustfs-config"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"const-str",
"serde",
@@ -7927,7 +7927,7 @@ dependencies = [
[[package]]
name = "rustfs-crypto"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"aes-gcm",
"argon2",
@@ -7945,7 +7945,7 @@ dependencies = [
[[package]]
name = "rustfs-ecstore"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-channel",
"async-trait",
@@ -8020,7 +8020,7 @@ dependencies = [
[[package]]
name = "rustfs-filemeta"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"byteorder",
"bytes",
@@ -8041,7 +8041,7 @@ dependencies = [
[[package]]
name = "rustfs-gui"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"chrono",
"dioxus",
@@ -8062,7 +8062,7 @@ dependencies = [
[[package]]
name = "rustfs-iam"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"arc-swap",
"async-trait",
@@ -8086,7 +8086,7 @@ dependencies = [
[[package]]
name = "rustfs-lock"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-trait",
"lazy_static",
@@ -8103,7 +8103,7 @@ dependencies = [
[[package]]
name = "rustfs-madmin"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"chrono",
"humantime",
@@ -8115,7 +8115,7 @@ dependencies = [
[[package]]
name = "rustfs-notify"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-trait",
"axum",
@@ -8144,7 +8144,7 @@ dependencies = [
[[package]]
name = "rustfs-obs"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-trait",
"chrono",
@@ -8177,7 +8177,7 @@ dependencies = [
[[package]]
name = "rustfs-policy"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"base64-simd",
"ipnetwork",
@@ -8196,7 +8196,7 @@ dependencies = [
[[package]]
name = "rustfs-protos"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"flatbuffers 25.2.10",
"prost",
@@ -8207,12 +8207,11 @@ dependencies = [
[[package]]
name = "rustfs-rio"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"aes-gcm",
"bytes",
"crc32fast",
"criterion",
"futures",
"http 1.3.1",
"md-5",
@@ -8256,7 +8255,7 @@ dependencies = [
[[package]]
name = "rustfs-s3select-api"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-trait",
"bytes",
@@ -8280,7 +8279,7 @@ dependencies = [
[[package]]
name = "rustfs-s3select-query"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-recursion",
"async-trait",
@@ -8298,21 +8297,25 @@ dependencies = [
[[package]]
name = "rustfs-signer"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"bytes",
"http 1.3.1",
"hyper 1.6.0",
"lazy_static",
"rand 0.9.1",
"rustfs-utils",
"s3s",
"serde",
"serde_urlencoded",
"tempfile",
"time",
"tracing",
]
[[package]]
name = "rustfs-utils"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"base64-simd",
"blake3",
@@ -8356,7 +8359,7 @@ dependencies = [
[[package]]
name = "rustfs-workers"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"tokio",
"tracing",
@@ -8364,7 +8367,7 @@ dependencies = [
[[package]]
name = "rustfs-zip"
version = "0.0.1"
version = "0.0.5"
dependencies = [
"async-compression",
"tokio",
@@ -8552,8 +8555,9 @@ checksum = "28d3b2b1366ec20994f1fd18c3c594f05c5dd4bc44d8bb0c1c632c8d6829481f"
[[package]]
name = "s3s"
version = "0.12.0-dev"
source = "git+https://github.com/Nugine/s3s.git?rev=4733cdfb27b2713e832967232cbff413bb768c10#4733cdfb27b2713e832967232cbff413bb768c10"
version = "0.12.0-minio-preview.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "5b630a6b9051328a0c185cacf723180ccd7936d08f1fda0b932a60b1b9cd860d"
dependencies = [
"arrayvec",
"async-trait",
@@ -9834,17 +9838,19 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
[[package]]
name = "tokio"
version = "1.45.1"
version = "1.46.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "75ef51a33ef1da925cea3e4eb122833cb377c61439ca401b770f54902b806779"
checksum = "0cc3a2344dafbe23a245241fe8b09735b521110d30fcefbbd5feb1797ca35d17"
dependencies = [
"backtrace",
"bytes",
"io-uring",
"libc",
"mio",
"parking_lot",
"pin-project-lite",
"signal-hook-registry",
"slab",
"socket2",
"tokio-macros",
"tracing",
@@ -10085,6 +10091,7 @@ dependencies = [
"futures-util",
"http 1.3.1",
"http-body 1.0.1",
"http-body-util",
"iri-string",
"pin-project-lite",
"tokio",
@@ -11113,29 +11120,13 @@ dependencies = [
"windows_aarch64_gnullvm 0.52.6",
"windows_aarch64_msvc 0.52.6",
"windows_i686_gnu 0.52.6",
"windows_i686_gnullvm 0.52.6",
"windows_i686_gnullvm",
"windows_i686_msvc 0.52.6",
"windows_x86_64_gnu 0.52.6",
"windows_x86_64_gnullvm 0.52.6",
"windows_x86_64_msvc 0.52.6",
]
[[package]]
name = "windows-targets"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1e4c7e8ceaaf9cb7d7507c974735728ab453b67ef8f18febdd7c11fe59dca8b"
dependencies = [
"windows_aarch64_gnullvm 0.53.0",
"windows_aarch64_msvc 0.53.0",
"windows_i686_gnu 0.53.0",
"windows_i686_gnullvm 0.53.0",
"windows_i686_msvc 0.53.0",
"windows_x86_64_gnu 0.53.0",
"windows_x86_64_gnullvm 0.53.0",
"windows_x86_64_msvc 0.53.0",
]
[[package]]
name = "windows-threading"
version = "0.1.0"
@@ -11172,12 +11163,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "32a4622180e7a0ec044bb555404c800bc9fd9ec262ec147edd5989ccd0c02cd3"
[[package]]
name = "windows_aarch64_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86b8d5f90ddd19cb4a147a5fa63ca848db3df085e25fee3cc10b39b6eebae764"
[[package]]
name = "windows_aarch64_msvc"
version = "0.42.2"
@@ -11196,12 +11181,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "09ec2a7bb152e2252b53fa7803150007879548bc709c039df7627cabbd05d469"
[[package]]
name = "windows_aarch64_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7651a1f62a11b8cbd5e0d42526e55f2c99886c77e007179efff86c2b137e66c"
[[package]]
name = "windows_i686_gnu"
version = "0.42.2"
@@ -11220,24 +11199,12 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8e9b5ad5ab802e97eb8e295ac6720e509ee4c243f69d781394014ebfe8bbfa0b"
[[package]]
name = "windows_i686_gnu"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c1dc67659d35f387f5f6c479dc4e28f1d4bb90ddd1a5d3da2e5d97b42d6272c3"
[[package]]
name = "windows_i686_gnullvm"
version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0eee52d38c090b3caa76c563b86c3a4bd71ef1a819287c19d586d7334ae8ed66"
[[package]]
name = "windows_i686_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ce6ccbdedbf6d6354471319e781c0dfef054c81fbc7cf83f338a4296c0cae11"
[[package]]
name = "windows_i686_msvc"
version = "0.42.2"
@@ -11256,12 +11223,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "240948bc05c5e7c6dabba28bf89d89ffce3e303022809e73deaefe4f6ec56c66"
[[package]]
name = "windows_i686_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "581fee95406bb13382d2f65cd4a908ca7b1e4c2f1917f143ba16efe98a589b5d"
[[package]]
name = "windows_x86_64_gnu"
version = "0.42.2"
@@ -11280,12 +11241,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "147a5c80aabfbf0c7d901cb5895d1de30ef2907eb21fbbab29ca94c5b08b1a78"
[[package]]
name = "windows_x86_64_gnu"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2e55b5ac9ea33f2fc1716d1742db15574fd6fc8dadc51caab1c16a3d3b4190ba"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.42.2"
@@ -11304,12 +11259,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24d5b23dc417412679681396f2b49f3de8c1473deb516bd34410872eff51ed0d"
[[package]]
name = "windows_x86_64_gnullvm"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0a6e035dd0599267ce1ee132e51c27dd29437f63325753051e71dd9e42406c57"
[[package]]
name = "windows_x86_64_msvc"
version = "0.42.2"
@@ -11328,12 +11277,6 @@ version = "0.52.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "589f6da84c646204747d1270a2a5661ea66ed1cced2631d546fdfb155959f9ec"
[[package]]
name = "windows_x86_64_msvc"
version = "0.53.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "271414315aff87387382ec3d271b52d7ae78726f5d44ac98b4f4030c91880486"
[[package]]
name = "winnow"
version = "0.5.40"


@@ -44,7 +44,11 @@ edition = "2024"
license = "Apache-2.0"
repository = "https://github.com/rustfs/rustfs"
rust-version = "1.85"
version = "0.0.1"
version = "0.0.5"
homepage = "https://rustfs.com"
description = "RustFS is a high-performance distributed object storage software built using Rust, one of the most popular languages worldwide. "
keywords = ["RustFS", "Minio", "object-storage", "filesystem", "s3"]
categories = ["web-programming", "development-tools", "filesystem", "network-programming"]
[workspace.lints.rust]
unsafe_code = "deny"
@@ -52,38 +56,43 @@ unsafe_code = "deny"
[workspace.lints.clippy]
all = "warn"
[patch.crates-io]
rustfs-utils = { path = "crates/utils" }
rustfs-filemeta = { path = "crates/filemeta" }
rustfs-rio = { path = "crates/rio" }
[workspace.dependencies]
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.1" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.1" }
rustfs-common = { path = "crates/common", version = "0.0.1" }
rustfs-crypto = { path = "crates/crypto", version = "0.0.1" }
rustfs-ecstore = { path = "crates/ecstore", version = "0.0.1" }
rustfs-iam = { path = "crates/iam", version = "0.0.1" }
rustfs-lock = { path = "crates/lock", version = "0.0.1" }
rustfs-madmin = { path = "crates/madmin", version = "0.0.1" }
rustfs-policy = { path = "crates/policy", version = "0.0.1" }
rustfs-protos = { path = "crates/protos", version = "0.0.1" }
rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.1" }
rustfs = { path = "./rustfs", version = "0.0.1" }
rustfs-zip = { path = "./crates/zip", version = "0.0.1" }
rustfs-config = { path = "./crates/config", version = "0.0.1" }
rustfs-obs = { path = "crates/obs", version = "0.0.1" }
rustfs-notify = { path = "crates/notify", version = "0.0.1" }
rustfs-utils = { path = "crates/utils", version = "0.0.1" }
rustfs-rio = { path = "crates/rio", version = "0.0.1" }
rustfs-filemeta = { path = "crates/filemeta", version = "0.0.1" }
rustfs-signer = { path = "crates/signer", version = "0.0.1" }
rustfs-workers = { path = "crates/workers", version = "0.0.1" }
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.5" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.5" }
rustfs-common = { path = "crates/common", version = "0.0.5" }
rustfs-crypto = { path = "crates/crypto", version = "0.0.5" }
rustfs-ecstore = { path = "crates/ecstore", version = "0.0.5" }
rustfs-iam = { path = "crates/iam", version = "0.0.5" }
rustfs-lock = { path = "crates/lock", version = "0.0.5" }
rustfs-madmin = { path = "crates/madmin", version = "0.0.5" }
rustfs-policy = { path = "crates/policy", version = "0.0.5" }
rustfs-protos = { path = "crates/protos", version = "0.0.5" }
rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.5" }
rustfs = { path = "./rustfs", version = "0.0.5" }
rustfs-zip = { path = "./crates/zip", version = "0.0.5" }
rustfs-config = { path = "./crates/config", version = "0.0.5" }
rustfs-obs = { path = "crates/obs", version = "0.0.5" }
rustfs-notify = { path = "crates/notify", version = "0.0.5" }
rustfs-utils = { path = "crates/utils", version = "0.0.5" }
rustfs-rio = { path = "crates/rio", version = "0.0.5" }
rustfs-filemeta = { path = "crates/filemeta", version = "0.0.5" }
rustfs-signer = { path = "crates/signer", version = "0.0.5" }
rustfs-workers = { path = "crates/workers", version = "0.0.5" }
aes-gcm = { version = "0.10.3", features = ["std"] }
arc-swap = "1.7.1"
argon2 = { version = "0.5.3", features = ["std"] }
atoi = "2.0.0"
async-channel = "2.3.1"
async-channel = "2.4.0"
async-recursion = "1.1.1"
async-trait = "0.1.88"
async-compression = { version = "0.4.0" }
atomic_enum = "0.3.0"
aws-sdk-s3 = "1.95.0"
aws-sdk-s3 = "1.96.0"
axum = "0.8.4"
axum-extra = "0.10.1"
axum-server = { version = "0.7.2", features = ["tls-rustls"] }
@@ -107,7 +116,7 @@ dioxus = { version = "0.6.3", features = ["router"] }
dirs = "6.0.0"
enumset = "1.1.6"
flatbuffers = "25.2.10"
flate2 = "1.1.1"
flate2 = "1.1.2"
flexi_logger = { version = "0.31.2", features = ["trc", "dont_minimize_extra_stacks"] }
form_urlencoded = "1.2.1"
futures = "0.3.31"
@@ -124,7 +133,7 @@ hyper-util = { version = "0.1.14", features = [
"server-auto",
"server-graceful",
] }
hyper-rustls = "0.27.5"
hyper-rustls = "0.27.7"
http = "1.3.1"
http-body = "1.0.1"
humantime = "2.2.0"
@@ -171,7 +180,6 @@ pbkdf2 = "0.12.2"
percent-encoding = "2.3.1"
pin-project-lite = "0.2.16"
prost = "0.13.5"
prost-build = "0.13.5"
quick-xml = "0.37.5"
rand = "0.9.1"
rdkafka = { version = "0.37.0", features = ["tokio"] }
@@ -195,12 +203,12 @@ rmp-serde = "1.3.0"
rsa = "0.9.8"
rumqttc = { version = "0.24" }
rust-embed = { version = "8.7.2" }
rust-i18n = { version = "3.1.4" }
rust-i18n = { version = "3.1.5" }
rustfs-rsc = "2025.506.1"
rustls = { version = "0.23.28" }
rustls-pki-types = "1.12.0"
rustls-pemfile = "2.2.0"
s3s = { git = "https://github.com/Nugine/s3s.git", rev = "4733cdfb27b2713e832967232cbff413bb768c10" }
s3s = { version = "0.12.0-minio-preview.1" }
shadow-rs = { version = "1.2.0", default-features = false }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.140", features = ["raw_value"] }
@@ -225,7 +233,7 @@ time = { version = "0.3.41", features = [
"macros",
"serde",
] }
tokio = { version = "1.45.1", features = ["fs", "rt-multi-thread"] }
tokio = { version = "1.46.1", features = ["fs", "rt-multi-thread"] }
tokio-rustls = { version = "0.26.2", default-features = false }
tokio-stream = { version = "0.1.17" }
tokio-tar = "0.3.1"
@@ -251,7 +259,7 @@ uuid = { version = "1.17.0", features = [
wildmatch = { version = "2.4.0", features = ["serde"] }
winapi = { version = "0.3.9" }
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
zip = "2.2.0"
zip = "2.4.2"
zstd = "0.13.3"
[profile.wasm-dev]


@@ -12,36 +12,39 @@
# See the License for the specific language governing permissions and
# limitations under the License.
FROM alpine:latest
FROM alpine:3.18 AS builder
# Install runtime dependencies
RUN apk add --no-cache \
RUN apk add -U --no-cache \
ca-certificates \
tzdata \
&& rm -rf /var/cache/apk/*
curl \
bash \
unzip
# Create rustfs user and group
RUN addgroup -g 1000 rustfs && \
adduser -D -s /bin/sh -u 1000 -G rustfs rustfs
# Create data directories
RUN mkdir -p /data/rustfs && \
chown -R rustfs:rustfs /data
RUN curl -Lo /tmp/rustfs.zip https://dl.rustfs.com/artifacts/rustfs/rustfs-x86_64-unknown-linux-musl.zip && \
unzip -o /tmp/rustfs.zip -d /tmp && \
mv /tmp/rustfs /rustfs && \
chmod +x /rustfs && \
rm -rf /tmp/*
# Copy binary based on target architecture
COPY --chown=rustfs:rustfs \
target/*/release/rustfs \
/usr/local/bin/rustfs
FROM alpine:3.18
RUN chmod +x /usr/local/bin/rustfs
RUN apk add -U --no-cache \
ca-certificates \
bash
# Switch to non-root user
USER rustfs
COPY --from=builder /rustfs /usr/local/bin/rustfs
ENV RUSTFS_ACCESS_KEY=rustfsadmin \
RUSTFS_SECRET_KEY=rustfsadmin \
RUSTFS_ADDRESS=":9000" \
RUSTFS_CONSOLE_ADDRESS=":9001" \
RUSTFS_CONSOLE_ENABLE=true \
RUST_LOG=warn
# Expose ports
EXPOSE 9000 9001
RUN mkdir -p /data
VOLUME /data
# Set default command
CMD ["rustfs", "/data"]


@@ -1,14 +1,14 @@
[![RustFS](https://github.com/user-attachments/assets/547d72f7-d1f4-4763-b9a8-6040bad9251a)](https://rustfs.com)
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
<p align="center">RustFS is a high-performance distributed object storage software built using Rust</p>
<p align="center">
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<img alt="Github Contributors" src="https://img.shields.io/github/contributors/rustfs/rustfs"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/rustfs/rustfs"/>
<img alt="Discord" src="https://img.shields.io/discord/1107178041848909847?label=discord"/>
</p>
<p align="center">
@@ -19,11 +19,22 @@
</p>
<p align="center">
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a>
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a> |
<!-- Keep these links. Translations will automatically update with the README. -->
<a href="https://readme-i18n.com/rustfs/rustfs?lang=de">Deutsch</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=es">Español</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=fr">français</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ja">日本語</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ko">한국어</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=pt">Português</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ru">Русский</a>
</p>
RustFS is a high-performance distributed object storage system built with Rust, one of the most popular programming languages worldwide. Like MinIO, it is simple, S3-compatible, and open source, with support for data lakes, AI, and big data workloads. It also ships under a friendlier open-source license than many other storage systems, being built under the Apache license. With Rust as its foundation, RustFS delivers faster speeds and safer distributed features for high-performance object storage.
> ⚠️ **RustFS is under rapid development. Do NOT use in production environments!**
## Features
- **High Performance**: Built with Rust, ensuring speed and efficiency.
@@ -63,14 +74,20 @@ Stress test server parameters
To get started with RustFS, follow these steps:
1. **Install RustFS**: Download the latest release from our [GitHub Releases](https://github.com/rustfs/rustfs/releases).
2. **Run RustFS**: Use the provided binary to start the server.
1. **One-click installation script (Option 1)**
```bash
./rustfs /data
curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
```
3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console.
2. **Docker Quick Start (Option 2)**
```bash
podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
```
3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console; the default username and password are both `rustfsadmin`.
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.


@@ -1,14 +1,12 @@
[![RustFS](https://github.com/user-attachments/assets/547d72f7-d1f4-4763-b9a8-6040bad9251a)](https://rustfs.com)
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
<p align="center">RustFS is a high-performance distributed object storage software built with Rust</p>
<p align="center">
<!-- ALL-CONTRIBUTORS-BADGE:START - Do not remove or modify this section -->
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<img alt="Github Contributors" src="https://img.shields.io/github/contributors/rustfs/rustfs"/>
<img alt="GitHub closed issues" src="https://img.shields.io/github/issues-closed/rustfs/rustfs"/>
<img alt="Discord" src="https://img.shields.io/discord/1107178041848909847?label=discord"/>
</p>
<p align="center">
@@ -63,14 +61,20 @@ RustFS is built with Rust, one of the world's most popular programming languages
To get started with RustFS, follow these steps:
1. **Install RustFS**: Download the latest release from our [GitHub Releases](https://github.com/rustfs/rustfs/releases).
2. **Run RustFS**: Start the server with the provided binary.
1. **One-click install script (Option 1)**
```bash
./rustfs /data
curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
```
3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console.
2. **Docker quick start (Option 2)**
```bash
podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
```
3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console; the default username and password are both `rustfsadmin`.
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: Upload files directly through the console, or use S3-compatible APIs to interact with your RustFS instance.

SECURITY.md (new file)

@@ -0,0 +1,18 @@
# Security Policy
## Supported Versions
Use this section to tell people about which versions of your project are
currently being supported with security updates.
| Version | Supported |
| ------- | ------------------ |
| 1.x.x | :white_check_mark: |
## Reporting a Vulnerability
Use this section to tell people how to report a vulnerability.
Tell them where to go, how often they can expect to get an update on a
reported vulnerability, what to expect if the vulnerability is accepted or
declined, etc.

TODO.md (deleted)

@@ -1,68 +0,0 @@
# TODO LIST
## Core Storage
- [x] EC read/write quorum checks (Read/WriteQuorum)
- [ ] Optimize background concurrent execution: interruptible, pass references?
- [x] Store small files in the metafile (inline data)
- [x] Flesh out bucketmeta
- [x] Object lock
- [x] Hash while reading/writing, implemented with nested readers
- [x] Remote RPC
- [x] Error-type detection: identify error types in the program; unify error handling
- [x] Optimize xlmeta with a custom msg data structure
- [ ] Optimize io.reader (see GetObjectNInfo) to simplify io copy; if writes are async, rebalance
- [ ] Code cleanup: use generics?
- [ ] Abstract out metafile storage
## Core Features
- [ ] Bucket operations
  - [x] CreateBucket
  - [x] ListBuckets
  - [ ] ListObjects (list objects in a bucket)
    - [x] Basic implementation
    - [ ] Optimize concurrent reads
  - [ ] Delete
  - [x] HeadBucket
- [ ] Object operations
  - [x] PutObject
    - [x] Large-file upload
  - [x] CreateMultipartUpload
  - [x] PubObjectPart
  - [x] CompleteMultipartUpload
  - [x] AbortMultipartUpload
  - [x] GetObject
  - [x] DeleteObjects
  - [ ] Versioning
  - [ ] Object lock
  - [ ] CopyObject
  - [ ] HeadObject
  - [ ] Presigned requests (get, put, head, post)
## Extended Features
- [ ] User management
- [ ] Policy management
- [ ] AK/SK issuance and management
- [ ] Data scanner: statistics and object repair
- [ ] Bucket quotas
- [ ] Read-only buckets
- [ ] Bucket replication
- [ ] Bucket event notifications
- [ ] Public and private buckets
- [ ] Object lifecycle management
- [ ] Prometheus integration
- [ ] Log collection and log export
- [ ] Object compression
- [ ] STS
- [ ] Tiering (Alibaba Cloud, Tencent Cloud, S3 remote targets)
## Performance
- [ ] bitrot impl AsyncRead/AsyncWrite
- [ ] Concurrent erasure reads/writes
- [x] Improve delete logic: handle concurrently, move to trash first
- [ ] Empty the trash when space runs low
- [ ] Stream list_object results via a reader

Binary image files added (previews not shown; sizes range from 498 B to 47 KiB), including `cli/rustfs-gui/icon.png`.


@@ -0,0 +1,15 @@
<svg width="1558" height="260" viewBox="0 0 1558 260" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_0_3)">
<path d="M1288.5 112.905H1159.75V58.4404H1262L1270 0L1074 0V260H1159.75V162.997H1296.95L1288.5 112.905Z" fill="#0196D0"/>
<path d="M1058.62 58.4404V0H789V58.4404H881.133V260H966.885V58.4404H1058.62Z" fill="#0196D0"/>
<path d="M521 179.102V0L454.973 15V161C454.973 181.124 452.084 193.146 443.5 202C434.916 211.257 419.318 214.5 400.5 214.5C381.022 214.5 366.744 210.854 357.5 202C348.916 193.548 346.357 175.721 346.357 156V0L280 15V175.48C280 208.08 290.234 229.412 309.712 241.486C329.19 253.56 358.903 260 400.5 260C440.447 260 470.159 253.56 490.297 241.486C510.766 229.412 521 208.483 521 179.102Z" fill="#0196D0"/>
<path d="M172.84 84.2813C172.84 97.7982 168.249 107.737 158.41 113.303C149.883 118.471 137.092 121.254 120.693 122.049V162.997C129.876 163.792 138.076 166.177 144.307 176.514L184.647 260H265L225.316 180.489C213.181 155.046 201.374 149.48 178.744 143.517C212.197 138.349 241.386 118.471 241.386 73.1499C241.386 53.2722 233.843 30.2141 218.756 17.8899C203.998 5.56575 183.991 0 159.394 0H120.693V48.5015H127.58C142.23 48.5015 153.6 51.4169 161.689 57.2477C169.233 62.8135 172.84 71.5596 172.84 84.2813ZM120.693 122.049C119.163 122.049 117.741 122.049 116.43 122.049H68.5457V48.5015H120.693V0H0V260H70.5137V162.997H110.526C113.806 162.997 117.741 162.997 120.693 162.997V122.049Z" fill="#0196D0"/>
<path d="M774 179.297C774 160.829 766.671 144.669 752.013 131.972C738.127 119.66 712.025 110.169 673.708 103.5C662.136 101.191 651.722 99.6523 643.235 97.3437C586.532 84.6467 594.632 52.7118 650.564 52.7118C680.651 52.7118 709.582 61.946 738.127 66.9478C742.37 67.7174 743.913 68.1021 744.298 68.1021L750.47 12.697C720.383 3.46282 684.895 0 654.036 0C616.619 0 587.689 6.54088 567.245 19.2379C546.801 31.9349 536 57.7137 536 82.3382C536 103.5 543.715 119.66 559.916 131.972C575.731 143.515 604.276 152.749 645.55 160.059C658.279 162.368 668.694 163.907 676.794 166.215C685.023 168.524 691.066 170.704 694.924 172.756C702.253 176.604 706.11 182.375 706.11 188.531C706.11 196.611 701.481 202.767 692.224 207C664.836 220.081 587.689 212.001 556.83 198.15L543.715 247.784C547.186 248.169 552.972 249.323 559.916 250.477C616.619 259.327 690.681 270.869 741.212 238.935C762.814 225.468 774 206.23 774 179.297Z" fill="#0196D0"/>
<path d="M1558 179.568C1558 160.383 1550.42 144.268 1535.67 131.99C1521.32 119.968 1494.34 110.631 1454.74 103.981C1442.38 101.679 1432.01 99.3764 1422.84 97.8416C1422.44 97.8416 1422.04 97.8416 1422.04 97.4579V112.422L1361.04 75.2038L1422.04 38.3692V52.9496C1424.7 52.9496 1427.49 52.9496 1430.41 52.9496C1461.51 52.9496 1491.42 62.5419 1521.32 67.5299C1525.31 67.9136 1526.9 67.9136 1527.3 67.9136L1533.68 12.6619C1502.98 3.83692 1465.9 0 1434 0C1395.33 0 1365.43 6.52277 1345.09 19.5683C1323.16 32.6139 1312 57.9376 1312 82.8776C1312 103.981 1320.37 120.096 1336.72 131.607C1353.46 143.885 1382.97 153.093 1425.23 160.383C1434 161.535 1441.18 162.686 1447.56 164.22L1448.36 150.791L1507.36 190.312L1445.57 224.844L1445.96 212.949C1409.68 215.635 1357.45 209.112 1333.53 197.985L1320.37 247.482C1323.56 248.249 1329.54 248.633 1336.72 250.551C1395.33 259.376 1471.88 270.887 1524.11 238.657C1546.84 225.611 1558 205.659 1558 179.568Z" fill="#0196D0"/>
</g>
<defs>
<clipPath id="clip0_0_3">
<rect width="1558" height="260" fill="white"/>
</clipPath>
</defs>
</svg>



@@ -19,6 +19,10 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Application authentication and authorization for RustFS, providing secure access control and user management."
keywords = ["authentication", "authorization", "security", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "authentication"]
[dependencies]
base64-simd = { workspace = true }

crates/appauth/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS AppAuth - Application Authentication
<p align="center">
<strong>Application-level authentication and authorization module for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS AppAuth** provides application-level authentication and authorization capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- JWT-based authentication with secure token management
- RBAC (Role-Based Access Control) for fine-grained permissions
- Multi-tenant application isolation and management
- OAuth 2.0 and OpenID Connect integration
- API key management and rotation
- Session management with configurable expiration
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -23,14 +23,14 @@ use std::io::{Error, Result};
#[derive(Serialize, Deserialize, Debug, Default, Clone)]
pub struct Token {
pub name: String, // 应用 ID
pub expired: u64, // 到期时间 (UNIX 时间戳)
pub name: String, // Application ID
pub expired: u64, // Expiry time (UNIX timestamp)
}
// 公钥生成 Token
// [token] Token 对象
// [key] 公钥字符串
// 返回 base64 处理的加密字符串
/// Generates an encrypted token string using the public key.
/// [token] Token object
/// [key] Public key string
/// Returns the base64-encoded encrypted string
pub fn gencode(token: &Token, key: &str) -> Result<String> {
let data = serde_json::to_vec(token)?;
let public_key = RsaPublicKey::from_public_key_pem(key).map_err(Error::other)?;
@@ -38,10 +38,10 @@ pub fn gencode(token: &Token, key: &str) -> Result<String> {
Ok(base64_simd::URL_SAFE_NO_PAD.encode_to_string(&encrypted_data))
}
// 私钥解析 Token
// [token] base64 处理的加密字符串
// [key] 私钥字符串
// 返回 Token 对象
/// Parses a token string using the private key.
/// [token] base64-encoded encrypted string
/// [key] Private key string
/// Returns the Token object
pub fn parse(token: &str, key: &str) -> Result<Token> {
let encrypted_data = base64_simd::URL_SAFE_NO_PAD
.decode_to_vec(token.as_bytes())

View File

@@ -19,6 +19,10 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Common utilities and data structures for RustFS, providing shared functionality across the project."
keywords = ["common", "utilities", "data-structures", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "data-structures"]
[lints]
workspace = true

crates/common/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Common - Shared Components
<p align="center">
<strong>Shared components and common utilities module for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Common** provides shared components and common utilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Shared data structures and type definitions
- Common error handling and result types
- Utility functions used across modules
- Configuration structures and validation
- Logging and tracing infrastructure
- Cross-platform compatibility helpers
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -19,6 +19,10 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Configuration management for RustFS, providing a centralized way to manage application settings and features."
keywords = ["configuration", "settings", "management", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "config"]
[dependencies]
const-str = { workspace = true, optional = true }

crates/config/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Config - Configuration Management
<p align="center">
<strong>Configuration management and validation module for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Config** provides configuration management and validation capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Multi-format configuration support (TOML, YAML, JSON, ENV)
- Environment variable integration and override
- Configuration validation and type safety
- Hot-reload capabilities for dynamic updates
- Default value management and fallbacks
- Secure credential handling and encryption
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Cryptography and security features for RustFS, providing encryption, hashing, and secure authentication mechanisms."
keywords = ["cryptography", "encryption", "hashing", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "cryptography"]
documentation = "https://docs.rs/rustfs-crypto/latest/rustfs_crypto/"
[lints]
workspace = true

crates/crypto/README.md Normal file
View File

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Crypto - Cryptographic Operations
<p align="center">
<strong>High-performance cryptographic operations module for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Crypto** provides high-performance cryptographic operations for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- AES-GCM encryption with hardware acceleration
- RSA and ECDSA digital signature support
- Secure hash functions (SHA-256, BLAKE3)
- Key derivation and management utilities
- Stream ciphers for large data encryption
- Hardware security module integration
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -19,6 +19,12 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Erasure coding storage backend for RustFS, providing efficient data storage and retrieval with redundancy."
keywords = ["erasure-coding", "storage", "rustfs", "Minio", "solomon"]
categories = ["web-programming", "development-tools", "filesystem"]
documentation = "https://docs.rs/rustfs-ecstore/latest/rustfs_ecstore/"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lints]

crates/ecstore/README.md Normal file
View File

@@ -0,0 +1,64 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS ECStore - Erasure Coding Storage
<p align="center">
<strong>High-performance erasure coding storage engine for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS ECStore** provides erasure coding storage capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Reed-Solomon erasure coding implementation
- Configurable redundancy levels (N+K schemes)
- Automatic data healing and reconstruction
- Multi-drive support with intelligent placement
- Parallel encoding/decoding for performance
- Efficient disk space utilization
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
```
Copyright 2024 RustFS Team
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
---
<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>
<p align="center">
Made with ❤️ by the RustFS Storage Team
</p>

View File

@@ -1,103 +1,19 @@
# ECStore - Erasure Coding Storage
ECStore provides erasure coding functionality for the RustFS project, using high-performance Reed-Solomon SIMD
implementation for optimal performance.
ECStore provides erasure coding functionality for the RustFS project, using a high-performance Reed-Solomon SIMD implementation for optimal performance.
## Reed-Solomon Implementation
## Features
### SIMD Backend (Only)
- **Reed-Solomon Implementation**: High-performance SIMD-optimized erasure coding
- **Cross-Platform Compatibility**: Support for x86_64, aarch64, and other architectures
- **Performance Optimized**: SIMD instructions for maximum throughput
- **Thread Safety**: Safe concurrent access with caching optimizations
- **Scalable**: Excellent performance for high-throughput scenarios
- **Performance**: Uses SIMD optimization for high-performance encoding/decoding
- **Compatibility**: Works with any shard size through SIMD implementation
- **Reliability**: High-performance SIMD implementation for large data processing
- **Use case**: Optimized for maximum performance in large data processing scenarios
## Documentation
### Usage Example
For complete documentation, examples, and usage information, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
```rust
use rustfs_ecstore::erasure_coding::Erasure;
## License
// Create erasure coding instance
// 4 data shards, 2 parity shards, 1KB block size
let erasure = Erasure::new(4, 2, 1024);
// Encode data
let data = b"hello world from rustfs erasure coding";
let shards = erasure.encode_data(data)?;
// Simulate loss of one shard
let mut shards_opt: Vec<Option<Vec<u8>>> = shards
    .iter()
    .map(|b| Some(b.to_vec()))
    .collect();
shards_opt[2] = None; // Lose shard 2
// Reconstruct missing data
erasure.decode_data(&mut shards_opt)?;
// Recover original data
let mut recovered = Vec::new();
for shard in shards_opt.iter().take(4) { // Only data shards
recovered.extend_from_slice(shard.as_ref().unwrap());
}
recovered.truncate(data.len());
assert_eq!(&recovered, data);
```
## Performance Considerations
### SIMD Implementation Benefits
- **High Throughput**: Optimized for large block sizes (>= 1KB recommended)
- **CPU Optimization**: Leverages modern CPU SIMD instructions
- **Scalability**: Excellent performance for high-throughput scenarios
### Implementation Details
#### `reed-solomon-simd`
- **Instance Caching**: Encoder/decoder instances are cached and reused for optimal performance
- **Thread Safety**: Thread-safe with RwLock-based caching
- **SIMD Optimization**: Leverages CPU SIMD instructions for maximum performance
- **Reset Capability**: Cached instances are reset for different parameters, avoiding unnecessary allocations
### Performance Tips
1. **Batch Operations**: When possible, batch multiple small operations into larger blocks
2. **Block Size Optimization**: Use block sizes that are multiples of 64 bytes for optimal SIMD performance
3. **Memory Allocation**: Pre-allocate buffers when processing multiple blocks
4. **Cache Warming**: Initial operations may be slower due to cache setup, subsequent operations benefit from caching
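Tip 2 above (using block sizes that are multiples of 64 bytes) can be sketched with a small helper. This is an illustrative function under the stated assumption only, not part of the rustfs_ecstore API:

```rust
/// Round a block length up to the next multiple of 64 bytes so SIMD
/// kernels can operate on whole vector-width chunks.
/// Illustrative helper only; not part of the rustfs_ecstore API.
fn align_block(len: usize) -> usize {
    len.div_ceil(64) * 64
}

fn main() {
    assert_eq!(align_block(1), 64);
    assert_eq!(align_block(64), 64);
    assert_eq!(align_block(1000), 1024);
    println!("1000 bytes pads to {} bytes", align_block(1000));
}
```

Pre-allocating buffers at the aligned size up front also avoids reallocation when processing multiple blocks (tip 3).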
## Cross-Platform Compatibility
The SIMD implementation supports:
- x86_64 with advanced SIMD instructions (AVX2, SSE)
- aarch64 (ARM64) with NEON SIMD optimizations
- Other architectures with fallback implementations
The implementation automatically selects the best available SIMD instructions for the target platform, providing optimal
performance across different architectures.
## Testing and Benchmarking
Run performance benchmarks:
```bash
# Run erasure coding benchmarks
cargo bench --bench erasure_benchmark
# Run comparison benchmarks
cargo bench --bench comparison_benchmark
# Generate benchmark reports
./run_benchmarks.sh
```
## Error Handling
All operations return `Result` types with comprehensive error information:
- Encoding errors: Invalid parameters, insufficient memory
- Decoding errors: Too many missing shards, corrupted data
- Configuration errors: Invalid shard counts, unsupported parameters
This project is licensed under the Apache License, Version 2.0.

View File

@@ -1,4 +1,3 @@
#![allow(unused_imports)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,6 +11,7 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
@@ -41,7 +41,7 @@ const ERR_LIFECYCLE_DUPLICATE_ID: &str = "Rule ID must be unique. Found same ID
const _ERR_XML_NOT_WELL_FORMED: &str =
"The XML you provided was not well-formed or did not validate against our published schema";
const ERR_LIFECYCLE_BUCKET_LOCKED: &str =
"ExpiredObjectAllVersions element and DelMarkerExpiration action cannot be used on an object locked bucket";
"ExpiredObjectAllVersions element and DelMarkerExpiration action cannot be used on a retention bucket";
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum IlmAction {
@@ -102,30 +102,30 @@ impl RuleValidate for LifecycleRule {
}
fn validate_status(&self) -> Result<()> {
if self.Status.len() == 0 {
return errEmptyRuleStatus;
if self.status.len() == 0 {
return ErrEmptyRuleStatus;
}
if self.Status != Enabled && self.Status != Disabled {
return errInvalidRuleStatus;
if self.status != Enabled && self.status != Disabled {
return ErrInvalidRuleStatus;
}
Ok(())
}
fn validate_expiration(&self) -> Result<()> {
self.Expiration.Validate();
self.expiration.validate()
}
fn validate_noncurrent_expiration(&self) -> Result<()> {
self.NoncurrentVersionExpiration.Validate()
self.noncurrent_version_expiration.validate()
}
fn validate_prefix_and_filter(&self) -> Result<()> {
if !self.Prefix.set && self.Filter.IsEmpty() || self.Prefix.set && !self.Filter.IsEmpty() {
return errXMLNotWellFormed;
if !self.prefix.set && self.filter.isempty() || self.prefix.set && !self.filter.isempty() {
return ErrXMLNotWellFormed;
}
if !self.Prefix.set {
return self.Filter.Validate();
if !self.prefix.set {
return self.filter.validate();
}
Ok(())
}
@@ -267,7 +267,7 @@ impl Lifecycle for BucketLifecycleConfiguration {
r.validate()?;
if let Some(expiration) = r.expiration.as_ref() {
if let Some(expired_object_delete_marker) = expiration.expired_object_delete_marker {
if lr_retention && (!expired_object_delete_marker) {
if lr_retention && (expired_object_delete_marker) {
return Err(std::io::Error::other(ERR_LIFECYCLE_BUCKET_LOCKED));
}
}

View File

@@ -20,12 +20,12 @@
#![allow(clippy::all)]
use lazy_static::lazy_static;
use rustfs_utils::HashAlgorithm;
use std::collections::HashMap;
use crate::client::{api_put_object::PutObjectOptions, api_s3_datatypes::ObjectPart};
use crate::{disk::DiskAPI, store_api::GetObjectReader};
use rustfs_utils::crypto::{base64_decode, base64_encode};
use rustfs_utils::hasher::{Hasher, Sha256};
use s3s::header::{
X_AMZ_CHECKSUM_ALGORITHM, X_AMZ_CHECKSUM_CRC32, X_AMZ_CHECKSUM_CRC32C, X_AMZ_CHECKSUM_SHA1, X_AMZ_CHECKSUM_SHA256,
};
@@ -133,7 +133,7 @@ impl ChecksumMode {
}
}
pub fn hasher(&self) -> Result<Box<dyn Hasher>, std::io::Error> {
pub fn hasher(&self) -> Result<HashAlgorithm, std::io::Error> {
match /*C_ChecksumMask & **/self {
/*ChecksumMode::ChecksumCRC32 => {
return Ok(Box::new(crc32fast::Hasher::new()));
@@ -145,7 +145,7 @@ impl ChecksumMode {
return Ok(Box::new(sha1::new()));
}*/
ChecksumMode::ChecksumSHA256 => {
return Ok(Box::new(Sha256::new()));
return Ok(HashAlgorithm::SHA256);
}
/*ChecksumMode::ChecksumCRC64NVME => {
return Ok(Box::new(crc64nvme.New());
@@ -170,8 +170,8 @@ impl ChecksumMode {
return Ok("".to_string());
}
let mut h = self.hasher()?;
h.write(b);
Ok(base64_encode(h.sum().as_bytes()))
let hash = h.hash_encode(b);
Ok(base64_encode(hash.as_ref()))
}
pub fn to_string(&self) -> String {
@@ -201,15 +201,15 @@ impl ChecksumMode {
}
}
pub fn check_sum_reader(&self, r: GetObjectReader) -> Result<Checksum, std::io::Error> {
let mut h = self.hasher()?;
Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
}
// pub fn check_sum_reader(&self, r: GetObjectReader) -> Result<Checksum, std::io::Error> {
// let mut h = self.hasher()?;
// Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
// }
pub fn check_sum_bytes(&self, b: &[u8]) -> Result<Checksum, std::io::Error> {
let mut h = self.hasher()?;
Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
}
// pub fn check_sum_bytes(&self, b: &[u8]) -> Result<Checksum, std::io::Error> {
// let mut h = self.hasher()?;
// Ok(Checksum::new(self.clone(), h.sum().as_bytes()))
// }
pub fn composite_checksum(&self, p: &mut [ObjectPart]) -> Result<Checksum, std::io::Error> {
if !self.can_composite() {
@@ -227,10 +227,10 @@ impl ChecksumMode {
let c = self.base();
let crc_bytes = Vec::<u8>::with_capacity(p.len() * self.raw_byte_len() as usize);
let mut h = self.hasher()?;
h.write(&crc_bytes);
let hash = h.hash_encode(crc_bytes.as_ref());
Ok(Checksum {
checksum_type: self.clone(),
r: h.sum().as_bytes().to_vec(),
r: hash.as_ref().to_vec(),
computed: false,
})
}

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -0,0 +1,184 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use s3s::dto::Owner;
use std::collections::HashMap;
use std::io::Cursor;
use tokio::io::BufReader;
use crate::client::{
api_error_response::{err_invalid_argument, http_resp_to_error_response},
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
#[derive(Clone, Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Grantee {
pub id: String,
pub display_name: String,
pub uri: String,
}
#[derive(Clone, Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Grant {
pub grantee: Grantee,
pub permission: String,
}
#[derive(Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct AccessControlList {
pub grant: Vec<Grant>,
pub permission: String,
}
#[derive(Debug, Default, serde::Deserialize)]
pub struct AccessControlPolicy {
#[serde(skip)]
owner: Owner,
pub access_control_list: AccessControlList,
}
impl TransitionClient {
pub async fn get_object_acl(&self, bucket_name: &str, object_name: &str) -> Result<ObjectInfo, std::io::Error> {
let mut url_values = HashMap::new();
url_values.insert("acl".to_string(), "".to_string());
let mut resp = self
.execute_method(
http::Method::GET,
&mut RequestMetadata {
bucket_name: bucket_name.to_string(),
object_name: object_name.to_string(),
query_values: url_values,
custom_header: HeaderMap::new(),
content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
content_body: ReaderImpl::Body(Bytes::new()),
content_length: 0,
content_md5_base64: "".to_string(),
stream_sha256: false,
trailer: HeaderMap::new(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),
bucket_location: Default::default(),
expires: Default::default(),
},
)
.await?;
if resp.status() != http::StatusCode::OK {
let b = resp.body().bytes().expect("err").to_vec();
return Err(std::io::Error::other(http_resp_to_error_response(resp, b, bucket_name, object_name)));
}
let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
let mut res = match serde_xml_rs::from_str::<AccessControlPolicy>(&String::from_utf8(b).unwrap()) {
Ok(result) => result,
Err(err) => {
return Err(std::io::Error::other(err.to_string()));
}
};
let mut obj_info = self
.stat_object(bucket_name, object_name, &GetObjectOptions::default())
.await?;
obj_info.owner.display_name = res.owner.display_name.clone();
obj_info.owner.id = res.owner.id.clone();
//obj_info.grant.extend(res.access_control_list.grant);
let canned_acl = get_canned_acl(&res);
if canned_acl != "" {
obj_info
.metadata
.insert("X-Amz-Acl", HeaderValue::from_str(&canned_acl).unwrap());
return Ok(obj_info);
}
let grant_acl = get_amz_grant_acl(&res);
/*for (k, v) in grant_acl {
obj_info.metadata.insert(HeaderName::from_bytes(k.as_bytes()).unwrap(), HeaderValue::from_str(&v.to_string()).unwrap());
}*/
Ok(obj_info)
}
}
fn get_canned_acl(ac_policy: &AccessControlPolicy) -> String {
let grants = ac_policy.access_control_list.grant.clone();
if grants.len() == 1 {
if grants[0].grantee.uri == "" && grants[0].permission == "FULL_CONTROL" {
return "private".to_string();
}
} else if grants.len() == 2 {
for g in grants {
if g.grantee.uri == "http://acs.amazonaws.com/groups/global/AuthenticatedUsers" && &g.permission == "READ" {
return "authenticated-read".to_string();
}
if g.grantee.uri == "http://acs.amazonaws.com/groups/global/AllUsers" && &g.permission == "READ" {
return "public-read".to_string();
}
if g.permission == "READ" && g.grantee.id == ac_policy.owner.id.clone().unwrap() {
return "bucket-owner-read".to_string();
}
}
} else if grants.len() == 3 {
for g in grants {
if g.grantee.uri == "http://acs.amazonaws.com/groups/global/AllUsers" && g.permission == "WRITE" {
return "public-read-write".to_string();
}
}
}
"".to_string()
}
pub fn get_amz_grant_acl(ac_policy: &AccessControlPolicy) -> HashMap<String, Vec<String>> {
let grants = ac_policy.access_control_list.grant.clone();
let mut res = HashMap::<String, Vec<String>>::new();
for g in grants {
let mut id = "id=".to_string();
id.push_str(&g.grantee.id);
let permission: &str = &g.permission;
match permission {
"READ" => {
res.entry("X-Amz-Grant-Read".to_string()).or_insert(vec![]).push(id);
}
"WRITE" => {
res.entry("X-Amz-Grant-Write".to_string()).or_insert(vec![]).push(id);
}
"READ_ACP" => {
res.entry("X-Amz-Grant-Read-Acp".to_string()).or_insert(vec![]).push(id);
}
"WRITE_ACP" => {
res.entry("X-Amz-Grant-Write-Acp".to_string()).or_insert(vec![]).push(id);
}
"FULL_CONTROL" => {
res.entry("X-Amz-Grant-Full-Control".to_string()).or_insert(vec![]).push(id);
}
_ => (),
}
}
res
}
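The grant-to-canned-ACL mapping in `get_canned_acl` above can be exercised in isolation with a simplified, self-contained sketch. The stand-in `Grant` type and `canned_acl` helper are hypothetical simplifications for illustration; the real code walks an `AccessControlPolicy` and additionally compares grantee IDs against the bucket owner for `bucket-owner-read`:

```rust
// Simplified stand-in for the grantee entries in AccessControlPolicy.
#[derive(Clone)]
struct Grant {
    uri: String,
    permission: String,
}

// Maps a grant list to a canned ACL, mirroring the length-based cases
// above (owner-ID checks omitted for brevity).
fn canned_acl(grants: &[Grant]) -> &'static str {
    const ALL_USERS: &str = "http://acs.amazonaws.com/groups/global/AllUsers";
    const AUTH_USERS: &str = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers";
    match grants.len() {
        1 if grants[0].uri.is_empty() && grants[0].permission == "FULL_CONTROL" => "private",
        2 => {
            for g in grants {
                if g.uri == AUTH_USERS && g.permission == "READ" {
                    return "authenticated-read";
                }
                if g.uri == ALL_USERS && g.permission == "READ" {
                    return "public-read";
                }
            }
            ""
        }
        3 => {
            for g in grants {
                if g.uri == ALL_USERS && g.permission == "WRITE" {
                    return "public-read-write";
                }
            }
            ""
        }
        _ => "",
    }
}

fn main() {
    let owner = Grant { uri: String::new(), permission: "FULL_CONTROL".into() };
    assert_eq!(canned_acl(&[owner.clone()]), "private");
    let public = Grant {
        uri: "http://acs.amazonaws.com/groups/global/AllUsers".into(),
        permission: "READ".into(),
    };
    assert_eq!(canned_acl(&[owner, public]), "public-read");
    println!("ok");
}
```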

View File

@@ -0,0 +1,244 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use std::collections::HashMap;
use std::io::Cursor;
use time::OffsetDateTime;
use tokio::io::BufReader;
use crate::client::constants::{GET_OBJECT_ATTRIBUTES_MAX_PARTS, GET_OBJECT_ATTRIBUTES_TAGS, ISO8601_DATEFORMAT};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
use s3s::header::{
X_AMZ_DELETE_MARKER, X_AMZ_MAX_PARTS, X_AMZ_METADATA_DIRECTIVE, X_AMZ_OBJECT_ATTRIBUTES, X_AMZ_PART_NUMBER_MARKER,
X_AMZ_REQUEST_CHARGED, X_AMZ_RESTORE, X_AMZ_VERSION_ID,
};
use s3s::{Body, dto::Owner};
use crate::client::{
api_error_response::err_invalid_argument,
api_get_object_acl::AccessControlPolicy,
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
pub struct ObjectAttributesOptions {
pub max_parts: i64,
pub version_id: String,
pub part_number_marker: i64,
//server_side_encryption: encrypt::ServerSide,
}
pub struct ObjectAttributes {
pub version_id: String,
pub last_modified: OffsetDateTime,
pub object_attributes_response: ObjectAttributesResponse,
}
impl ObjectAttributes {
fn new() -> Self {
Self {
version_id: "".to_string(),
last_modified: OffsetDateTime::now_utc(),
object_attributes_response: ObjectAttributesResponse::new(),
}
}
}
#[derive(Debug, Default, serde::Deserialize)]
pub struct Checksum {
checksum_crc32: String,
checksum_crc32c: String,
checksum_sha1: String,
checksum_sha256: String,
}
impl Checksum {
fn new() -> Self {
Self {
checksum_crc32: "".to_string(),
checksum_crc32c: "".to_string(),
checksum_sha1: "".to_string(),
checksum_sha256: "".to_string(),
}
}
}
#[derive(Debug, Default, serde::Deserialize)]
pub struct ObjectParts {
pub parts_count: i64,
pub part_number_marker: i64,
pub next_part_number_marker: i64,
pub max_parts: i64,
is_truncated: bool,
parts: Vec<ObjectAttributePart>,
}
impl ObjectParts {
fn new() -> Self {
Self {
parts_count: 0,
part_number_marker: 0,
next_part_number_marker: 0,
max_parts: 0,
is_truncated: false,
parts: Vec::new(),
}
}
}
#[derive(Debug, Default, serde::Deserialize)]
pub struct ObjectAttributesResponse {
pub etag: String,
pub storage_class: String,
pub object_size: i64,
pub checksum: Checksum,
pub object_parts: ObjectParts,
}
impl ObjectAttributesResponse {
fn new() -> Self {
Self {
etag: "".to_string(),
storage_class: "".to_string(),
object_size: 0,
checksum: Checksum::new(),
object_parts: ObjectParts::new(),
}
}
}
#[derive(Debug, Default, serde::Deserialize)]
struct ObjectAttributePart {
checksum_crc32: String,
checksum_crc32c: String,
checksum_sha1: String,
checksum_sha256: String,
part_number: i64,
size: i64,
}
impl ObjectAttributes {
pub async fn parse_response(&mut self, resp: &mut http::Response<Body>) -> Result<(), std::io::Error> {
let h = resp.headers();
let mod_time = OffsetDateTime::parse(h.get("Last-Modified").unwrap().to_str().unwrap(), ISO8601_DATEFORMAT).unwrap(); //RFC7231Time
self.last_modified = mod_time;
self.version_id = h.get(X_AMZ_VERSION_ID).unwrap().to_str().unwrap().to_string();
let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
let mut response = match serde_xml_rs::from_str::<ObjectAttributesResponse>(&String::from_utf8(b).unwrap()) {
Ok(result) => result,
Err(err) => {
return Err(std::io::Error::other(err.to_string()));
}
};
self.object_attributes_response = response;
Ok(())
}
}
impl TransitionClient {
pub async fn get_object_attributes(
&self,
bucket_name: &str,
object_name: &str,
opts: ObjectAttributesOptions,
) -> Result<ObjectAttributes, std::io::Error> {
let mut url_values = HashMap::new();
url_values.insert("attributes".to_string(), "".to_string());
if opts.version_id != "" {
url_values.insert("versionId".to_string(), opts.version_id);
}
let mut headers = HeaderMap::new();
headers.insert(X_AMZ_OBJECT_ATTRIBUTES, HeaderValue::from_str(GET_OBJECT_ATTRIBUTES_TAGS).unwrap());
if opts.part_number_marker > 0 {
headers.insert(
X_AMZ_PART_NUMBER_MARKER,
HeaderValue::from_str(&opts.part_number_marker.to_string()).unwrap(),
);
}
if opts.max_parts > 0 {
headers.insert(X_AMZ_MAX_PARTS, HeaderValue::from_str(&opts.max_parts.to_string()).unwrap());
} else {
headers.insert(
X_AMZ_MAX_PARTS,
HeaderValue::from_str(&GET_OBJECT_ATTRIBUTES_MAX_PARTS.to_string()).unwrap(),
);
}
/*if opts.server_side_encryption.is_some() {
opts.server_side_encryption.Marshal(headers);
}*/
let mut resp = self
.execute_method(
// GetObjectAttributes is a GET request; the XML body parsed below would be empty on a HEAD.
http::Method::GET,
&mut RequestMetadata {
bucket_name: bucket_name.to_string(),
object_name: object_name.to_string(),
query_values: url_values,
custom_header: headers,
content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
content_md5_base64: "".to_string(),
content_body: ReaderImpl::Body(Bytes::new()),
content_length: 0,
stream_sha256: false,
trailer: HeaderMap::new(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),
bucket_location: Default::default(),
expires: Default::default(),
},
)
.await?;
let h = resp.headers();
// An absent ETag header is the expected case; do not panic on it.
let has_etag = h.get("ETag").and_then(|v| v.to_str().ok()).unwrap_or_default();
if !has_etag.is_empty() {
return Err(std::io::Error::other(
"get_object_attributes is not supported by the current endpoint version",
));
}
if resp.status() != http::StatusCode::OK {
let b = resp.body_mut().store_all_unlimited().await.unwrap().to_vec();
let err_body = String::from_utf8(b).unwrap();
let mut er = match serde_xml_rs::from_str::<AccessControlPolicy>(&err_body) {
Ok(result) => result,
Err(err) => {
return Err(std::io::Error::other(err.to_string()));
}
};
return Err(std::io::Error::other(er.access_control_list.permission));
}
let mut oa = ObjectAttributes::new();
oa.parse_response(&mut resp).await?;
Ok(oa)
}
}

View File

@@ -0,0 +1,147 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::HeaderMap;
use std::io::Cursor;
#[cfg(not(windows))]
use std::os::unix::fs::MetadataExt;
#[cfg(not(windows))]
use std::os::unix::fs::OpenOptionsExt;
#[cfg(not(windows))]
use std::os::unix::fs::PermissionsExt;
#[cfg(windows)]
use std::os::windows::fs::MetadataExt;
use tokio::io::BufReader;
use crate::client::{
api_error_response::err_invalid_argument,
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
impl TransitionClient {
pub async fn fget_object(
&self,
bucket_name: &str,
object_name: &str,
file_path: &str,
opts: GetObjectOptions,
) -> Result<(), std::io::Error> {
// Stat-ing the destination may fail if it does not exist yet; that is fine, it will be created.
if let Ok(file_path_stat) = std::fs::metadata(file_path) {
let ft = file_path_stat.file_type();
if ft.is_dir() {
return Err(std::io::Error::other(err_invalid_argument("filename is a directory.")));
}
}
// Create the destination's parent directory (not just its last path component) with 0o700 permissions.
let path = std::path::Path::new(file_path);
if let Some(parent) = path.parent() {
if !parent.as_os_str().is_empty() {
if let Err(err) = std::fs::create_dir_all(parent) {
return Err(std::io::Error::other(err));
}
#[cfg(not(windows))]
if let Ok(dir_stat) = parent.metadata() {
// set_mode on a temporary Permissions value has no effect; apply it explicitly.
let mut perms = dir_stat.permissions();
perms.set_mode(0o700);
let _ = std::fs::set_permissions(parent, perms);
}
}
}
let object_stat = match self.stat_object(bucket_name, object_name, &opts).await {
Ok(object_stat) => object_stat,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
let mut file_part_path = file_path.to_string();
file_part_path.push_str("" /*sum_sha256_hex(object_stat.etag.as_bytes())*/);
file_part_path.push_str(".part.rustfs");
// Open the part file for appending, creating it if needed; OpenOptions with no access mode set would fail.
#[cfg(not(windows))]
let file_part = match std::fs::OpenOptions::new()
.create(true)
.append(true)
.mode(0o600)
.open(file_part_path.clone())
{
Ok(file_part) => file_part,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
#[cfg(windows)]
let file_part = match std::fs::OpenOptions::new().create(true).append(true).open(file_part_path.clone()) {
Ok(file_part) => file_part,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
let mut close_and_remove = true;
/*defer(|| {
if close_and_remove {
_ = file_part.close();
let _ = std::fs::remove(file_part_path);
}
});*/
let st = match file_part.metadata() {
Ok(st) => st,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
let mut opts = opts;
// Resume from the current size of the partially downloaded file; Metadata::len is cross-platform.
if st.len() > 0 {
opts.set_range(st.len() as i64, 0);
}
let object_reader = match self.get_object(bucket_name, object_name, &opts).await {
Ok(object_reader) => object_reader,
Err(err) => {
return Err(std::io::Error::other(err));
}
};
/*if let Err(err) = std::fs::copy(file_part, object_reader) {
return Err(std::io::Error::other(err));
}*/
close_and_remove = false;
/*if let Err(err) = file_part.close() {
return Err(std::io::Error::other(err));
}*/
if let Err(err) = std::fs::rename(file_part_path, file_path) {
return Err(std::io::Error::other(err));
}
Ok(())
}
}

View File

@@ -29,9 +29,9 @@ use crate::client::api_error_response::err_invalid_argument;
#[derive(Default)]
#[allow(dead_code)]
pub struct AdvancedGetOptions {
replication_deletemarker: bool,
is_replication_ready_for_deletemarker: bool,
replication_proxy_request: String,
pub replication_delete_marker: bool,
pub is_replication_ready_for_delete_marker: bool,
pub replication_proxy_request: String,
}
pub struct GetObjectOptions {

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -25,7 +24,6 @@ use std::{collections::HashMap, sync::Arc};
use time::{Duration, OffsetDateTime, macros::format_description};
use tracing::{error, info, warn};
use rustfs_utils::hasher::Hasher;
use s3s::dto::{ObjectLockLegalHoldStatus, ObjectLockRetentionMode, ReplicationStatus};
use s3s::header::{
X_AMZ_OBJECT_LOCK_LEGAL_HOLD, X_AMZ_OBJECT_LOCK_MODE, X_AMZ_OBJECT_LOCK_RETAIN_UNTIL_DATE, X_AMZ_REPLICATION_STATUS,
@@ -364,18 +362,14 @@ impl TransitionClient {
if opts.send_content_md5 {
let mut md5_hasher = self.md5_hasher.lock().unwrap();
let hash = md5_hasher.as_mut().expect("err");
hash.write(&buf[..length]);
md5_base64 = base64_encode(hash.sum().as_bytes());
let hash = hash.hash_encode(&buf[..length]);
md5_base64 = base64_encode(hash.as_ref());
} else {
let csum;
{
let mut crc = opts.auto_checksum.hasher()?;
crc.reset();
crc.write(&buf[..length]);
csum = crc.sum();
}
let mut crc = opts.auto_checksum.hasher()?;
let csum = crc.hash_encode(&buf[..length]);
if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
custom_header.insert(header_name, base64_encode(csum.as_bytes()).parse().unwrap());
custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
} else {
warn!("Invalid header name: {}", opts.auto_checksum.key());
}

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -31,7 +30,6 @@ use tracing::{error, info};
use url::form_urlencoded::Serializer;
use uuid::Uuid;
use rustfs_utils::hasher::Hasher;
use s3s::header::{X_AMZ_EXPIRATION, X_AMZ_VERSION_ID};
use s3s::{Body, dto::StreamingBlob};
//use crate::disk::{Reader, BufferReader};
@@ -117,8 +115,8 @@ impl TransitionClient {
let length = buf.len();
for (k, v) in hash_algos.iter_mut() {
v.write(&buf[..length]);
hash_sums.insert(k.to_string(), Vec::try_from(v.sum().as_bytes()).unwrap());
let hash = v.hash_encode(&buf[..length]);
hash_sums.insert(k.to_string(), hash.as_ref().to_vec());
}
//let rd = newHook(bytes.NewReader(buf[..length]), opts.progress);
@@ -134,15 +132,11 @@ impl TransitionClient {
sha256_hex = hex_simd::encode_to_string(hash_sums["sha256"].clone(), hex_simd::AsciiCase::Lower);
//}
if hash_sums.len() == 0 {
let csum;
{
let mut crc = opts.auto_checksum.hasher()?;
crc.reset();
crc.write(&buf[..length]);
csum = crc.sum();
}
let mut crc = opts.auto_checksum.hasher()?;
let csum = crc.hash_encode(&buf[..length]);
if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
custom_header.insert(header_name, base64_encode(csum.as_bytes()).parse().expect("err"));
custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
} else {
warn!("Invalid header name: {}", opts.auto_checksum.key());
}
@@ -297,8 +291,6 @@ impl TransitionClient {
};
let resp = self.execute_method(http::Method::PUT, &mut req_metadata).await?;
//defer closeResponse(resp)
//if resp.is_none() {
if resp.status() != StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(
resp,
@@ -366,13 +358,13 @@ impl TransitionClient {
bucket_name: bucket_name.to_string(),
object_name: object_name.to_string(),
query_values: url_values,
custom_header: headers,
content_body: ReaderImpl::Body(complete_multipart_upload_buffer),
content_length: 100, //complete_multipart_upload_bytes.len(),
content_sha256_hex: "".to_string(), //hex_simd::encode_to_string(complete_multipart_upload_bytes, hex_simd::AsciiCase::Lower),
custom_header: headers,
content_md5_base64: "".to_string(),
stream_sha256: Default::default(),
trailer: Default::default(),
content_md5_base64: "".to_string(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -40,7 +39,7 @@ use crate::client::{
constants::ISO8601_DATEFORMAT,
transition_api::{ReaderImpl, RequestMetadata, TransitionClient, UploadInfo},
};
use rustfs_utils::hasher::Hasher;
use rustfs_utils::{crypto::base64_encode, path::trim_etag};
use s3s::header::{X_AMZ_EXPIRATION, X_AMZ_VERSION_ID};
@@ -153,21 +152,16 @@ impl TransitionClient {
if opts.send_content_md5 {
let mut md5_hasher = self.md5_hasher.lock().unwrap();
let md5_hash = md5_hasher.as_mut().expect("err");
md5_hash.reset();
md5_hash.write(&buf[..length]);
md5_base64 = base64_encode(md5_hash.sum().as_bytes());
let hash = md5_hash.hash_encode(&buf[..length]);
md5_base64 = base64_encode(hash.as_ref());
} else {
let csum;
{
let mut crc = opts.auto_checksum.hasher()?;
crc.reset();
crc.write(&buf[..length]);
csum = crc.sum();
}
if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key_capitalized().as_bytes()) {
custom_header.insert(header_name, HeaderValue::from_str(&base64_encode(csum.as_bytes())).expect("err"));
let mut crc = opts.auto_checksum.hasher()?;
let csum = crc.hash_encode(&buf[..length]);
if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
} else {
warn!("Invalid header name: {}", opts.auto_checksum.key_capitalized());
warn!("Invalid header name: {}", opts.auto_checksum.key());
}
}
@@ -308,17 +302,11 @@ impl TransitionClient {
let mut custom_header = HeaderMap::new();
if !opts.send_content_md5 {
let csum;
{
let mut crc = opts.auto_checksum.hasher()?;
crc.reset();
crc.write(&buf[..length]);
csum = crc.sum();
}
let mut crc = opts.auto_checksum.hasher()?;
let csum = crc.hash_encode(&buf[..length]);
if let Ok(header_name) = HeaderName::from_bytes(opts.auto_checksum.key().as_bytes()) {
if let Ok(header_value) = HeaderValue::from_str(&base64_encode(csum.as_bytes())) {
custom_header.insert(header_name, header_value);
}
custom_header.insert(header_name, base64_encode(csum.as_ref()).parse().expect("err"));
} else {
warn!("Invalid header name: {}", opts.auto_checksum.key());
}
@@ -334,8 +322,8 @@ impl TransitionClient {
if opts.send_content_md5 {
let mut md5_hasher = clone_self.md5_hasher.lock().unwrap();
let md5_hash = md5_hasher.as_mut().expect("err");
md5_hash.write(&buf[..length]);
md5_base64 = base64_encode(md5_hash.sum().as_bytes());
let hash = md5_hash.hash_encode(&buf[..length]);
md5_base64 = base64_encode(hash.as_ref());
}
//defer wg.Done()

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -21,6 +20,7 @@
use bytes::Bytes;
use http::{HeaderMap, HeaderValue, Method, StatusCode};
use rustfs_utils::{HashAlgorithm, crypto::base64_encode};
use s3s::S3ErrorCode;
use s3s::dto::ReplicationStatus;
use s3s::header::X_AMZ_BYPASS_GOVERNANCE_RETENTION;
@@ -38,7 +38,6 @@ use crate::{
store_api::{GetObjectReader, ObjectInfo, StorageAPI},
};
use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
use rustfs_utils::hasher::{sum_md5_base64, sum_sha256_hex};
pub struct RemoveBucketOptions {
_forced_delete: bool,
@@ -330,8 +329,8 @@ impl TransitionClient {
query_values: url_values.clone(),
content_body: ReaderImpl::Body(Bytes::from(remove_bytes.clone())),
content_length: remove_bytes.len() as i64,
content_md5_base64: sum_md5_base64(&remove_bytes),
content_sha256_hex: sum_sha256_hex(&remove_bytes),
content_md5_base64: base64_encode(&HashAlgorithm::Md5.hash_encode(&remove_bytes).as_ref()),
content_sha256_hex: base64_encode(&HashAlgorithm::SHA256.hash_encode(&remove_bytes).as_ref()),
custom_header: headers,
object_name: "".to_string(),
stream_sha256: false,

View File

@@ -0,0 +1,172 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::HeaderMap;
use std::collections::HashMap;
use std::io::Cursor;
use tokio::io::BufReader;
use crate::client::{
api_error_response::{err_invalid_argument, http_resp_to_error_response},
api_get_object_acl::AccessControlList,
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
const TIER_STANDARD: &str = "Standard";
const TIER_BULK: &str = "Bulk";
const TIER_EXPEDITED: &str = "Expedited";
#[derive(Debug, Default, serde::Serialize)]
pub struct GlacierJobParameters {
pub tier: String,
}
#[derive(Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct Encryption {
pub encryption_type: String,
pub kms_context: String,
pub kms_key_id: String,
}
#[derive(Debug, Default, serde::Serialize, serde::Deserialize)]
pub struct MetadataEntry {
pub name: String,
pub value: String,
}
#[derive(Debug, Default, serde::Serialize)]
pub struct S3 {
pub access_control_list: AccessControlList,
pub bucket_name: String,
pub prefix: String,
pub canned_acl: String,
pub encryption: Encryption,
pub storage_class: String,
//tagging: Tags,
pub user_metadata: MetadataEntry,
}
#[derive(Debug, Default, serde::Serialize)]
pub struct SelectParameters {
pub expression_type: String,
pub expression: String,
//input_serialization: SelectObjectInputSerialization,
//output_serialization: SelectObjectOutputSerialization,
}
#[derive(Debug, Default, serde::Serialize)]
pub struct OutputLocation(pub S3);
#[derive(Debug, Default, serde::Serialize)]
pub struct RestoreRequest {
pub restore_type: String,
pub tier: String,
pub days: i64,
pub glacier_job_parameters: GlacierJobParameters,
pub description: String,
pub select_parameters: SelectParameters,
pub output_location: OutputLocation,
}
impl RestoreRequest {
fn set_days(&mut self, v: i64) {
self.days = v;
}
fn set_glacier_job_parameters(&mut self, v: GlacierJobParameters) {
self.glacier_job_parameters = v;
}
fn set_type(&mut self, v: &str) {
self.restore_type = v.to_string();
}
fn set_tier(&mut self, v: &str) {
self.tier = v.to_string();
}
fn set_description(&mut self, v: &str) {
self.description = v.to_string();
}
fn set_select_parameters(&mut self, v: SelectParameters) {
self.select_parameters = v;
}
fn set_output_location(&mut self, v: OutputLocation) {
self.output_location = v;
}
}
impl TransitionClient {
pub async fn restore_object(
&self,
bucket_name: &str,
object_name: &str,
version_id: &str,
restore_req: &RestoreRequest,
) -> Result<(), std::io::Error> {
let restore_request = match serde_xml_rs::to_string(restore_req) {
Ok(buf) => buf,
Err(e) => {
return Err(std::io::Error::other(e));
}
};
let restore_request_bytes = restore_request.as_bytes().to_vec();
let mut url_values = HashMap::new();
url_values.insert("restore".to_string(), "".to_string());
if !version_id.is_empty() {
url_values.insert("versionId".to_string(), version_id.to_string());
}
let restore_request_buffer = Bytes::from(restore_request_bytes.clone());
let resp = self
.execute_method(
// S3 RestoreObject is a POST to ?restore; HEAD would drop the request body.
http::Method::POST,
&mut RequestMetadata {
bucket_name: bucket_name.to_string(),
object_name: object_name.to_string(),
query_values: url_values,
custom_header: HeaderMap::new(),
content_sha256_hex: "".to_string(), //sum_sha256_hex(&restore_request_bytes),
content_md5_base64: "".to_string(), //sum_md5_base64(&restore_request_bytes),
content_body: ReaderImpl::Body(restore_request_buffer),
content_length: restore_request_bytes.len() as i64,
stream_sha256: false,
trailer: HeaderMap::new(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),
bucket_location: Default::default(),
expires: Default::default(),
},
)
.await?;
let b = resp.body().bytes().expect("err").to_vec();
if resp.status() != http::StatusCode::ACCEPTED && resp.status() != http::StatusCode::OK {
return Err(std::io::Error::other(http_resp_to_error_response(resp, b, bucket_name, "")));
}
Ok(())
}
}
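For reference, the request body that `restore_object` serializes from `RestoreRequest` looks roughly like the XML sketched below. This hand-rolled version is illustrative only: the tag names are assumptions based on the struct fields above, and the real code relies on `serde_xml_rs`.

```rust
// Hypothetical minimal serializer for a restore request; tag names are
// illustrative, not guaranteed to match serde_xml_rs output exactly.
fn restore_request_xml(days: i64, tier: &str) -> String {
    format!(
        "<RestoreRequest><Days>{days}</Days>\
         <GlacierJobParameters><Tier>{tier}</Tier></GlacierJobParameters>\
         </RestoreRequest>"
    )
}

fn main() {
    let xml = restore_request_xml(1, "Standard");
    assert!(xml.contains("<Tier>Standard</Tier>"));
    println!("{xml}");
}
```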

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");

View File

@@ -0,0 +1,166 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]
#![allow(unused_must_use)]
#![allow(clippy::all)]
use bytes::Bytes;
use http::{HeaderMap, HeaderValue};
use rustfs_utils::EMPTY_STRING_SHA256_HASH;
use std::{collections::HashMap, str::FromStr};
use tokio::io::BufReader;
use uuid::Uuid;
use crate::client::{
api_error_response::{ErrorResponse, err_invalid_argument, http_resp_to_error_response},
api_get_options::GetObjectOptions,
transition_api::{ObjectInfo, ReadCloser, ReaderImpl, RequestMetadata, TransitionClient, to_object_info},
};
use s3s::header::{X_AMZ_DELETE_MARKER, X_AMZ_VERSION_ID};
impl TransitionClient {
pub async fn bucket_exists(&self, bucket_name: &str) -> Result<bool, std::io::Error> {
let resp = self
.execute_method(
http::Method::HEAD,
&mut RequestMetadata {
bucket_name: bucket_name.to_string(),
object_name: "".to_string(),
query_values: HashMap::new(),
custom_header: HeaderMap::new(),
content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
content_md5_base64: "".to_string(),
content_body: ReaderImpl::Body(Bytes::new()),
content_length: 0,
stream_sha256: false,
trailer: HeaderMap::new(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),
bucket_location: Default::default(),
expires: Default::default(),
},
)
.await;
match resp {
Ok(resp) => {
if resp.status() != http::StatusCode::OK {
return Ok(false);
}
Ok(true)
}
// A failed request (e.g. NoSuchBucket surfaced as an error) means the bucket is not accessible.
Err(_) => Ok(false),
}
}
pub async fn stat_object(
&self,
bucket_name: &str,
object_name: &str,
opts: &GetObjectOptions,
) -> Result<ObjectInfo, std::io::Error> {
let mut headers = opts.header();
if opts.internal.replication_delete_marker {
headers.insert("X-Source-DeleteMarker", HeaderValue::from_str("true").unwrap());
}
if opts.internal.is_replication_ready_for_delete_marker {
headers.insert("X-Check-Replication-Ready", HeaderValue::from_str("true").unwrap());
}
let resp = self
.execute_method(
http::Method::HEAD,
&mut RequestMetadata {
bucket_name: bucket_name.to_string(),
object_name: object_name.to_string(),
query_values: opts.to_query_values(),
custom_header: headers,
content_sha256_hex: EMPTY_STRING_SHA256_HASH.to_string(),
content_md5_base64: "".to_string(),
content_body: ReaderImpl::Body(Bytes::new()),
content_length: 0,
stream_sha256: false,
trailer: HeaderMap::new(),
pre_sign_url: Default::default(),
add_crc: Default::default(),
extra_pre_sign_header: Default::default(),
bucket_location: Default::default(),
expires: Default::default(),
},
)
.await;
match resp {
Ok(resp) => {
let h = resp.headers();
let delete_marker = if let Some(x_amz_delete_marker) = h.get(X_AMZ_DELETE_MARKER.as_str()) {
x_amz_delete_marker.to_str().unwrap_or_default() == "true"
} else {
false
};
let replication_ready = if let Some(x_replication_ready) = h.get("X-Replication-Ready") {
x_replication_ready.to_str().unwrap_or_default() == "true"
} else {
false
};
if resp.status() != http::StatusCode::OK && resp.status() != http::StatusCode::PARTIAL_CONTENT {
if resp.status() == http::StatusCode::METHOD_NOT_ALLOWED && opts.version_id != "" && delete_marker {
let err_resp = ErrorResponse {
status_code: resp.status(),
code: s3s::S3ErrorCode::MethodNotAllowed,
message: "the specified method is not allowed against this resource.".to_string(),
bucket_name: bucket_name.to_string(),
key: object_name.to_string(),
..Default::default()
};
return Ok(ObjectInfo {
version_id: match Uuid::from_str(h.get(X_AMZ_VERSION_ID).unwrap().to_str().unwrap()) {
Ok(v) => v,
Err(e) => {
return Err(std::io::Error::other(e));
}
},
is_delete_marker: delete_marker,
..Default::default()
});
//err_resp
}
return Ok(ObjectInfo {
version_id: match Uuid::from_str(h.get(X_AMZ_VERSION_ID).unwrap().to_str().unwrap()) {
Ok(v) => v,
Err(e) => {
return Err(std::io::Error::other(e));
}
},
is_delete_marker: delete_marker,
replication_ready,
..Default::default()
});
//http_resp_to_error_response(resp, bucket_name, object_name)
}
Ok(to_object_info(bucket_name, object_name, h).unwrap())
}
Err(err) => {
return Err(std::io::Error::other(err));
}
}
}
}
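The `stat_object` logic above repeatedly checks whether an optional response header is present and equal to `"true"` (delete markers, replication readiness). A self-contained sketch of that check over a plain map (the `header_is_true` helper is illustrative, not part of the crate):

```rust
use std::collections::HashMap;

// A header counts as "set" only when it is present AND its value is exactly "true";
// absence or any other value yields false, matching the stat_object checks above.
fn header_is_true(headers: &HashMap<&str, &str>, key: &str) -> bool {
    headers.get(key).map(|v| *v == "true").unwrap_or(false)
}

fn main() {
    let mut h = HashMap::new();
    h.insert("x-amz-delete-marker", "true");
    assert!(header_is_true(&h, "x-amz-delete-marker"));
    assert!(!header_is_true(&h, "x-replication-ready")); // absent => false
    println!("ok");
}
```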

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -31,7 +30,6 @@ use crate::client::{
transition_api::{Document, TransitionClient},
};
use rustfs_utils::hash::EMPTY_STRING_SHA256_HASH;
use rustfs_utils::hasher::{Hasher, Sha256};
use s3s::Body;
use s3s::S3ErrorCode;
@@ -125,9 +123,11 @@ impl TransitionClient {
url_str = target_url.to_string();
}
let mut req_builder = Request::builder().method(http::Method::GET).uri(url_str);
let Ok(mut req) = Request::builder().method(http::Method::GET).uri(url_str).body(Body::empty()) else {
return Err(std::io::Error::other("create request error"));
};
self.set_user_agent(&mut req_builder);
self.set_user_agent(&mut req);
let value;
{
@@ -154,22 +154,12 @@ impl TransitionClient {
}
if signer_type == SignatureType::SignatureAnonymous {
let req = match req_builder.body(Body::empty()) {
Ok(req) => return Ok(req),
Err(err) => {
return Err(std::io::Error::other(err));
}
};
return Ok(req);
}
if signer_type == SignatureType::SignatureV2 {
let req_builder = rustfs_signer::sign_v2(req_builder, 0, &access_key_id, &secret_access_key, is_virtual_style);
let req = match req_builder.body(Body::empty()) {
Ok(req) => return Ok(req),
Err(err) => {
return Err(std::io::Error::other(err));
}
};
let req = rustfs_signer::sign_v2(req, 0, &access_key_id, &secret_access_key, is_virtual_style);
return Ok(req);
}
let mut content_sha256 = EMPTY_STRING_SHA256_HASH.to_string();
@@ -177,17 +167,10 @@ impl TransitionClient {
content_sha256 = UNSIGNED_PAYLOAD.to_string();
}
req_builder
.headers_mut()
.expect("err")
req.headers_mut()
.insert("X-Amz-Content-Sha256", content_sha256.parse().unwrap());
let req_builder = rustfs_signer::sign_v4(req_builder, 0, &access_key_id, &secret_access_key, &session_token, "us-east-1");
let req = match req_builder.body(Body::empty()) {
Ok(req) => return Ok(req),
Err(err) => {
return Err(std::io::Error::other(err));
}
};
let req = rustfs_signer::sign_v4(req, 0, &access_key_id, &secret_access_key, &session_token, "us-east-1");
Ok(req)
}
}

View File

@@ -16,6 +16,9 @@ pub mod admin_handler_utils;
pub mod api_bucket_policy;
pub mod api_error_response;
pub mod api_get_object;
pub mod api_get_object_acl;
pub mod api_get_object_attributes;
pub mod api_get_object_file;
pub mod api_get_options;
pub mod api_list;
pub mod api_put_object;
@@ -23,7 +26,9 @@ pub mod api_put_object_common;
pub mod api_put_object_multipart;
pub mod api_put_object_streaming;
pub mod api_remove;
pub mod api_restore;
pub mod api_s3_datatypes;
pub mod api_stat;
pub mod bucket_cache;
pub mod constants;
pub mod credentials;

View File

@@ -1,4 +1,3 @@
#![allow(clippy::map_entry)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -28,8 +27,12 @@ use http::{
};
use hyper_rustls::{ConfigBuilderExt, HttpsConnector};
use hyper_util::{client::legacy::Client, client::legacy::connect::HttpConnector, rt::TokioExecutor};
use md5::Digest;
use md5::Md5;
use rand::Rng;
use rustfs_utils::HashAlgorithm;
use serde::{Deserialize, Serialize};
use sha2::Sha256;
use std::io::Cursor;
use std::pin::Pin;
use std::sync::atomic::{AtomicI32, Ordering};
@@ -60,7 +63,6 @@ use crate::client::{
};
use crate::{checksum::ChecksumMode, store_api::GetObjectReader};
use rustfs_rio::HashReader;
use rustfs_utils::hasher::{MD5, Sha256};
use rustfs_utils::{
net::get_endpoint_url,
retry::{MAX_RETRY, new_retry_timer},
@@ -69,7 +71,6 @@ use s3s::S3ErrorCode;
use s3s::dto::ReplicationStatus;
use s3s::{Body, dto::Owner};
const _C_USER_AGENT_PREFIX: &str = "RustFS (linux; x86)";
const C_USER_AGENT: &str = "RustFS (linux; x86)";
const SUCCESS_STATUS: [StatusCode; 3] = [StatusCode::OK, StatusCode::NO_CONTENT, StatusCode::PARTIAL_CONTENT];
@@ -90,22 +91,18 @@ pub struct TransitionClient {
pub endpoint_url: Url,
pub creds_provider: Arc<Mutex<Credentials<Static>>>,
pub override_signer_type: SignatureType,
/*app_info: TODO*/
pub secure: bool,
pub http_client: Client<HttpsConnector<HttpConnector>, Body>,
//pub http_trace: Httptrace.ClientTrace,
pub bucket_loc_cache: Arc<Mutex<BucketLocationCache>>,
pub is_trace_enabled: Arc<Mutex<bool>>,
pub trace_errors_only: Arc<Mutex<bool>>,
//pub trace_output: io.Writer,
pub s3_accelerate_endpoint: Arc<Mutex<String>>,
pub s3_dual_stack_enabled: Arc<Mutex<bool>>,
pub region: String,
pub random: u64,
pub lookup: BucketLookupType,
//pub lookupFn: func(u url.URL, bucketName string) BucketLookupType,
pub md5_hasher: Arc<Mutex<Option<MD5>>>,
pub sha256_hasher: Option<Sha256>,
pub md5_hasher: Arc<Mutex<Option<HashAlgorithm>>>,
pub sha256_hasher: Option<HashAlgorithm>,
pub health_status: AtomicI32,
pub trailing_header_support: bool,
pub max_retries: i64,
@@ -115,15 +112,11 @@ pub struct TransitionClient {
pub struct Options {
pub creds: Credentials<Static>,
pub secure: bool,
//pub transport: http.RoundTripper,
//pub trace: *httptrace.ClientTrace,
pub region: String,
pub bucket_lookup: BucketLookupType,
//pub custom_region_via_url: func(u url.URL) string,
//pub bucket_lookup_via_url: func(u url.URL, bucketName string) BucketLookupType,
pub trailing_headers: bool,
pub custom_md5: Option<MD5>,
pub custom_sha256: Option<Sha256>,
pub custom_md5: Option<HashAlgorithm>,
pub custom_sha256: Option<HashAlgorithm>,
pub max_retries: i64,
}
@@ -145,8 +138,6 @@ impl TransitionClient {
async fn private_new(endpoint: &str, opts: Options) -> Result<TransitionClient, std::io::Error> {
let endpoint_url = get_endpoint_url(endpoint, opts.secure)?;
//let jar = cookiejar.New(cookiejar.Options{PublicSuffixList: publicsuffix.List})?;
//#[cfg(feature = "ring")]
//let _ = rustls::crypto::ring::default_provider().install_default();
//#[cfg(feature = "aws-lc-rs")]
@@ -154,9 +145,6 @@ impl TransitionClient {
let scheme = endpoint_url.scheme();
let client;
//if scheme == "https" {
// client = Client::builder(TokioExecutor::new()).build_http();
//} else {
let tls = rustls::ClientConfig::builder().with_native_roots()?.with_no_client_auth();
let https = hyper_rustls::HttpsConnectorBuilder::new()
.with_tls_config(tls)
@@ -164,7 +152,6 @@ impl TransitionClient {
.enable_http1()
.build();
client = Client::builder(TokioExecutor::new()).build(https);
//}
let mut clnt = TransitionClient {
endpoint_url,
@@ -190,11 +177,11 @@ impl TransitionClient {
{
let mut md5_hasher = clnt.md5_hasher.lock().unwrap();
if md5_hasher.is_none() {
*md5_hasher = Some(MD5::new());
*md5_hasher = Some(HashAlgorithm::Md5);
}
}
if clnt.sha256_hasher.is_none() {
clnt.sha256_hasher = Some(Sha256::new());
clnt.sha256_hasher = Some(HashAlgorithm::SHA256);
}
clnt.trailing_header_support = opts.trailing_headers && clnt.override_signer_type == SignatureType::SignatureV4;
@@ -210,13 +197,6 @@ impl TransitionClient {
self.endpoint_url.clone()
}
fn set_appinfo(&self, app_name: &str, app_version: &str) {
/*if app_name != "" && app_version != "" {
self.appInfo.app_name = app_name
self.appInfo.app_version = app_version
}*/
}
fn trace_errors_only_off(&self) {
let mut trace_errors_only = self.trace_errors_only.lock().unwrap();
*trace_errors_only = false;
@@ -241,8 +221,8 @@ impl TransitionClient {
&self,
is_md5_requested: bool,
is_sha256_requested: bool,
) -> (HashMap<String, MD5>, HashMap<String, Vec<u8>>) {
todo!();
) -> (HashMap<String, HashAlgorithm>, HashMap<String, Vec<u8>>) {
todo!()
}
fn is_online(&self) -> bool {
@@ -265,6 +245,7 @@ impl TransitionClient {
fn dump_http(&self, req: &http::Request<Body>, resp: &http::Response<Body>) -> Result<(), std::io::Error> {
let mut resp_trace: Vec<u8>;
//info!("{}{}", self.trace_output, "---------BEGIN-HTTP---------");
//info!("{}{}", self.trace_output, "---------END-HTTP---------");
Ok(())
@@ -335,7 +316,7 @@ impl TransitionClient {
//let mut retry_timer = RetryTimer::new();
//while let Some(v) = retry_timer.next().await {
for _ in [1; 1]
/*new_retry_timer(req_retry, DefaultRetryUnit, DefaultRetryCap, MaxJitter)*/
/*new_retry_timer(req_retry, default_retry_unit, default_retry_cap, max_jitter)*/
{
let req = self.new_request(method, metadata).await?;
@@ -406,7 +387,13 @@ impl TransitionClient {
&metadata.query_values,
)?;
let mut req_builder = Request::builder().method(method).uri(target_url.to_string());
let Ok(mut req) = Request::builder()
.method(method)
.uri(target_url.to_string())
.body(Body::empty())
else {
return Err(std::io::Error::other("create request error"));
};
let value;
{
@@ -430,30 +417,25 @@ impl TransitionClient {
if metadata.expires != 0 && metadata.pre_sign_url {
if signer_type == SignatureType::SignatureAnonymous {
return Err(std::io::Error::other(err_invalid_argument(
"Presigned URLs cannot be generated with anonymous credentials.",
"presigned urls cannot be generated with anonymous credentials.",
)));
}
if metadata.extra_pre_sign_header.is_some() {
if signer_type == SignatureType::SignatureV2 {
return Err(std::io::Error::other(err_invalid_argument(
-"Extra signed headers for Presign with Signature V2 is not supported.",
+"extra signed headers for presign with signature v2 is not supported.",
)));
}
+let headers = req.headers_mut();
for (k, v) in metadata.extra_pre_sign_header.as_ref().unwrap() {
-    req_builder = req_builder.header(k, v);
+    headers.insert(k, v.clone());
}
}
if signer_type == SignatureType::SignatureV2 {
-req_builder = rustfs_signer::pre_sign_v2(
-    req_builder,
-    &access_key_id,
-    &secret_access_key,
-    metadata.expires,
-    is_virtual_host,
-);
+req = rustfs_signer::pre_sign_v2(req, &access_key_id, &secret_access_key, metadata.expires, is_virtual_host);
} else if signer_type == SignatureType::SignatureV4 {
-req_builder = rustfs_signer::pre_sign_v4(
-    req_builder,
+req = rustfs_signer::pre_sign_v4(
+    req,
&access_key_id,
&secret_access_key,
&session_token,
@@ -462,57 +444,38 @@ impl TransitionClient {
OffsetDateTime::now_utc(),
);
}
-let req = match req_builder.body(Body::empty()) {
-    Ok(req) => req,
-    Err(err) => {
-        return Err(std::io::Error::other(err));
-    }
-};
return Ok(req);
}
-self.set_user_agent(&mut req_builder);
+self.set_user_agent(&mut req);
for (k, v) in metadata.custom_header.clone() {
-req_builder.headers_mut().expect("err").insert(k.expect("err"), v);
+req.headers_mut().insert(k.expect("err"), v);
}
//req.content_length = metadata.content_length;
if metadata.content_length <= -1 {
let chunked_value = HeaderValue::from_str(&vec!["chunked"].join(",")).expect("err");
-req_builder
-    .headers_mut()
-    .expect("err")
-    .insert(http::header::TRANSFER_ENCODING, chunked_value);
+req.headers_mut().insert(http::header::TRANSFER_ENCODING, chunked_value);
}
if metadata.content_md5_base64.len() > 0 {
let md5_value = HeaderValue::from_str(&metadata.content_md5_base64).expect("err");
-req_builder.headers_mut().expect("err").insert("Content-Md5", md5_value);
+req.headers_mut().insert("Content-Md5", md5_value);
}
if signer_type == SignatureType::SignatureAnonymous {
-let req = match req_builder.body(Body::empty()) {
-    Ok(req) => req,
-    Err(err) => {
-        return Err(std::io::Error::other(err));
-    }
-};
return Ok(req);
}
if signer_type == SignatureType::SignatureV2 {
-req_builder =
-    rustfs_signer::sign_v2(req_builder, metadata.content_length, &access_key_id, &secret_access_key, is_virtual_host);
+req = rustfs_signer::sign_v2(req, metadata.content_length, &access_key_id, &secret_access_key, is_virtual_host);
} else if metadata.stream_sha256 && !self.secure {
if metadata.trailer.len() > 0 {
//req.Trailer = metadata.trailer;
for (_, v) in &metadata.trailer {
-req_builder = req_builder.header(http::header::TRAILER, v.clone());
+req.headers_mut().insert(http::header::TRAILER, v.clone());
}
}
//req_builder = rustfs_signer::streaming_sign_v4(req_builder, &access_key_id,
// &secret_access_key, &session_token, &location, metadata.content_length, OffsetDateTime::now_utc(), self.sha256_hasher());
} else {
let mut sha_header = UNSIGNED_PAYLOAD.to_string();
if metadata.content_sha256_hex != "" {
@@ -523,11 +486,11 @@ impl TransitionClient {
} else if metadata.trailer.len() > 0 {
sha_header = UNSIGNED_PAYLOAD_TRAILER.to_string();
}
-req_builder = req_builder
-    .header::<HeaderName, HeaderValue>("X-Amz-Content-Sha256".parse().unwrap(), sha_header.parse().expect("err"));
+req.headers_mut()
+    .insert("X-Amz-Content-Sha256".parse::<HeaderName>().unwrap(), sha_header.parse().expect("err"));
-req_builder = rustfs_signer::sign_v4_trailer(
-    req_builder,
+req = rustfs_signer::sign_v4_trailer(
+    req,
&access_key_id,
&secret_access_key,
&session_token,
@@ -536,33 +499,23 @@ impl TransitionClient {
);
}
-let req;
-if metadata.content_length == 0 {
-    req = req_builder.body(Body::empty());
-} else {
+if metadata.content_length > 0 {
match &mut metadata.content_body {
ReaderImpl::Body(content_body) => {
-    req = req_builder.body(Body::from(content_body.clone()));
+    *req.body_mut() = Body::from(content_body.clone());
}
ReaderImpl::ObjectBody(content_body) => {
-    req = req_builder.body(Body::from(content_body.read_all().await?));
+    *req.body_mut() = Body::from(content_body.read_all().await?);
}
}
//req = req_builder.body(s3s::Body::from(metadata.content_body.read_all().await?));
}
-match req {
-    Ok(req) => Ok(req),
-    Err(err) => Err(std::io::Error::other(err)),
-}
+Ok(req)
}
-pub fn set_user_agent(&self, req: &mut Builder) {
-    let headers = req.headers_mut().expect("err");
+pub fn set_user_agent(&self, req: &mut Request<Body>) {
+    let headers = req.headers_mut();
headers.insert("User-Agent", C_USER_AGENT.parse().expect("err"));
/*if self.app_info.app_name != "" && self.app_info.app_version != "" {
headers.insert("User-Agent", C_USER_AGENT+" "+self.app_info.app_name+"/"+self.app_info.app_version);
}*/
}
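The refactor in this hunk replaces threading an `http::request::Builder` through every helper with in-place mutation of an already-built `Request<Body>`, which is why all the fallible `headers_mut().expect("err")` calls disappear. A std-only sketch of the same pattern; the `Request` type below is a hypothetical stand-in, not the `http` crate's:

```rust
use std::collections::HashMap;

// Hypothetical stand-in for http::Request: once the request is built, its
// headers are plain data that can be mutated in place, with nothing to unwrap.
struct Request {
    headers: HashMap<String, String>,
    body: Vec<u8>,
}

impl Request {
    fn new() -> Self {
        Request { headers: HashMap::new(), body: Vec::new() }
    }
    // Mirrors the shape of http::Request::headers_mut(), which returns
    // &mut HeaderMap directly (unlike Builder::headers_mut(), which is Option).
    fn headers_mut(&mut self) -> &mut HashMap<String, String> {
        &mut self.headers
    }
}

// Same shape as the diff's set_user_agent: mutate, don't rebuild.
fn set_user_agent(req: &mut Request) {
    req.headers_mut()
        .insert("User-Agent".to_string(), "RustFS (sketch)".to_string());
}
```

The design win is that each signing helper can take and return the same `Request` value instead of a builder, so a half-built request can never escape.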
fn make_target_url(
@@ -945,7 +898,7 @@ pub struct ObjectMultipartInfo {
pub key: String,
pub size: i64,
pub upload_id: String,
-//pub err error,
+//pub err: Error,
}
pub struct UploadInfo {


@@ -178,6 +178,16 @@ pub async fn remove_bucket_target(bucket: &str, arn_str: &str) {
}
}
pub async fn list_bucket_targets(bucket: &str) -> Result<BucketTargets, BucketRemoteTargetNotFound> {
if let Some(sys) = GLOBAL_Bucket_Target_Sys.get() {
sys.list_bucket_targets(bucket).await
} else {
Err(BucketRemoteTargetNotFound {
bucket: bucket.to_string(),
})
}
}
impl Default for BucketTargetSys {
fn default() -> Self {
Self::new()


@@ -145,8 +145,8 @@ impl Debug for LocalDisk {
impl LocalDisk {
pub async fn new(ep: &Endpoint, cleanup: bool) -> Result<Self> {
debug!("Creating local disk");
-let root = match fs::canonicalize(ep.get_file_path()).await {
-    Ok(path) => path,
+let root = match PathBuf::from(ep.get_file_path()).absolutize() {
+    Ok(path) => path.into_owned(),
Err(e) => {
if e.kind() == ErrorKind::NotFound {
return Err(DiskError::VolumeNotFound);
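The swap from `fs::canonicalize` to `absolutize` matters because `canonicalize` resolves symlinks by hitting the filesystem and fails with `NotFound` when the path does not exist yet, while absolutizing is purely lexical and works for a disk root that has not been created. The diff uses the `path-absolutize` crate; `absolutize_lexical` below is a hypothetical std-only stand-in for illustration:

```rust
use std::env;
use std::path::{Path, PathBuf};

// Purely lexical: prepend the current directory to relative paths without
// touching the filesystem, so nonexistent disk roots still resolve.
fn absolutize_lexical(p: &Path) -> std::io::Result<PathBuf> {
    if p.is_absolute() {
        Ok(p.to_path_buf())
    } else {
        Ok(env::current_dir()?.join(p))
    }
}
```

Calling `std::fs::canonicalize` on a path that does not exist returns an error, whereas the lexical version still yields an absolute path, which is exactly the behavior the fixed `LocalDisk::new` needs.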


@@ -28,7 +28,7 @@
//! ## Example
//!
//! ```rust
-//! use ecstore::erasure_coding::Erasure;
+//! use rustfs_ecstore::erasure_coding::Erasure;
//!
//! let erasure = Erasure::new(4, 2, 1024); // 4 data shards, 2 parity shards, 1KB block size
//! let data = b"hello world";
@@ -263,7 +263,7 @@ impl ReedSolomonEncoder {
///
/// # Example
/// ```
-/// use ecstore::erasure_coding::Erasure;
+/// use rustfs_ecstore::erasure_coding::Erasure;
/// let erasure = Erasure::new(4, 2, 8);
/// let data = b"hello world";
/// let shards = erasure.encode_data(data).unwrap();


@@ -62,7 +62,9 @@ static ref globalDeploymentIDPtr: OnceLock<Uuid> = OnceLock::new();
pub static ref GLOBAL_BOOT_TIME: OnceCell<SystemTime> = OnceCell::new();
pub static ref GLOBAL_LocalNodeName: String = "127.0.0.1:9000".to_string();
pub static ref GLOBAL_LocalNodeNameHex: String = rustfs_utils::crypto::hex(GLOBAL_LocalNodeName.as_bytes());
-pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();}
+pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();
+pub static ref GLOBAL_REGION: OnceLock<String> = OnceLock::new();
+}
static GLOBAL_ACTIVE_CRED: OnceLock<Credentials> = OnceLock::new();
@@ -182,3 +184,11 @@ pub async fn update_erasure_type(setup_type: SetupType) {
// }
type TypeLocalDiskSetDrives = Vec<Vec<Vec<Option<DiskStore>>>>;
pub fn set_global_region(region: String) {
GLOBAL_REGION.set(region).unwrap();
}
pub fn get_global_region() -> Option<String> {
GLOBAL_REGION.get().cloned()
}
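The region globals added by the Feat/region change follow the standard `OnceLock` pattern: a write-once cell set during startup configuration and read everywhere else. A minimal self-contained sketch; note the diff's `set_global_region` unwraps the `Result`, so a second set would panic there, whereas this sketch discards it:

```rust
use std::sync::OnceLock;

static GLOBAL_REGION: OnceLock<String> = OnceLock::new();

// First write wins. The diff calls .unwrap() on set(), turning a double-set
// into a panic; here the error is discarded to keep the sketch total.
pub fn set_global_region(region: String) {
    let _ = GLOBAL_REGION.set(region);
}

pub fn get_global_region() -> Option<String> {
    GLOBAL_REGION.get().cloned()
}
```

`OnceLock` gives thread-safe lazy initialization without a `Mutex`, which suits configuration that is set once at boot and then read on every request path.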


@@ -20,17 +20,18 @@ use std::{
path::{Path, PathBuf},
pin::Pin,
sync::{
-Arc,
+Arc, OnceLock,
atomic::{AtomicBool, AtomicU32, AtomicU64, Ordering},
},
time::{Duration, SystemTime},
};
use time::{self, OffsetDateTime};
use tokio_util::sync::CancellationToken;
use super::{
data_scanner_metric::{ScannerMetric, ScannerMetrics, globalScannerMetrics},
-data_usage::{DATA_USAGE_BLOOM_NAME_PATH, store_data_usage_in_backend},
+data_usage::{DATA_USAGE_BLOOM_NAME_PATH, DataUsageInfo, store_data_usage_in_backend},
data_usage_cache::{DataUsageCache, DataUsageEntry, DataUsageHash},
heal_commands::{HEAL_DEEP_SCAN, HEAL_NORMAL_SCAN, HealScanMode},
};
@@ -103,7 +104,7 @@ use tokio::{
},
time::sleep,
};
-use tracing::{error, info};
+use tracing::{debug, error, info};
const DATA_SCANNER_SLEEP_PER_FOLDER: Duration = Duration::from_millis(1); // Time to wait between folders.
const DATA_USAGE_UPDATE_DIR_CYCLES: u32 = 16; // Visit all folders every n cycles.
@@ -127,6 +128,8 @@ lazy_static! {
pub static ref globalHealConfig: Arc<RwLock<Config>> = Arc::new(RwLock::new(Config::default()));
}
static GLOBAL_SCANNER_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();
struct DynamicSleeper {
factor: f64,
max_sleep: Duration,
@@ -195,36 +198,66 @@ fn new_dynamic_sleeper(factor: f64, max_wait: Duration, is_scanner: bool) -> Dyn
/// - Minimum sleep duration to avoid excessive CPU usage
/// - Proper error handling and logging
///
/// # Returns
/// A CancellationToken that can be used to gracefully shutdown the scanner
///
/// # Architecture
/// 1. Initialize with random seed for sleep intervals
/// 2. Run scanner cycles in a loop
/// 3. Use randomized sleep between cycles to avoid thundering herd
/// 4. Ensure minimum sleep duration to prevent CPU thrashing
-pub async fn init_data_scanner() {
+pub async fn init_data_scanner() -> CancellationToken {
info!("Initializing data scanner background task");
let cancel_token = CancellationToken::new();
GLOBAL_SCANNER_CANCEL_TOKEN
.set(cancel_token.clone())
.expect("Scanner already initialized");
let cancel_clone = cancel_token.clone();
tokio::spawn(async move {
info!("Data scanner background task started");
loop {
-    // Run the data scanner
-    run_data_scanner().await;
+    tokio::select! {
+        _ = cancel_clone.cancelled() => {
+            info!("Data scanner received shutdown signal, exiting gracefully");
+            break;
+        }
+        _ = run_data_scanner_cycle() => {
// Calculate randomized sleep duration
let random_factor = {
    let mut rng = rand::rng();
    rng.random_range(1.0..10.0)
};
let base_cycle_duration = SCANNER_CYCLE.load(Ordering::SeqCst) as f64;
let sleep_duration_secs = random_factor * base_cycle_duration;
let sleep_duration = Duration::from_secs_f64(sleep_duration_secs);
debug!(
    duration_secs = sleep_duration.as_secs(),
    "Data scanner sleeping before next cycle"
);
// Interruptible sleep
tokio::select! {
_ = cancel_clone.cancelled() => {
info!("Data scanner received shutdown signal during sleep, exiting");
break;
}
_ = sleep(sleep_duration) => {
// Continue to next cycle
}
}
}
}
}
info!("Data scanner background task stopped gracefully");
});
cancel_token
}
/// Run a single data scanner cycle
@@ -239,8 +272,8 @@ pub async fn init_data_scanner() {
/// - Gracefully handles missing object layer
/// - Continues operation even if individual steps fail
/// - Logs errors appropriately without terminating the scanner
-async fn run_data_scanner() {
-    info!("Starting data scanner cycle");
+async fn run_data_scanner_cycle() {
+    debug!("Starting data scanner cycle");
// Get the object layer, return early if not available
let Some(store) = new_object_layer_fn() else {
@@ -248,6 +281,14 @@ async fn run_data_scanner() {
return;
};
// Check for cancellation before starting expensive operations
if let Some(token) = GLOBAL_SCANNER_CANCEL_TOKEN.get() {
if token.is_cancelled() {
debug!("Scanner cancelled before starting cycle");
return;
}
}
// Load current cycle information from persistent storage
let buf = read_config(store.clone(), &DATA_USAGE_BLOOM_NAME_PATH)
.await
@@ -293,7 +334,7 @@ async fn run_data_scanner() {
}
// Set up data usage storage channel
-let (tx, rx) = mpsc::channel(100);
+let (tx, rx) = mpsc::channel::<DataUsageInfo>(100);
tokio::spawn(async move {
let _ = store_data_usage_in_backend(rx).await;
});
@@ -308,8 +349,8 @@ async fn run_data_scanner() {
"Starting namespace scanner"
);
-// Run the namespace scanner
-match store.clone().ns_scanner(tx, cycle_info.current as usize, scan_mode).await {
+// Run the namespace scanner with cancellation support
+match execute_namespace_scan(&store, tx, cycle_info.current, scan_mode).await {
Ok(_) => {
info!(cycle = cycle_info.current, "Namespace scanner completed successfully");
@@ -349,6 +390,28 @@ async fn run_data_scanner() {
stop_fn(&scan_result);
}
/// Execute namespace scan with cancellation support
async fn execute_namespace_scan(
store: &Arc<ECStore>,
tx: Sender<DataUsageInfo>,
cycle: u64,
scan_mode: HealScanMode,
) -> Result<()> {
let cancel_token = GLOBAL_SCANNER_CANCEL_TOKEN
.get()
.ok_or_else(|| Error::other("Scanner not initialized"))?;
tokio::select! {
result = store.ns_scanner(tx, cycle as usize, scan_mode) => {
result.map_err(|e| Error::other(format!("Namespace scan failed: {e}")))
}
_ = cancel_token.cancelled() => {
info!("Namespace scan cancelled");
Err(Error::other("Scan cancelled"))
}
}
}
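The scanner's shutdown story above hinges on one pattern: every long wait races against the cancellation token, whether it is the cycle itself, the between-cycle sleep, or `execute_namespace_scan`. The same idea can be sketched with std primitives only, using an `AtomicBool` as a stand-in for `tokio_util::sync::CancellationToken`, polled between short sleep slices:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

/// Sleep for `total`, waking every `slice` to check the cancellation flag.
/// Returns true if the full duration elapsed, false if cancelled early --
/// a blocking analogue of `tokio::select!` over `cancelled()` and `sleep()`.
fn interruptible_sleep(cancel: &AtomicBool, total: Duration, slice: Duration) -> bool {
    let mut remaining = total;
    while remaining > Duration::ZERO {
        if cancel.load(Ordering::SeqCst) {
            return false;
        }
        let step = remaining.min(slice);
        thread::sleep(step);
        remaining -= step;
    }
    !cancel.load(Ordering::SeqCst)
}
```

Calling this with the flag already set returns immediately, which mirrors the early `is_cancelled()` check `run_data_scanner_cycle` performs before starting expensive work.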
#[derive(Debug, Serialize, Deserialize)]
struct BackgroundHealInfo {
bitrot_start_time: SystemTime,
@@ -404,7 +467,7 @@ async fn get_cycle_scan_mode(current_cycle: u64, bitrot_start_cycle: u64, bitrot
return HEAL_DEEP_SCAN;
}
-if bitrot_start_time.duration_since(SystemTime::now()).unwrap() > bitrot_cycle {
+if SystemTime::now().duration_since(bitrot_start_time).unwrap_or_default() > bitrot_cycle {
return HEAL_DEEP_SCAN;
}
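The one-line fix in `get_cycle_scan_mode` is worth spelling out: `a.duration_since(b)` returns `Err` whenever `a` is earlier than `b`, so the original `bitrot_start_time.duration_since(SystemTime::now())` erred (and its `.unwrap()` panicked) for any start time in the past, i.e. almost always. A self-contained check of the corrected ordering:

```rust
use std::time::{Duration, SystemTime};

// Corrected direction from the diff: elapsed time since the bitrot scan
// started, with unwrap_or_default() absorbing clock skew instead of panicking.
fn is_deep_scan_due(bitrot_start_time: SystemTime, bitrot_cycle: Duration) -> bool {
    SystemTime::now()
        .duration_since(bitrot_start_time)
        .unwrap_or_default()
        > bitrot_cycle
}
```

A start time in the future (e.g. after a clock step) now just means "not due yet" instead of a panic.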
@@ -741,13 +804,18 @@ impl ScannerItem {
// Create a mutable clone if you need to modify fields
let mut oi = oi.clone();
-oi.replication_status = ReplicationStatusType::from(
-    oi.user_defined
-        .get("x-amz-bucket-replication-status")
-        .unwrap_or(&"PENDING".to_string()),
-);
-info!("apply status is: {:?}", oi.replication_status);
-self.heal_replication(&oi, _size_s).await;
+let versioned = BucketVersioningSys::prefix_enabled(&oi.bucket, &oi.name).await;
+if versioned {
+    oi.replication_status = ReplicationStatusType::from(
+        oi.user_defined
+            .get("x-amz-bucket-replication-status")
+            .unwrap_or(&"PENDING".to_string()),
+    );
+    debug!("apply status is: {:?}", oi.replication_status);
+    self.heal_replication(&oi, _size_s).await;
+}
done();
if action.delete_all() {


@@ -4099,6 +4099,8 @@ impl ObjectIO for SetDisks {
}
}
drop(writers); // drop writers to close all files, this is to prevent FileAccessDenied errors when renaming data
let (online_disks, _, op_old_dir) = Self::rename_data(
&shuffle_disks,
RUSTFS_META_TMP_BUCKET,
@@ -5039,6 +5041,8 @@ impl StorageAPI for SetDisks {
let fi_buff = fi.marshal_msg()?;
drop(writers); // drop writers to close all files
let part_path = format!("{}/{}/{}", upload_id_path, fi.data_dir.unwrap_or_default(), part_suffix);
let _ = Self::rename_part(
&disks,


@@ -1372,7 +1372,8 @@ impl StorageAPI for ECStore {
}
if let Err(err) = self.peer_sys.make_bucket(bucket, opts).await {
-if !is_err_bucket_exists(&err.into()) {
+let err = err.into();
+if !is_err_bucket_exists(&err) {
let _ = self
.delete_bucket(
bucket,
@@ -1384,6 +1385,8 @@ impl StorageAPI for ECStore {
)
.await;
}
return Err(err);
};
let mut meta = BucketMetadata::new(bucket);
@@ -2505,14 +2508,14 @@ fn check_object_name_for_length_and_slash(bucket: &str, object: &str) -> Result<
#[cfg(target_os = "windows")]
{
-if object.contains('\\')
-    || object.contains(':')
+if object.contains(':')
|| object.contains('*')
|| object.contains('?')
|| object.contains('"')
|| object.contains('|')
|| object.contains('<')
|| object.contains('>')
+// || object.contains('\\')
{
return Err(StorageError::ObjectNameInvalid(bucket.to_owned(), object.to_owned()));
}
@@ -2546,9 +2549,9 @@ fn check_bucket_and_object_names(bucket: &str, object: &str) -> Result<()> {
return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
}
-if cfg!(target_os = "windows") && object.contains('\\') {
-    return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
-}
+// if cfg!(target_os = "windows") && object.contains('\\') {
+//     return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
+// }
Ok(())
}
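The Windows startup fix relaxes exactly one character: `\` is the native path separator on Windows, so rejecting it invalidated object keys that arrive through Windows paths, while the other reserved characters remain forbidden. The retained predicate, as a standalone sketch:

```rust
// Characters still rejected in object names on Windows after the fix;
// backslash is deliberately absent because it is the Windows path separator.
fn has_windows_reserved_char(object: &str) -> bool {
    object
        .chars()
        .any(|c| matches!(c, ':' | '*' | '?' | '"' | '|' | '<' | '>'))
}
```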


@@ -1,4 +1,3 @@
-#![allow(unused_imports)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
@@ -12,6 +11,7 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
+#![allow(unused_imports)]
#![allow(unused_variables)]
#![allow(unused_mut)]
#![allow(unused_assignments)]


@@ -95,7 +95,6 @@ impl WarmBackendS3 {
..Default::default()
};
let client = TransitionClient::new(&u.host().expect("err").to_string(), opts).await?;
-//client.set_appinfo(format!("s3-tier-{}", tier), ReleaseTag);
let client = Arc::new(client);
let core = TransitionCore(Arc::clone(&client));


@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "File metadata management for RustFS, providing efficient storage and retrieval of file metadata in a distributed system."
keywords = ["file-metadata", "storage", "retrieval", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "filesystem"]
documentation = "https://docs.rs/rustfs-filemeta/latest/rustfs_filemeta/"
[dependencies]
crc32fast = { workspace = true }


@@ -1,238 +1,37 @@
-# RustFS FileMeta
+[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
-A high-performance Rust implementation of xl-storage-format-v2, providing complete compatibility with S3-compatible metadata format while offering enhanced performance and safety.
+# RustFS FileMeta - File Metadata Management
-## Overview
+<p align="center">
+  <strong>Advanced file metadata management and indexing module for RustFS distributed object storage</strong>
+</p>
-This crate implements the XL (Erasure Coded) metadata format used for distributed object storage. It provides:
+<p align="center">
+  <a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
+  <a href="https://docs.rustfs.com/en/">📖 Documentation</a>
+  · <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
+  · <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
+</p>
-- **Full S3 Compatibility**: 100% compatible with xl.meta file format
-- **High Performance**: Optimized for speed with sub-microsecond parsing times
-- **Memory Safety**: Written in safe Rust with comprehensive error handling
-- **Comprehensive Testing**: Extensive test suite with real metadata validation
-- **Cross-Platform**: Supports multiple CPU architectures (x86_64, aarch64)
+---
-## Features
+## 📖 Overview
-### Core Functionality
-- ✅ XL v2 file format parsing and serialization
-- ✅ MessagePack-based metadata encoding/decoding
-- ✅ Version management with modification time sorting
-- ✅ Erasure coding information storage
-- ✅ Inline data support for small objects
-- ✅ CRC32 integrity verification using xxHash64
-- ✅ Delete marker handling
-- ✅ Legacy version support
+**RustFS FileMeta** provides advanced file metadata management and indexing capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
-### Advanced Features
-- ✅ Signature calculation for version integrity
-- ✅ Metadata validation and compatibility checking
-- ✅ Version statistics and analytics
-- ✅ Async I/O support with tokio
-- ✅ Comprehensive error handling
-- ✅ Performance benchmarking
+## Features
-## Performance
+- High-performance metadata storage and retrieval
+- Advanced indexing with full-text search capabilities
+- File attribute management and custom metadata
+- Version tracking and history management
+- Distributed metadata replication
+- Real-time metadata synchronization
-Based on our benchmarks:
+## 📚 Documentation
-| Operation | Time | Description |
-|-----------|------|-------------|
-| Parse Real xl.meta | ~255 ns | Parse authentic xl metadata |
-| Parse Complex xl.meta | ~1.1 µs | Parse multi-version metadata |
-| Serialize Real xl.meta | ~659 ns | Serialize to xl format |
-| Round-trip Real xl.meta | ~1.3 µs | Parse + serialize cycle |
-| Version Statistics | ~5.2 ns | Calculate version stats |
-| Integrity Validation | ~7.8 ns | Validate metadata integrity |
+For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
-## Usage
+## 📄 License
-### Basic Usage
-```rust
-use rustfs_filemeta::file_meta::FileMeta;
-// Load metadata from bytes
-let metadata = FileMeta::load(&xl_meta_bytes)?;
-// Access version information
-for version in &metadata.versions {
-    println!("Version ID: {:?}", version.header.version_id);
-    println!("Mod Time: {:?}", version.header.mod_time);
-}
-// Serialize back to bytes
-let serialized = metadata.marshal_msg()?;
-```
-### Advanced Usage
-```rust
-use rustfs_filemeta::file_meta::FileMeta;
-// Load with validation
-let mut metadata = FileMeta::load(&xl_meta_bytes)?;
-// Validate integrity
-metadata.validate_integrity()?;
-// Check xl format compatibility
-if metadata.is_compatible_with_meta() {
-    println!("Compatible with xl format");
-}
-// Get version statistics
-let stats = metadata.get_version_stats();
-println!("Total versions: {}", stats.total_versions);
-println!("Object versions: {}", stats.object_versions);
-println!("Delete markers: {}", stats.delete_markers);
-```
-### Working with FileInfo
-```rust
-use rustfs_filemeta::fileinfo::FileInfo;
-use rustfs_filemeta::file_meta::FileMetaVersion;
-// Convert FileInfo to metadata version
-let file_info = FileInfo::new("bucket", "object.txt");
-let meta_version = FileMetaVersion::from(file_info);
-// Add version to metadata
-metadata.add_version(file_info)?;
-```
-## Data Structures
-### FileMeta
-The main metadata container that holds all versions and inline data:
-```rust
-pub struct FileMeta {
-    pub versions: Vec<FileMetaShallowVersion>,
-    pub data: InlineData,
-    pub meta_ver: u8,
-}
-```
-### FileMetaVersion
-Represents a single object version:
-```rust
-pub struct FileMetaVersion {
-    pub version_type: VersionType,
-    pub object: Option<MetaObject>,
-    pub delete_marker: Option<MetaDeleteMarker>,
-    pub write_version: u64,
-}
-```
-### MetaObject
-Contains object-specific metadata including erasure coding information:
-```rust
-pub struct MetaObject {
-    pub version_id: Option<Uuid>,
-    pub data_dir: Option<Uuid>,
-    pub erasure_algorithm: ErasureAlgo,
-    pub erasure_m: usize,
-    pub erasure_n: usize,
-    // ... additional fields
-}
-```
-## File Format Compatibility
-This implementation is fully compatible with xl-storage-format-v2:
-- **Header Format**: XL2 v1 format with proper version checking
-- **Serialization**: MessagePack encoding identical to standard format
-- **Checksums**: xxHash64-based CRC validation
-- **Version Types**: Support for Object, Delete, and Legacy versions
-- **Inline Data**: Compatible inline data storage for small objects
-## Testing
-The crate includes comprehensive tests with real xl metadata:
-```bash
-# Run all tests
-cargo test
-# Run benchmarks
-cargo bench
-# Run with coverage
-cargo test --features coverage
-```
-### Test Coverage
-- ✅ Real xl.meta file compatibility
-- ✅ Complex multi-version scenarios
-- ✅ Error handling and recovery
-- ✅ Inline data processing
-- ✅ Signature calculation
-- ✅ Round-trip serialization
-- ✅ Performance benchmarks
-- ✅ Edge cases and boundary conditions
-## Architecture
-The crate follows a modular design:
-```
-src/
-├── file_meta.rs        # Core metadata structures and logic
-├── file_meta_inline.rs # Inline data handling
-├── fileinfo.rs         # File information structures
-├── test_data.rs        # Test data generation
-└── lib.rs              # Public API exports
-```
-## Error Handling
-Comprehensive error handling with detailed error messages:
-```rust
-use rustfs_filemeta::error::Error;
-match FileMeta::load(&invalid_data) {
-    Ok(metadata) => { /* process metadata */ },
-    Err(Error::InvalidFormat(msg)) => {
-        eprintln!("Invalid format: {}", msg);
-    },
-    Err(Error::CorruptedData(msg)) => {
-        eprintln!("Corrupted data: {}", msg);
-    },
-    Err(e) => {
-        eprintln!("Other error: {}", e);
-    }
-}
-```
-## Dependencies
-- `rmp` - MessagePack serialization
-- `uuid` - UUID handling
-- `time` - Date/time operations
-- `xxhash-rust` - Fast hashing
-- `tokio` - Async runtime (optional)
-- `criterion` - Benchmarking (dev dependency)
-## Contributing
-1. Fork the repository
-2. Create a feature branch
-3. Add tests for new functionality
-4. Ensure all tests pass
-5. Submit a pull request
-## License
-This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
-## Acknowledgments
-- Original xl-storage-format-v2 implementation contributors
-- Rust community for excellent crates and tooling
-- Contributors and testers who helped improve this implementation
+This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.


@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Identity and Access Management (IAM) for RustFS, providing user management, roles, and permissions."
keywords = ["iam", "identity", "access-management", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "authentication"]
documentation = "https://docs.rs/rustfs-iam/latest/rustfs_iam/"
[lints]
workspace = true

crates/iam/README.md (new file, 37 lines)

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS IAM - Identity & Access Management
<p align="center">
<strong>Identity and access management system for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS IAM** provides identity and access management capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- User and group management with RBAC
- Service account and API key authentication
- Policy engine with fine-grained permissions
- LDAP/Active Directory integration
- Multi-factor authentication support
- Session management and token validation
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.


@@ -19,6 +19,11 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Distributed locking mechanism for RustFS, providing synchronization and coordination across distributed systems."
keywords = ["locking", "asynchronous", "distributed", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "asynchronous"]
documentation = "https://docs.rs/rustfs-lock/latest/rustfs_lock/"
[lints]
workspace = true

crates/lock/README.md (new file, 37 lines)

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Lock - Distributed Locking
<p align="center">
<strong>High-performance distributed locking system for RustFS object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Lock** provides distributed locking capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Distributed lock management across cluster nodes
- Read-write lock support with concurrent readers
- Lock timeout and automatic lease renewal
- Deadlock detection and prevention
- High-availability with leader election
- Performance-optimized locking algorithms
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.


@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Management and administration tools for RustFS, providing a web interface and API for system management."
keywords = ["management", "administration", "web-interface", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "config"]
documentation = "https://docs.rs/rustfs-madmin/latest/rustfs_madmin/"
[lints]
workspace = true

crates/madmin/README.md (new file, 37 lines)

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS MadAdmin - Administrative Interface
<p align="center">
<strong>Advanced administrative interface and management tools for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS MadAdmin** provides advanced administrative interface and management tools for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Comprehensive cluster management and monitoring
- Real-time performance metrics and analytics
- Automated backup and disaster recovery tools
- User and permission management interface
- System health monitoring and alerting
- Configuration management and deployment tools
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.


@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "File system notification service for RustFS, providing real-time updates on file changes and events."
keywords = ["file-system", "notification", "real-time", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "filesystem"]
documentation = "https://docs.rs/rustfs-notify/latest/rustfs_notify/"
[dependencies]
rustfs-config = { workspace = true, features = ["notify"] }

crates/notify/README.md (new file, 37 lines)

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Notify - Event Notification System
<p align="center">
<strong>Real-time event notification and messaging system for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Notify** provides real-time event notification and messaging capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Real-time event streaming and notifications
- Multiple notification targets (HTTP, Kafka, Redis, Email)
- Event filtering and routing based on criteria
- Message queuing with guaranteed delivery
- Event replay and auditing capabilities
- High-throughput messaging with batching support
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.


@@ -19,6 +19,11 @@ license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
 version.workspace = true
+homepage.workspace = true
+description = "Observability and monitoring tools for RustFS, providing metrics, logging, and tracing capabilities."
+keywords = ["observability", "metrics", "logging", "tracing", "RustFS"]
+categories = ["web-programming", "development-tools::profiling", "asynchronous", "api-bindings", "development-tools::debugging"]
+documentation = "https://docs.rs/rustfs-obs/latest/rustfs_obs/"
 [lints]
 workspace = true

crates/obs/README.md (new file)

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Obs - Observability & Monitoring
<p align="center">
<strong>Comprehensive observability and monitoring system for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Obs** provides comprehensive observability and monitoring capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- OpenTelemetry integration for distributed tracing
- Prometheus metrics collection and exposition
- Structured logging with configurable levels
- Performance profiling and analytics
- Real-time health checks and status monitoring
- Custom dashboards and alerting integration
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.


@@ -419,7 +419,7 @@ fn format_with_color(w: &mut dyn std::io::Write, now: &mut DeferredNow, record:
     writeln!(
         w,
-        "{} {} [{}] [{}:{}] [{}:{}] {}",
+        "[{}] {} [{}] [{}:{}] [{}:{}] {}",
         now.now().format("%Y-%m-%d %H:%M:%S%.6f"),
         level_style.paint(level.to_string()),
         Color::Magenta.paint(record.target()),
@@ -443,7 +443,7 @@ fn format_for_file(w: &mut dyn std::io::Write, now: &mut DeferredNow, record: &R
     writeln!(
         w,
-        "{} {} [{}] [{}:{}] [{}:{}] {}",
+        "[{}] {} [{}] [{}:{}] [{}:{}] {}",
         now.now().format("%Y-%m-%d %H:%M:%S%.6f"),
         level,
         record.target(),
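Both hunks make the same small change: the timestamp placeholder gains surrounding brackets, so it reads like the other bracketed fields in the line. A minimal std-only sketch of the resulting layout (the helper and its argument names are illustrative, not the crate's actual logging API, which formats `flexi_logger`'s `DeferredNow` and `log::Record`):

```rust
// Illustrative reproduction of the new format string. The field names
// (module/col) are assumptions for the sketch, not the real record fields.
fn format_log_line(
    ts: &str,
    level: &str,
    target: &str,
    file: &str,
    line: u32,
    module: &str,
    col: u32,
    msg: &str,
) -> String {
    // New layout: the timestamp is bracketed like every other field.
    format!("[{ts}] {level} [{target}] [{file}:{line}] [{module}:{col}] {msg}")
}

fn main() {
    println!(
        "{}",
        format_log_line(
            "2025-07-09 16:05:22.000000",
            "INFO",
            "rustfs::obs",
            "main.rs",
            42,
            "rustfs",
            7,
            "server started",
        )
    );
}
```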


@@ -19,6 +19,11 @@ license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
 version.workspace = true
+homepage.workspace = true
+description = "Policy management for RustFS, providing a framework for defining and enforcing policies across the system."
+keywords = ["policy", "management", "enforcement", "rustfs", "Minio"]
+categories = ["web-programming", "development-tools", "accessibility"]
+documentation = "https://docs.rs/rustfs-policy/latest/rustfs_policy/"
 [lints]
 workspace = true

crates/policy/README.md (new file)

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Policy - Policy Engine
<p align="center">
<strong>Advanced policy engine and access control system for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Policy** provides an advanced policy engine and access control capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- AWS-compatible bucket policy engine
- Fine-grained resource-based access control
- Condition-based policy evaluation
- Policy validation and syntax checking
- Role-based access control integration
- Dynamic policy evaluation with context
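An AWS-compatible engine evaluates documents written in the standard S3 bucket-policy grammar. A representative (hypothetical) policy granting read-only access from a single CIDR range — the bucket name and address range are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": ["*"] },
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::example-bucket/*"],
      "Condition": {
        "IpAddress": { "aws:SourceIp": "203.0.113.0/24" }
      }
    }
  ]
}
```

The `Condition` block is what the "condition-based policy evaluation" bullet refers to: the statement only matches when the request context satisfies every listed condition.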
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.


@@ -16,6 +16,14 @@
 name = "rustfs-protos"
 version.workspace = true
 edition.workspace = true
 license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
+homepage.workspace = true
+description = "Protocol definitions for RustFS, providing gRPC and FlatBuffers interfaces for communication between components."
+keywords = ["protocols", "gRPC", "FlatBuffers", "rustfs", "Minio"]
+categories = ["web-programming", "development-tools", "data-structures", "asynchronous"]
+documentation = "https://docs.rs/rustfs-protos/latest/rustfs_protos/"
 [lints]
 workspace = true

crates/protos/README.md (new file)

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Protos - Protocol Buffer Definitions
<p align="center">
<strong>Protocol buffer definitions and gRPC services for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Protos** provides protocol buffer definitions and gRPC services for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Comprehensive gRPC service definitions
- Cross-language compatibility with Protocol Buffers
- Efficient binary serialization for network communication
- Versioned API schemas with backward compatibility
- Type-safe message definitions
- Code generation for multiple programming languages
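The gRPC bullet can be made concrete with a small service definition. This one is purely hypothetical — the crate's real schemas live in the repository — but it shows the shape of a typed, code-generated interface:

```protobuf
syntax = "proto3";

package rustfs.example;

// Hypothetical health-check service, for illustration only.
message PingRequest {
  string node_id = 1;
}

message PingResponse {
  bool healthy = 1;
  uint64 uptime_secs = 2;
}

service NodeService {
  rpc Ping(PingRequest) returns (PingResponse);
}
```

Because field numbers (not names) are what goes on the wire, new fields can be appended without breaking older clients — the basis of the "versioned API schemas with backward compatibility" bullet.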
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.


@@ -19,6 +19,11 @@ license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
 version.workspace = true
+homepage.workspace = true
+description = "Rio is a RustFS component that provides a high-performance, asynchronous I/O framework for building scalable and efficient applications."
+keywords = ["asynchronous", "IO", "framework", "rustfs", "Minio"]
+categories = ["web-programming", "development-tools", "asynchronous"]
+documentation = "https://docs.rs/rustfs-rio/latest/rustfs_rio/"
 [lints]
 workspace = true
@@ -40,5 +45,5 @@ serde_json.workspace = true
 md-5 = { workspace = true }
 [dev-dependencies]
-criterion = { version = "0.5.1", features = ["async", "async_tokio", "tokio"] }
+#criterion = { version = "0.5.1", features = ["async", "async_tokio", "tokio"] }
 tokio-test = "0.4"

crates/rio/README.md (new file)

@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Rio - High-Performance I/O
<p align="center">
<strong>High-performance asynchronous I/O operations for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS Rio** provides high-performance asynchronous I/O operations for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Zero-copy streaming I/O operations
- Hardware-accelerated encryption/decryption
- Multi-algorithm compression support
- Efficient buffer management and pooling
- Vectored I/O for improved throughput
- Real-time data integrity verification
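Of the bullets above, vectored I/O is easy to illustrate with the standard library alone: rather than copying several buffers into one, the writer hands the sink a list of `IoSlice`s in a single call. A sketch under that assumption — this is not Rio's actual API:

```rust
use std::io::{IoSlice, Result, Write};

// Submit several buffers in one vectored write instead of concatenating
// them first. Like `write`, `write_vectored` may accept fewer bytes than
// offered; real callers loop until everything is flushed.
fn write_parts(sink: &mut impl Write, parts: &[&[u8]]) -> Result<usize> {
    let slices: Vec<IoSlice<'_>> = parts.iter().map(|p| IoSlice::new(p)).collect();
    sink.write_vectored(&slices)
}

fn main() -> Result<()> {
    // `Vec<u8>` implements `Write` and appends every slice in one call.
    let mut sink: Vec<u8> = Vec::new();
    write_parts(&mut sink, &[b"header|".as_slice(), b"body|", b"trailer"])?;
    println!("{}", String::from_utf8_lossy(&sink)); // header|body|trailer
    Ok(())
}
```

On POSIX targets a vectored write maps to a single `writev`-style operation, which is where the throughput win over multiple small writes comes from.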
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.


@@ -19,6 +19,11 @@ edition.workspace = true
 license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
+homepage.workspace = true
+description = "S3 Select API implementation for RustFS, enabling efficient data retrieval from S3-compatible object stores."
+keywords = ["s3-select", "api", "rustfs", "Minio", "object-store"]
+categories = ["web-programming", "development-tools", "asynchronous"]
+documentation = "https://docs.rs/rustfs-s3select-api/latest/rustfs_s3select_api/"
 [dependencies]
 async-trait.workspace = true


@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS S3Select API - SQL Query Interface
<p align="center">
<strong>AWS S3 Select compatible SQL query API for RustFS distributed object storage</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS S3Select API** provides AWS S3 Select compatible SQL query capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Standard SQL query support (SELECT, WHERE, GROUP BY, ORDER BY)
- Multiple data format support (CSV, JSON, Parquet, Arrow)
- Streaming processing for large files
- AWS S3 Select API compatibility
- Parallel query execution
- Predicate pushdown optimization
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.


@@ -19,6 +19,11 @@ edition.workspace = true
 license.workspace = true
 repository.workspace = true
 rust-version.workspace = true
+homepage.workspace = true
+description = "S3 Select query engine for RustFS, enabling efficient data retrieval from S3-compatible storage using SQL-like queries."
+keywords = ["s3-select", "query-engine", "rustfs", "Minio", "data-retrieval"]
+categories = ["web-programming", "development-tools", "data-structures"]
+documentation = "https://docs.rs/rustfs-s3select-query/latest/rustfs_s3select_query/"
 [dependencies]
 rustfs-s3select-api = { workspace = true }


@@ -0,0 +1,37 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS S3Select Query - SQL Query Engine
<p align="center">
<strong>Apache DataFusion-powered SQL query engine for RustFS S3 Select implementation</strong>
</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://docs.rustfs.com/en/">📖 Documentation</a>
· <a href="https://github.com/rustfs/rustfs/issues">🐛 Bug Reports</a>
· <a href="https://github.com/rustfs/rustfs/discussions">💬 Discussions</a>
</p>
---
## 📖 Overview
**RustFS S3Select Query** provides an Apache DataFusion-powered SQL query engine for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
- Apache DataFusion integration for high-performance queries
- Vectorized processing with SIMD acceleration
- Parallel query execution across multiple threads
- Cost-based query optimization
- Support for complex SQL operations (joins, subqueries, window functions)
- Multiple data format support (Parquet, CSV, JSON, Arrow)
## 📚 Documentation
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

Some files were not shown because too many files have changed in this diff.