Compare commits


114 Commits

Author SHA1 Message Date
overtrue
2501d7d241 fix: remove branch restriction from Docker workflow_run trigger
The Docker workflow was not triggering for tag-based releases because it had a
'branches: [main]' restriction in the workflow_run configuration. When pushing
tags, the triggering workflow runs on the tag, not on the main branch.

Changes:
- Remove 'branches: [main]' from workflow_run trigger
- Simplify tag detection using github.event.workflow_run context instead of API calls
- Use official workflow_run event properties (head_branch, event) for reliable detection
- Support both 'refs/tags/VERSION' and direct 'VERSION' formats
- Add better logging for debugging workflow trigger issues

This fixes the issue where Docker images were not built for tagged releases.
2025-07-17 08:13:34 +08:00
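A minimal sketch of the corrected trigger and tag detection; the workflow name "Build" is an assumption, the event properties are the ones named above:

```yaml
on:
  workflow_run:
    workflows: ["Build"]
    types: [completed]
    # no `branches: [main]` filter, so runs triggered by tag pushes also match

jobs:
  check-tag:
    runs-on: ubuntu-latest
    steps:
      - name: Detect tag from the triggering run
        run: |
          EVENT="${{ github.event.workflow_run.event }}"
          HEAD_BRANCH="${{ github.event.workflow_run.head_branch }}"
          TAG="${HEAD_BRANCH#refs/tags/}"   # accepts both refs/tags/VERSION and VERSION
          echo "event=$EVENT tag-candidate=$TAG"
```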
overtrue
55b84262b5 fix: use GitHub API for reliable tag detection in Docker workflow
- Replace git commands with GitHub API calls for tag detection
- Add proper commit checkout for workflow_run events
- Use gh CLI and curl fallback for better reliability
- Add debug output to help troubleshoot tag detection issues

This should fix the issue where Docker builds were not triggered for tagged releases
due to missing tag information in the workflow_run environment.
2025-07-17 08:01:33 +08:00
overtrue
ce4252eb1a fix: correct Docker workflow trigger logic for tag-based releases
BREAKING CHANGE: Fixed Docker workflow that was incorrectly skipping builds for tagged releases
- Fix logic to detect tag pushes using git refs instead of branch names
- Properly identify tag pushes vs branch pushes using git show-ref
- Support both v-prefixed and bare version formats
- Ensure Docker images are built for all tagged releases including prereleases
2025-07-17 07:46:54 +08:00
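A sketch of the git-refs detection this commit describes; `REF_NAME` is a hypothetical variable holding the pushed ref's short name:

```bash
# tags must be present locally, e.g. after `git fetch --tags`
if git show-ref --verify --quiet "refs/tags/${REF_NAME}"; then
  echo "tag push: ${REF_NAME}"      # works for v1.0.0 and bare 1.0.0 alike
else
  echo "branch push: ${REF_NAME}"
fi
```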
overtrue
db708917b4 docs: update .docker/README.md to reflect simplified Makefile commands
- Add new Makefile Commands section with simplified docker-dev* commands
- Update Development Workflow to use new dev-env-* commands
- Update directory structure (remove deleted alpine/ directory)
- Reorganize build instructions to prioritize Makefile over direct scripts
- Add Common Development Tasks section with make help commands
2025-07-17 07:30:13 +08:00
overtrue
8ddb45627d refactor: simplify Docker build commands and fix version matching
- Remove obsolete .docker/alpine/Dockerfile.protoc (superseded by Dockerfile.source)
- Simplify Makefile commands by removing backward compatibility aliases
  * Replace docker-buildx-source* with shorter docker-dev* commands
  * Replace start/stop with explicit dev-env-start/dev-env-stop commands
- Fix Docker workflow version matching logic to correctly distinguish:
  * 1.0.0 vs 1.0.0-alpha.11 (prerelease detection)
  * Support both v1.0.0 and 1.0.0 formats (with/without v prefix)
  * Reorder case patterns to match prereleases before releases

BREAKING CHANGE: Removed legacy command aliases
- Use 'make docker-dev-local' instead of 'make docker-buildx-source-local'
- Use 'make dev-env-start' instead of 'make start'
2025-07-17 07:29:00 +08:00
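The version-matching fix hinges on case-arm order; a sketch under assumed variable names:

```bash
case "${VERSION#v}" in     # tolerate both v1.0.0 and 1.0.0
  *-*) KIND=prerelease ;;  # 1.0.0-alpha.11, 1.0.0-rc.1, ...
  *)   KIND=release ;;     # plain 1.0.0 -- must come after the prerelease arm
esac
echo "$KIND"
```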
overtrue
550c225b79 wip 2025-07-17 07:07:02 +08:00
overtrue
0d46b550a8 refactor: merge release workflow into build workflow and clean up
- Merge release logic into build.yml to avoid cross-workflow artifact access issues
- Add release jobs (create-release, upload-release-assets, update-latest-version, publish-release) that run only for tag pushes
- Use standard actions/download-artifact@v4 within the same workflow (no cross-workflow limitations)
- Deprecate standalone release.yml workflow with warning job and confirmation requirement
- Remove references to deleted release-notes-template.md file from both workflows
- Update build summary messages to reflect integrated release process

This resolves the 'Prepare release assets' failure by eliminating the need for cross-workflow artifact access.
2025-07-17 07:06:51 +08:00
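A sketch of the integrated layout: release jobs gated on tag refs and consuming artifacts from the same run (`create-release` is named above; the `build` job name is an assumption):

```yaml
create-release:
  if: startsWith(github.ref, 'refs/tags/')
  needs: [build]
  runs-on: ubuntu-latest
  steps:
    # same-workflow artifacts, so no cross-workflow access is needed
    - uses: actions/download-artifact@v4
      with:
        path: artifacts
```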
overtrue
0693cca1a4 fix: resolve workflow_run artifact access issue in release pipeline
- Replace actions/download-artifact@v4 with GitHub API calls to access artifacts from triggering workflow
- Add proper permissions (contents: read, actions: read) to prepare-assets job
- Handle both workflow_run and workflow_dispatch trigger scenarios
- Fix the root cause: workflow_run events cannot access artifacts from triggering workflows using standard download-artifact action

Fixes the 'Prepare release assets' step failure by implementing cross-workflow artifact access through GitHub API.
2025-07-17 06:58:09 +08:00
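A sketch of the API-based artifact access, which the standard download action cannot perform for workflow_run events (the jq filter is illustrative):

```yaml
prepare-assets:
  runs-on: ubuntu-latest
  permissions:
    contents: read
    actions: read
  steps:
    - name: List artifacts of the triggering run
      env:
        GH_TOKEN: ${{ github.token }}
      run: |
        gh api "repos/${{ github.repository }}/actions/runs/${{ github.event.workflow_run.id }}/artifacts" \
          --jq '.artifacts[] | "\(.name) \(.archive_download_url)"'
```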
安正超
0d9f9e381a refactor: use workflow_run trigger for release workflow to eliminate timing issues (#241)
* fix: use correct tag reference in release workflow wait-for-artifacts step

- Change ref from github.ref to needs.release-check.outputs.tag
- Fix issue where wait-on-check-action receives full git reference (refs/tags/1.0.0-alpha.21)
  instead of clean tag name (1.0.0-alpha.21)
- This resolves timeout errors when waiting for build artifacts during release process

Fixes the release workflow failure for tag 1.0.0-alpha.21

* refactor: use workflow_run trigger for release workflow instead of push

- Replace push trigger with workflow_run to eliminate timing issues
- Release workflow now triggers only after Build workflow completes successfully
- Remove wait-for-artifacts step completely (no longer needed)
- Add should_release condition to control release execution
- Support both tag pushes and manual releases via workflow_dispatch
- Align with docker.yml pattern for better reliability

This completely resolves the release workflow timeout issues by ensuring
build artifacts are always available before the release process starts.

Fixes the fundamental timing issue where release.yml and build.yml
were racing against each other when triggered by the same tag push.
2025-07-17 06:48:09 +08:00
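A sketch of the gating this refactor introduces (workflow name assumed):

```yaml
on:
  workflow_run:
    workflows: ["Build"]
    types: [completed]

jobs:
  release-check:
    # run only after the Build workflow completed successfully
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
```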
安正超
6c7aa5a7ae fix: use correct tag reference in release workflow wait-for-artifacts step (#240)
- Change ref from github.ref to needs.release-check.outputs.tag
- Fix issue where wait-on-check-action receives full git reference (refs/tags/1.0.0-alpha.21)
  instead of clean tag name (1.0.0-alpha.21)
- This resolves timeout errors when waiting for build artifacts during release process

Fixes the release workflow failure for tag 1.0.0-alpha.21
2025-07-17 06:36:57 +08:00
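A sketch of the one-line fix, assuming a wait-on-check style action such as lewagon/wait-on-check-action (the version pin and the other inputs are assumptions):

```yaml
- uses: lewagon/wait-on-check-action@v1.3.1
  with:
    ref: ${{ needs.release-check.outputs.tag }}   # clean tag name, not ${{ github.ref }}
    check-name: build
    repo-token: ${{ secrets.GITHUB_TOKEN }}
```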
overtrue
a27d935925 wip 2025-07-17 06:31:25 +08:00
安正超
b4f87a4fee feat: disable Docker builds for development versions (#239)
* feat: disable Docker builds for development versions

- Remove dev-latest, main-latest, and dev-* version options from manual triggers
- Skip Docker builds for development versions in workflow_run events
- Only build Docker images for releases (v1.0.0) and prereleases (v1.0.0-alpha1)
- Simplify tags generation logic by removing development branch handling
- Update workflow documentation to reflect release-only Docker strategy

BREAKING CHANGE: Development Docker images are no longer built automatically

* feat: remove dev channel support from Dockerfile

- Remove CHANNEL build argument (no longer needed)
- Simplify download logic to only support release channel
- Remove dev-specific package download paths
- Update BASE_URL to point directly to release directory
- Remove channel label from Docker image metadata
- Streamline version handling (latest vs specific release)

This aligns with the workflow changes that disabled dev Docker builds.
2025-07-17 06:06:40 +08:00
安正超
ee5f94a2e2 fix: use consistent short SHA generation across workflows (#238)
- Replace manual cut -c1-7 with git rev-parse --short in docker.yml
- Ensures consistent short SHA length between build.yml and docker.yml
- Git automatically adjusts length for uniqueness, preventing conflicts
2025-07-17 05:48:30 +08:00
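The consistent form, per the commit:

```bash
SHORT_SHA=$(git rev-parse --short HEAD)   # git picks a collision-free length
# instead of: SHORT_SHA=$(git rev-parse HEAD | cut -c1-7)
```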
安正超
9c3cf554d3 fix: correct Docker build logic for dev version downloads (#237) 2025-07-17 05:36:15 +08:00
安正超
addbfa5487 fix: resolve Docker workflow manual build parameter issues (#236)
- Remove unsupported 'scopes' parameter from docker/login-action@v3
  * Fixes 'Unexpected input(s) scopes' error during Docker Hub login

- Add version format conversion for Dockerfile compatibility
  * main-latest/dev-latest → RELEASE=latest + CHANNEL=dev
  * latest → RELEASE=latest + CHANNEL=release
  * dev-* → RELEASE=dev-* + CHANNEL=dev
  * v* → RELEASE={version without v} + CHANNEL=release

- Fix Docker build parameter passing
  * Use converted docker_release and docker_channel values
  * Ensures correct binary download URLs in Dockerfile

Resolves manual Docker build failures reported in:
https://github.com/rustfs/rustfs/actions/runs/16330398463/job/46131302262
2025-07-17 05:21:06 +08:00
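The conversion table above as a shell sketch (the docker build invocation is illustrative):

```bash
case "$VERSION" in
  main-latest|dev-latest) RELEASE=latest;         CHANNEL=dev ;;
  latest)                 RELEASE=latest;         CHANNEL=release ;;
  dev-*)                  RELEASE="$VERSION";     CHANNEL=dev ;;
  v*)                     RELEASE="${VERSION#v}"; CHANNEL=release ;;
esac
docker build --build-arg RELEASE="$RELEASE" --build-arg CHANNEL="$CHANNEL" .
```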
安正超
5eb461d7b7 refactor: remove redundant linux_builds_success logic in docker workflow (#235)
- Remove linux_builds_success output and related variables
- Simplify build-docker condition to only check should_build
- The should_build check already includes workflow success verification
- Reduce code complexity while maintaining the same functionality
2025-07-17 05:09:41 +08:00
安正超
1ea45afcd7 feat: Implement precise Docker build triggering using workflow_run event (#233)
* fix: correct YAML indentation error in docker workflow

- Fix incorrect indentation at line 237 in .github/workflows/docker.yml
- Step 'Extract metadata and generate tags' had 12 spaces instead of 6
- This was causing YAML syntax validation to fail

* fix: restore unified build-rustfs task with correct YAML syntax

- Revert complex job separation back to single build-rustfs task
- Maintain Linux and macOS builds in unified matrix
- Fix YAML indentation and syntax issues
- Docker builds will use only Linux binaries as designed in Dockerfile

* feat: implement precise Docker build triggering using workflow_run

- Use workflow_run event to trigger Docker builds independently
- Add precise Linux build status checking via GitHub API
- Only trigger Docker builds when both Linux architectures succeed
- Remove coupling between build.yml and docker.yml workflows
- Improve TARGETPLATFORM consistency in Dockerfile

This resolves the issue where Docker builds would trigger even if
Linux ARM64 builds failed, causing missing binary artifacts during
multi-architecture Docker image creation.
2025-07-17 04:51:08 +08:00
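A sketch of the precise status check (the job-name filter and variable names are assumptions):

```bash
# list job conclusions of the triggering run; Docker builds proceed only if
# every Linux job reports "success"
gh api "repos/${REPO}/actions/runs/${RUN_ID}/jobs" --paginate \
  --jq '.jobs[] | select(.name | test("linux"; "i")) | "\(.name): \(.conclusion)"'
```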
安正超
dbd86f6aee fix: correct YAML indentation error in docker workflow (#232)
- Fix incorrect indentation at line 237 in .github/workflows/docker.yml
- Step 'Extract metadata and generate tags' had 12 spaces instead of 6
- This was causing YAML syntax validation to fail
2025-07-17 04:28:31 +08:00
overtrue
af693f7b3f refactor: restructure Docker build pipeline to depend on binary builds
- Change docker.yml to use workflow_call triggered by build.yml
- Remove redundant force_build parameter from build.yml
- Simplify build_docker parameter (build implies push in CI/CD)
- Add proper dependency chain: build.yml -> docker.yml -> registry
- Update documentation to reflect new architecture
- Mark Dockerfile.source as local development only
2025-07-17 04:19:20 +08:00
安正超
3be5ee6445 fix: simplify Dockerfile.source and resolve build issues (#231)
- Remove complex dependency caching to fix workspace structure issues
- Remove sccache to eliminate rustc wrapper errors
- Ensure target installation in build step for cross-compilation
- Add debug output and error handling for unsupported platforms
- Use simple COPY . . approach for more reliable builds
2025-07-16 23:53:28 +08:00
overtrue
0acc8fe26a fix: docker build from source 2025-07-16 23:46:30 +08:00
overtrue
ecf40eb86c fix: docker build from source 2025-07-16 23:43:34 +08:00
overtrue
48ce7055f8 fix: remove dockerhub username 2025-07-16 22:35:14 +08:00
weisd
749f55d688 feat: enhance version function with automatic version increment (#227) 2025-07-16 18:09:43 +08:00
loverustfs
e5d17f5382 Disable Dockerfile.source 2025-07-16 18:03:09 +08:00
weisd
982cc66c74 fix: Refactor session policy handling and fix owner permission check (#226) 2025-07-16 16:40:51 +08:00
loverustfs
74bf4909c8 Modify docker source file 2025-07-15 23:17:39 +08:00
loverustfs
9c956b4445 Disable other docker mode 2025-07-15 22:10:00 +08:00
weisd
4c1fc9317e fix: content-range (#216) 2025-07-15 17:23:33 +08:00
weisd
a9d77a618f feat: implement list_parts API for S3 multipart upload compatibility (#209)
* feat: add list_parts api
2025-07-15 16:04:03 +08:00
overtrue
38cdc87e93 fix 2025-07-15 02:36:07 +08:00
安正超
f5ff93b65e fix: restore working build configuration by removing cargo.config.toml (#206)
- Remove cargo.config.toml file that was causing build issues
- Restore .github/workflows/build.yml to working state from commit 2e9792577f
- These changes ensure the build system works correctly again
2025-07-15 02:24:13 +08:00
安正超
6ef6f188e5 fix: Restore working build configuration from 4fb4b353 (#204)
* fix: Resolve zstd-sys Zig compilation issues

- Remove specific Zig version constraint in action.yml to use default version
- Clean up duplicate environment variable settings in build-rustfs.sh
- Add CARGO_TARGET_*_LINKER environment variables for better cross-compilation support
- Optimize build configuration for consistent cross-platform compilation

Fixes compilation issues with zstd-sys when using Zig cross-compilation.
Aligns with previously working configuration that uses default Zig version.

* fix: Restore working build configuration from 4fb4b353

- Restore matrix.cross parameter to differentiate cross-compilation
- Use simple cargo zigbuild instead of complex build-rustfs.sh script
- Remove unnecessary zstd dependencies from action.yml
- Restore console asset download step
- Use correct target directory path for packaging
- Align with known working configuration from commit 4fb4b353

This reverts to the proven working build approach that successfully
performed cross-platform compilation.

* fix: Align build-rustfs.sh with working version logic

- Simplify build logic to match working version 4fb4b353
- Use exact same build commands as the working build.yml:
  * cargo build for native compilation
  * cargo zigbuild for Linux ARM64 cross-compilation
  * cross build for Windows ARM64 cross-compilation
- Remove complex environment variable setup that caused conflicts
- Add touch rustfs/build.rs to match working version
- Use -p rustfs --bins flag consistent with working version

This ensures build-rustfs.sh (if used) follows the proven working approach.
2025-07-14 20:22:29 +08:00
安正超
ccad91a4a9 fix: resolve zstd-sys compilation issues with zig cross-compilation (#203)
- Update to mlugg/setup-zig@v2 for better stability and features
- Use Zig 0.13.0 for improved musl target support
- Add system zstd libraries (libzstd-dev, zstd) to Ubuntu dependencies
- Configure environment variables for zstd-sys to use pkg-config
- Enable pkg-config feature for zstd dependency to prefer system library
- Add proper C/C++ compiler configuration for musl targets

Fixes the 'error: unable to parse target query x86_64-unknown-linux-musl: UnknownOperatingSystem'
compilation error in zstd-sys during cross-compilation.
2025-07-14 20:01:52 +08:00
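A sketch of the environment setting described above; `ZSTD_SYS_USE_PKG_CONFIG` is read by the zstd-sys build script and pairs with the `pkg-config` feature on the zstd dependency:

```yaml
env:
  ZSTD_SYS_USE_PKG_CONFIG: "1"   # prefer the system libzstd found via pkg-config
```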
安正超
63b79ae151 fix: add cross-platform SHA256 checksum generation (#202)
- Add generate_sha256() function to handle cross-platform SHA256 generation
- Use shasum -a 256 on macOS instead of sha256sum
- Use sha256sum on Linux with shasum as fallback
- Replace direct sha256sum usage in build script with new function
- Fixes 'sha256sum: command not found' error on macOS builds
2025-07-14 19:45:42 +08:00
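A sketch of the helper as described:

```bash
generate_sha256() {
  local file="$1"
  if command -v sha256sum >/dev/null 2>&1; then
    sha256sum "$file"        # Linux
  elif command -v shasum >/dev/null 2>&1; then
    shasum -a 256 "$file"    # macOS
  else
    echo "error: no SHA256 tool available" >&2
    return 1
  fi
}
```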
安正超
9284f64e2a fix: resolve aarch64-unknown-linux-musl build issue with cargo-zigbuild integration (#201)
- Enable install-cross-tools in GitHub Actions build workflow
- Add cargo-zigbuild support for Linux targets in build-rustfs.sh
- Prioritize cargo-zigbuild over cross tool for better glibc compatibility
- Add musl-specific environment variables for proper static linking
- Update error messages with Linux-specific build suggestions
- Configure Zig compiler environment for musl targets
2025-07-14 19:31:30 +08:00
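A sketch of the cargo-zigbuild path; the target and flags mirror ones mentioned elsewhere in this log:

```bash
cargo install cargo-zigbuild
rustup target add aarch64-unknown-linux-musl
# static musl build with zig acting as the cross linker
cargo zigbuild --release --target aarch64-unknown-linux-musl -p rustfs --bins
```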
安正超
b9bbae27de fix: resolve macOS build issue by disabling cross tool for apple-darwin targets (#200) 2025-07-14 19:25:01 +08:00
安正超
36e3efb5a5 feat: implement Docker improvements and binary build scripts (#191)
* feat: implement Docker improvements and binary build scripts

This commit transforms the RustFS Docker build system to follow MinIO's best practices:

## 🏗️ Binary Build Script (build-rustfs.sh)
- Create independent binary compilation script for multi-platform builds
- Support x86_64 and aarch64 Linux musl targets
- Include checksum generation and optional binary signing
- Support cross-compilation and upload functionality
- Automated target installation and environment setup

## 🐳 Docker Improvements
- Rewrite Dockerfiles to download precompiled binaries instead of building from source
- Follow MinIO's approach for security and binary verification
- Add comprehensive LABEL metadata (version, build-date, vcs-ref)
- Implement proper environment variable management
- Add signature verification with minisign (commented for future use)
- Include static curl download for minimal runtime dependencies

## 🚀 Enhanced Build Script (docker-buildx.sh)
- Inspired by MinIO's docker-buildx.sh for consistency and reliability
- Support multiple platforms with proper build arguments
- Auto-detect git versions and pass metadata to containers
- Improved error messages with helpful troubleshooting hints
- Cleanup and cache management between builds

## 🛠️ Supporting Scripts
- scripts/download-static-curl.sh: Download statically compiled curl
- scripts/setup-test-binaries.sh: Create test binaries for local development

## 📋 Key Benefits
- Faster Docker builds (download vs compile)
- Better security with signature verification
- Consistent with industry standards (MinIO approach)
- Proper multi-platform support
- Enhanced metadata and traceability
- Independent binary distribution capability

* feat: update Docker files to use Aliyun OSS for binary downloads

* feat: merge stash with OSS binary download improvements

- Remove old build_rustfs.sh script
- Keep Aliyun OSS download URLs for binary retrieval
- Maintain Docker build improvements from stash
- Resolve merge conflicts between stash and OSS updates

* feat: improve build-rustfs.sh with auto platform detection

- Auto-detect current platform using uname (like old build_rustfs.sh)
- Default to building for current platform only
- Add --all-platforms flag for cross-compilation to Linux musl targets
- Support macOS (darwin) and Linux platforms
- Auto-enable cross compilation when needed
- Provide better usage examples and platform detection info

This makes the script much more user-friendly by default while
maintaining flexibility for cross-compilation scenarios.

* refactor: simplify build-rustfs.sh for CI/CD pipeline usage

- Remove cross-compilation complexity (each CI runner builds natively)
- Focus on single platform builds per runner
- Remove --all-platforms and --cross options
- Simplify to match CI/CD workflow where:
  * Linux x86_64 runner builds Linux x86_64 binary
  * Linux ARM64 runner builds Linux ARM64 binary
  * macOS x86_64 runner builds macOS x86_64 binary
  * macOS ARM64 runner builds macOS ARM64 binary
- Keep signing and upload functionality for release CI
- Make the script's purpose and usage clearer

This aligns with the user's understanding that build scripts should
focus on native compilation for the current platform only.

* feat: update download server domain to dl.rustfs.com

- Update Dockerfile to use dl.rustfs.com/dev/ for development binaries
- Update Dockerfile.release to use dl.rustfs.com/release/ for release binaries
- Update docker-buildx.sh error messages with new URLs
- Update build-rustfs.sh upload target to dl.rustfs.com
- Update test scripts to reference new domain
- Clean up remaining git conflict markers

This centralizes all binary downloads through the official
dl.rustfs.com domain instead of direct OSS access.

* fix: correct dl.rustfs.com path structure to include /artifacts/rustfs/

- Update all download URLs to use correct path structure:
  * Dev: https://dl.rustfs.com/artifacts/rustfs/dev/
  * Release: https://dl.rustfs.com/artifacts/rustfs/release/
- Test confirmed both paths return HTTP 200 with application/zip content-type
- Update Dockerfile, Dockerfile.release, docker-buildx.sh, and build-rustfs.sh
- Update test scripts with correct base path

The dl.rustfs.com domain requires the /artifacts/rustfs/ prefix
to access the binary files correctly.

* feat: refactor Dockerfile to download binaries from GitHub Releases

- Changed binary download source from dl.rustfs.com to GitHub Releases
- Added support for latest release auto-detection via GitHub API
- Enhanced error handling with detailed messages and helpful links
- Added optional checksum verification using SHA256SUMS
- Improved architecture support for amd64 and arm64
- Removed unnecessary minisign installation
- Added jq dependency for JSON parsing

* feat: consolidate Docker build to use single Dockerfile

- Removed Dockerfile.release and use unified Dockerfile instead
- Updated docker-buildx.sh to use single Dockerfile with build args
- Both latest and release variants now use GitHub Releases
- Simplified build process and reduced maintenance overhead
- Updated error messages to point to GitHub releases

* chore: remove unused Dockerfile.obs

- Removed Dockerfile.obs as it's no longer needed
- Simplified Docker build configuration

* feat: unify Docker prebuild variants to use GitHub Releases

- Updated .docker/alpine/Dockerfile.prebuild to download from GitHub Releases
- Updated .docker/ubuntu/Dockerfile.prebuild to download from GitHub Releases
- All prebuild variants now consistently use GitHub Releases as binary source
- Added checksum verification for all prebuild variants
- Updated .docker/README.md to reflect unified GitHub Releases approach
- Improved error handling and user guidance in all prebuild Dockerfiles

* feat: major Docker structure simplification and consolidation

## 🎯 Simplified Docker Structure

Moved from complex multi-directory structure to clean root-level organization:

### Before:
- Dockerfile (production)
- .docker/alpine/Dockerfile.prebuild (duplicate)
- .docker/alpine/Dockerfile.source
- .docker/ubuntu/Dockerfile.prebuild (duplicate)
- .docker/ubuntu/Dockerfile.source
- .docker/ubuntu/Dockerfile.dev

### After:
- Dockerfile (production - Alpine + GitHub Releases)
- Dockerfile.source (source build - Ubuntu + cross-compilation)
- Dockerfile.dev (development - Ubuntu + full toolchain)

## 🔧 Key Changes

- **Eliminated Duplicates**: Removed redundant prebuild variants
- **Moved Core Files**: Dockerfile.{source,dev} now in root directory
- **Unified Configuration**: cargo.config.toml moved to root
- **Updated References**: Fixed all GitHub Actions and docker-compose paths
- **Simplified CI Matrix**: Reduced from 5 to 3 Docker variants

## 📦 Preserved Valuable Diversity

- **Production**: Alpine-based for minimal size
- **Source**: Ubuntu-based with cross-compilation support
- **Development**: Ubuntu-based with full development tools

## 🚀 Benefits

- Cleaner project structure
- Easier maintenance and navigation
- Reduced CI/CD complexity
- Faster build matrix execution
- Maintained functionality and flexibility

* chore: remove duplicate cargo.config.toml from .docker directory

The file is now in the root directory and no longer needed in .docker/

* fix: update all references to removed Dockerfile files

- Updated .docker/compose/README.md to reference Dockerfile.source instead of Dockerfile.obs
- Updated docker-compose.yml to use Dockerfile.source instead of Dockerfile.dev
- Updated scripts/build-docker-multiarch.sh to use Dockerfile.source for devenv builds
- Updated .github/workflows/docker.yml to use Dockerfile.source for dev builds
- Updated Makefile to use Dockerfile.source for init-devenv target
- Updated .docker/README.md to remove references to non-existent Dockerfile.dev
- Ensured all Docker configurations consistently use the unified Dockerfile structure

* chore: remove unnecessary console static assets download

- Remove obsolete download steps from build.yml and performance.yml
- Console static assets are already embedded via rust-embed in rustfs/static/
- The download from dl.rustfs.com is no longer needed as project contains complete console assets
- This improves build reliability and reduces external dependencies
- Replaced with verification steps that confirm embedded assets are present

* feat: update Makefile and README.md for new Docker build system

- Updated Makefile to use unified Docker build system:
  - Replace references to non-existent Dockerfile.ubuntu22.04 and Dockerfile.rockylinux9.3
  - Add new docker-buildx targets using docker-buildx.sh script
  - Deprecate old docker-build-multiarch targets with warnings
  - Add docker-build-production and docker-build-source targets
  - Update help-docker with new command structure

- Updated README.md with docker-buildx.sh usage:
  - Add comprehensive Docker build from source section
  - Document multi-architecture build capabilities
  - Include both script and Make target examples
  - Show registry flexibility and build optimization features
  - Update step numbers in quickstart guide

- Improve developer experience with clear documentation and updated tooling
- Maintain backward compatibility with deprecation warnings

* feat: integrate console assets download into build-rustfs.sh

- Added console download functionality to build-rustfs.sh:
  - New flags: --download-console, --force-console-update, --console-version
  - Intelligent detection of existing console assets
  - Retry logic with fallback error handling
  - Consistent with Docker build asset management

- Updated scripts to use unified build process:
  - scripts/static.sh: Now uses build-rustfs.sh for console downloads
  - scripts/run.sh: Uses build-rustfs.sh instead of direct curl
  - scripts/run.ps1: Updated with guidance for Windows users

- Benefits:
  - Unified asset management across all build processes
  - Consistent version handling and retry logic
  - Eliminates duplicate download logic
  - Better error handling and user feedback
  - Preparation for CI/CD integration

- Removed unused download-static-curl.sh script

This change centralizes console asset management and prepares for
streamlined CI/CD processes where build-rustfs.sh becomes the
single point of truth for binary and asset builds.

* fix: update PowerShell script to use unified console asset management

- Updated scripts/run.ps1 to use build-rustfs.sh for console asset downloads
- Added guidance for Windows users to use the unified build script
- Maintains consistency across all platform-specific scripts

* feat: add binary verification to build script

- Add verify_binary function to test built binaries
- Test --help and --version commands
- Verify binary structure with readelf/otool
- Add --skip-verification option for cross-compilation
- Include verification status in build output
- Automatic error handling if verification fails

* feat: add platform selection support to build script

- Add --platform parameter to build-rustfs.sh for target platform selection
- Implement cross-compilation support with automatic 'cross' tool detection
- Auto-enable --skip-verification for cross-compilation scenarios
- Update all Makefile build targets to use unified build-rustfs.sh script
- Add helpful error messages and suggestions for cross-compilation failures
- Update help documentation with platform selection examples
- Improve build consistency across different architectures

* feat: modernize CI/CD build process with build-rustfs.sh

- Replace manual cargo build commands with unified build-rustfs.sh script
- Simplify matrix configuration by removing cross-compilation flags
- Ensure consistency between local and CI/CD builds
- Automatic cross-compilation tool detection and installation
- Built-in binary verification for quality assurance
- Unified console asset management
- Better error handling and suggestions

Benefits:
- Consistent build process across all environments
- Automatic detection and handling of cross-compilation scenarios
- Built-in quality checks with binary verification
- Reduced CI/CD configuration complexity
- Better maintainability with single source of truth for build logic

* feat: optimize CI/CD workspace path management

- Add WORKSPACE_DIR environment variable to cache github.workspace
- Set default working-directory at job level for consistency
- Use explicit workspace paths in critical operations
- Improve reliability and maintainability of CI/CD paths
- Ensure consistent behavior across different GitHub Actions environments

Benefits:
- More explicit and reliable path handling
- Better maintainability with centralized workspace reference
- Reduced risk of path-related issues in CI/CD
- Consistent working directory across all job steps

* refactor: simplify CI/CD path management - remove redundant workspace references

- Remove unnecessary WORKSPACE_DIR environment variable
- Remove redundant defaults.run.working-directory setting
- Use relative paths since GITHUB_WORKSPACE is the default working directory
- Follow GitHub Actions best practices by leveraging default behavior

As per GitHub Actions documentation, GITHUB_WORKSPACE is already the default
working directory, so explicit specification is unnecessary in most cases.

* docs: update Docker README to reflect current project state

- Fix directory structure: remove non-existent nginx/ directory
- Correct base OS: Dockerfile.source uses Debian Bookworm, not Ubuntu 22.04
- Add docker-buildx.sh script documentation
- Update Docker tag examples to match actual CI/CD workflows
- Add CI/CD integration section explaining automated builds
- Document build variants and manual build options
- Reflect current project architecture and tooling

These updates ensure the documentation accurately represents the current
Docker build system and CI/CD workflows.

* fix: update Docker command in rustfs README

- Replace quay.io registry with Docker Hub (rustfs/rustfs:latest)
- Remove separate console port 9001, console now runs on main port 9000
- Add both Docker and Podman examples for user choice
- Fix console access URL to use unified port

This aligns with the recent console port consolidation changes
and the project's move to Docker Hub as the primary registry.

* wip

* fix: remove unnecessary entrypoint.sh and fix Docker paths

* Update Dockerfile

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* cleanup: remove unused DOCKERFILE_PATH variable from Makefile

* feat: update Docker build to use dl.rustfs.com for binary downloads

- Replace GitHub releases download with dl.rustfs.com
- Add CHANNEL parameter support (release/dev)
- Update docker-buildx.sh to support channel-specific builds
- Improve error messages with new download URLs
- Support both latest and specific version downloads
- Add channel validation in build script

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-14 19:15:46 +08:00
Nugine
04d1c8724d build: upgrade s3s (#193) 2025-07-14 15:19:01 +08:00
houseme
4fb4b353f8 improve code for cargo.toml 2025-07-13 23:13:08 +08:00
houseme
564a02f344 feat(obs, net): Add Tempo service and enable dual-stack listener (#192)
This commit introduces two key enhancements: the integration of Grafana Tempo for distributed tracing and the implementation of a dual-stack TCP listener for improved network compatibility.

- **Observability**:
  - Adds the `tempo` service to the `docker-compose.yml` observability stack.
  - Tempo is configured to collect and store traces, integrating with the existing OpenTelemetry setup.
  - A custom `tempo-entrypoint.sh` script is included to manage volume permissions on startup.

- **Networking**:
  - Modifies `http.rs` to support dual-stack (IPv4/IPv6) connections on a single socket.
  - By setting the `IPV6_V6ONLY` socket option to `false`, the server can now accept both IPv6 and IPv4-mapped IPv6 traffic, enhancing cross-platform support.
2025-07-13 20:22:46 +08:00
安正超
5b582a4234 feat: disable GitHub Packages uploads in Docker workflow (#189) 2025-07-12 19:07:56 +08:00
安正超
2e9792577f fix: correct SHA length matching in GitHub Actions workflow (#188)
* feat: enhance Docker build system with advanced version selection

## New Features
- Add force_rebuild parameter for Docker workflow manual triggers
- Improve version pattern matching with better regex validation
- Add comprehensive Docker Build Guide documentation
- Enhanced logging and error reporting for build process
- Support for prerelease version detection (alpha, beta, rc)

## Improvements
- Better version pattern validation for releases and dev builds
- More detailed build logs with context and warnings
- Clear documentation for all Docker image variants and use cases
- Updated README with Docker version examples and guide reference

## Documentation
- New comprehensive Docker Build Guide (docs/DOCKER_BUILD_GUIDE.md)
- Updated README with version-specific Docker examples
- Workflow dependency diagram and troubleshooting guide
- Complete reference for all supported version patterns

This enhancement provides a robust, well-documented Docker build system
that supports flexible version selection while maintaining deterministic
build behavior without fallback mechanisms.

* fix: simplify dev version regex pattern in docker workflow

* fix: simplify version number regex pattern in docker workflow

* feat: remove docs directory

* fix: correct SHA length matching in main-latest filename generation

* refactor: use bash string operations instead of sed for main-latest filename generation

* refactor: simplify filename generation by removing redundant intermediate variables

* feat: add dev-latest version generation for all development builds

* feat: add dev-latest support to Docker workflow
2025-07-12 18:46:37 +08:00
安正超
2066e0a03b fix: correct SHA length matching in main-latest filename generation (#187)
* feat: enhance Docker build system with advanced version selection

## New Features
- Add force_rebuild parameter for Docker workflow manual triggers
- Improve version pattern matching with better regex validation
- Add comprehensive Docker Build Guide documentation
- Enhanced logging and error reporting for build process
- Support for prerelease version detection (alpha, beta, rc)

## Improvements
- Better version pattern validation for releases and dev builds
- More detailed build logs with context and warnings
- Clear documentation for all Docker image variants and use cases
- Updated README with Docker version examples and guide reference

## Documentation
- New comprehensive Docker Build Guide (docs/DOCKER_BUILD_GUIDE.md)
- Updated README with version-specific Docker examples
- Workflow dependency diagram and troubleshooting guide
- Complete reference for all supported version patterns

This enhancement provides a robust, well-documented Docker build system
that supports flexible version selection while maintaining deterministic
build behavior without fallback mechanisms.

* fix: simplify dev version regex pattern in docker workflow

* fix: simplify version number regex pattern in docker workflow

* feat: remove docs directory

* fix: correct SHA length matching in main-latest filename generation
2025-07-12 11:43:17 +08:00
安正超
a4d49a500f feat: Enhanced Docker Build System with Advanced Version Selection (#186)
* feat: enhance Docker build system with advanced version selection

## New Features
- Add force_rebuild parameter for Docker workflow manual triggers
- Improve version pattern matching with better regex validation
- Add comprehensive Docker Build Guide documentation
- Enhanced logging and error reporting for build process
- Support for prerelease version detection (alpha, beta, rc)

## Improvements
- Better version pattern validation for releases and dev builds
- More detailed build logs with context and warnings
- Clear documentation for all Docker image variants and use cases
- Updated README with Docker version examples and guide reference

## Documentation
- New comprehensive Docker Build Guide (docs/DOCKER_BUILD_GUIDE.md)
- Updated README with version-specific Docker examples
- Workflow dependency diagram and troubleshooting guide
- Complete reference for all supported version patterns

This enhancement provides a robust, well-documented Docker build system
that supports flexible version selection while maintaining deterministic
build behavior without fallback mechanisms.

* fix: simplify dev version regex pattern in docker workflow

* fix: simplify version number regex pattern in docker workflow

* feat: remove docs directory
2025-07-12 11:31:00 +08:00
安正超
a8fbced928 feat: improve Docker build with version selection and remove fallback mechanism (#185)
- Add version input parameter to docker.yml workflow_dispatch
- Support main-latest, latest, dev-xxx, and specific version patterns
- Remove complex fallback mechanism from all Dockerfile variants
- Add clear error handling with helpful user guidance
- Create main-latest versions for development builds
- Ensure Docker builds require explicit VERSION parameter
- Update all Docker variants (production, alpine, ubuntu) consistently

This change solves the build dependency issue where Docker builds
could fail when expected binary artifacts don't exist, by providing
a clean version selection mechanism without unpredictable fallbacks.
2025-07-12 11:09:44 +08:00
安正超
99ca405279 feat: enhance cursor rules with strict main branch protection (#184) 2025-07-12 10:59:17 +08:00
overtrue
2e1d1018aa refactor: simplify version handling by removing unnecessary CLEAN_VERSION variable 2025-07-12 10:42:47 +08:00
overtrue
c57b4be1c7 refactor: use bash variable expansion for dev- prefix handling 2025-07-12 10:41:08 +08:00
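The expansion in question, sketched:

```bash
VERSION="dev-13e4a0b"
echo "${VERSION#dev-}"   # -> 13e4a0b, bash expansion instead of sed
```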
overtrue
238a016242 chore: ignore .secrets 2025-07-12 10:30:37 +08:00
shiro.lee
2c0c7fafa3 fix: optimize io::ErrorKind::NotFound error handling during Windows startup (#181) 2025-07-12 06:54:21 +08:00
overtrue
ee4962fe31 fix: docker build 2025-07-11 23:42:46 +08:00
安正超
55895d0a10 fix: resolve Docker Hub authentication issues in multi-platform builds (#180) 2025-07-11 23:36:37 +08:00
overtrue
676897d389 fix: docker 2025-07-11 23:29:47 +08:00
loverustfs
5205ff6695 Add hellogithub icon 2025-07-11 23:24:55 +08:00
overtrue
15cf3ce92b fix: docker 2025-07-11 23:22:48 +08:00
安正超
c0441b2412 fix: resolve GitHub Actions workflow validation errors in docker.yml (#179)
* fix: resolve GitHub Actions workflow validation errors in docker.yml

- Fix usage of secrets context in conditional expressions
- Add environment variables to build-docker and create-manifest jobs
- Replace 'secrets.DOCKERHUB_USERNAME' with 'env.DOCKERHUB_USERNAME' in if conditions
- Maintain secure handling of Docker Hub credentials through proper env context

* Update .github/workflows/docker.yml

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-11 23:12:58 +08:00
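A sketch of the pattern: the workflow validator rejects `secrets.*` in these conditionals, so the value is mirrored into env first (the token secret name is an assumption):

```yaml
env:
  DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
steps:
  - name: Login to Docker Hub
    if: env.DOCKERHUB_USERNAME != ''
    uses: docker/login-action@v3
    with:
      username: ${{ env.DOCKERHUB_USERNAME }}
      password: ${{ secrets.DOCKERHUB_TOKEN }}
```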
安正超
6267872ddb feat: add latest version support for release builds (#178)
- Add automatic creation of latest version files for release and prerelease builds
- Simplify installation script by providing direct latest URLs
- Support rustfs-linux-{arch}-latest.zip naming convention
- Improve build artifact management and user experience
2025-07-11 23:01:36 +08:00
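A sketch of the latest-alias step; the arch list and versioned filename pattern follow the conventions named in this log:

```bash
for arch in x86_64 aarch64; do
  cp "rustfs-linux-${arch}-v${VERSION}.zip" "rustfs-linux-${arch}-latest.zip"
done
```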
安正超
618779a89d feat: implement multi-channel release system with artifact naming (#176)
* feat: implement multi-channel release system with artifact naming

- Add dedicated release.yml workflow for handling GitHub releases
- Refactor build.yml to support dev/release/prerelease artifact naming
- Update docker.yml to support version-specific image tagging
- Implement artifact naming rules:
  - Dev: rustfs-{platform}-{arch}-dev-{sha}.zip
  - Release: rustfs-{platform}-{arch}-v{version}.zip
  - Prerelease: rustfs-{platform}-{arch}-v{version}.zip
- Add OSS upload directory separation (dev/ vs release/)
- Only stable releases update latest.json and create latest tags
- Separate GitHub Release creation from build workflow
- Add comprehensive build summaries and status reporting

This enables proper multi-channel distribution with clear artifact
identification and prevents confusion between dev and stable releases.

* fix: support version tags without v prefix (1.0.0 instead of v1.0.0)

- Update trigger patterns from 'v*.*.*' to '*.*.*' in all workflows
- Fix version extraction logic to handle tags without v prefix
- Maintain backward compatibility with existing logic

Note: Artifact naming still includes 'v' prefix for clarity
(e.g., tag '1.0.0' creates 'rustfs-linux-x86_64-v1.0.0.zip')

* feat: update Dockerfile to support multi-channel release system

- Add build arguments for VERSION, BUILD_TYPE, and TARGETARCH
- Support dynamic artifact download based on build type:
  - Development: downloads from artifacts/rustfs/dev/
  - Release: downloads from artifacts/rustfs/release/
- Auto-generate correct filenames based on new naming convention:
  - Dev: rustfs-linux-{arch}-dev-{sha}.zip
  - Release: rustfs-linux-{arch}-v{version}.zip
- Add architecture mapping for multi-platform builds
- Pass BUILD_TYPE parameter from docker.yml workflow
- Improve error handling with helpful download path suggestions

This ensures Docker images use the correct pre-built binaries
from the new multi-channel release system.

* feat: optimize and consolidate Dockerfile structure

## Major Improvements:

### Created Missing Files
- Add .docker/Dockerfile.alpine for lightweight Alpine-based builds
- Support both pre-built binary download and source compilation

### 🔧 Fixed Critical Issues
- Fix Dockerfile.obs: ubuntu:latest → ubuntu:22.04 (stable version)
- Add proper security practices (non-root user, health checks)
- Add proper error handling and environment variables

### 🗑️ Eliminated Redundancy
- Remove .docker/Dockerfile.ubuntu22.04 (duplicate of devenv)
- Update docker.yml workflow to use devenv for ubuntu variant
- Consolidate similar functionality into fewer, better files

### 🚀 Enhanced Functionality
- Make devenv Dockerfile dual-purpose (dev environment + runtime)
- Add VERSION/BUILD_TYPE support for dynamic binary downloads
- Improve security with proper user management
- Add comprehensive health checks and error handling

### 📊 Final Dockerfile Structure:
1. Dockerfile (production, Alpine-based, pre-built binaries)
2. Dockerfile.multi-stage (full source builds, Ubuntu-based)
3. Dockerfile.obs (observability builds, Ubuntu-based)
4. .docker/Dockerfile.alpine (lightweight Alpine variant)
5. .docker/Dockerfile.devenv (development + ubuntu variant)
6. .docker/Dockerfile.rockylinux9.3 (RockyLinux variant)

This reduces redundancy while maintaining all necessary build variants
and improving maintainability across the entire container ecosystem.

* refactor: streamline Dockerfile structure and remove unused files

## 🎯 Major Cleanup:

### 🗑️ Removed Unused Files (2 files)
- Delete Dockerfile.obs (not referenced anywhere)
- Delete .docker/Dockerfile.rockylinux9.3 (not referenced anywhere)

### 📁 Reorganized File Layout
- Move Dockerfile.multi-stage → .docker/Dockerfile.multi-stage
- Update docker-compose.yml to use new path
- Keep main Dockerfile in root (production use)
- Consolidate variants in .docker/ directory

### Final Clean Structure:

### 📊 Before vs After:
- **Before**: 7 files (1 missing, 2 unused, scattered layout)
- **After**: 4 files (all used, organized layout)
- **Reduction**: 43% fewer files, 100% utilization

This eliminates confusion and reduces maintenance overhead while
keeping all actually needed functionality intact.

* refactor: implement comprehensive Docker tag strategy with production variant

- Restore production variant as default with explicit naming
- Add support for prerelease channels (alpha, beta, rc)
- Implement rolling development tags (dev, dev-variant)
- Support semantic versioning with variant combinations
- Update documentation with complete tag strategy examples
- Align with GPT-suggested comprehensive tagging approach

Tag examples:
- rustfs/rustfs:1.2.3 (main production)
- rustfs/rustfs:1.2.3-production (explicit production)
- rustfs/rustfs:1.2.3-alpine (Alpine variant)
- rustfs/rustfs:alpha (latest alpha)
- rustfs/rustfs:dev (latest development)
- rustfs/rustfs:dev-13e4a0b (specific commit)

* perf: optimize Docker build speed with comprehensive caching and compilation improvements

- Add dual caching strategy: GitHub Actions + Registry cache
- Implement sccache for Rust compilation caching across builds
- Configure parallel compilation with all available CPU cores
- Add optimized cargo configuration for faster builds
- Enable sparse registry protocol for dependency resolution
- Configure LLD linker for faster linking
- Add BuildKit optimizations with inline cache
- Disable provenance/SBOM generation for faster builds
- Document build performance improvements and timings

Performance improvements:
- Source builds: ~40-50% faster with cache hits
- Pre-built binaries: ~30-40% faster
- Parallel matrix builds reduce total CI time significantly
- Registry cache provides persistent cross-run benefits

* refactor: consolidate Docker variants and eliminate duplication

- Replace root Dockerfile with enhanced Alpine prebuild version
- Remove redundant alpine variant from build matrix
- Root Dockerfile now includes:
  - Non-root user security
  - Health checks
  - Better error handling
  - protoc/flatc tool support
- Update documentation to reflect simplified 4-variant strategy
- Remove duplicate .docker/alpine/Dockerfile.prebuild

Build matrix now:
- production (root Dockerfile - Alpine prebuild)
- alpine-source (Alpine source build)
- ubuntu (Ubuntu prebuild)
- ubuntu-source (Ubuntu source build)

Benefits:
- Eliminates functional duplication
- Improves security with non-root execution
- Maintains same image variants with better quality
- Simplifies maintenance

* fix: restore alpine variant for better user choice

- Restore alpine variant (rustfs/rustfs:1.2.3-alpine)
- Re-add .docker/alpine/Dockerfile.prebuild
- Update build matrix to include 5 variants again:
  - production (default)
  - alpine (explicit Alpine choice)
  - alpine-source (Alpine source build)
  - ubuntu (Ubuntu pre-built)
  - ubuntu-source (Ubuntu source build)
- Update documentation to reflect restored alpine tags
- Fix build performance table to include all variants

User feedback: Alpine variant provides explicit choice even if
similar to production variant. Better UX with clear options.

* fix: remove redundant rustup target add commands in Alpine Dockerfiles

- Remove 'rustup target add x86_64-unknown-linux-musl' from Alpine source build
- Remove redundant target add from Alpine prebuild fallback path
- Remove redundant target add from root Dockerfile fallback path

Reason: rust:alpine base image already has x86_64-unknown-linux-musl
as the default target since Alpine uses musl libc by default.

Thanks to @houseme for spotting this redundancy in code review.

* fix: add missing RUSTFS_VOLUMES environment variable in Dockerfiles

- Add RUSTFS_VOLUMES=/data to all Dockerfile variants
- This fixes the issue where CMD ['/app/rustfs'] was used without providing the required volumes parameter
- The volumes parameter is required by the application and can be provided via command line or RUSTFS_VOLUMES environment variable

* fix: update docker-compose configurations to ensure all environments work correctly

- Added missing access key and secret key environment variables to docker-compose.yaml
- This ensures the distributed test environment has proper authentication credentials
- Complementary fix to the previous Dockerfile updates for consistent configuration

* fix: recreate missing Dockerfile.obs with complete content

- The file was accidentally left empty after initial creation
- Now contains proper Ubuntu-based configuration for observability environment
- Includes all necessary environment variables including RUSTFS_VOLUMES
- Supports docker-compose-obs.yaml configuration

* refactor: organize Docker Compose configurations and eliminate duplication

- Move specialized configurations to .docker/compose/ directory
- Rename docker-compose.yaml → docker-compose.cluster.yaml (distributed testing)
- Rename docker-compose-obs.yaml → docker-compose.observability.yaml (observability testing)
- Keep docker-compose.yml as the main production configuration
- Add comprehensive README explaining different configuration purposes
- Eliminates confusion between similar filenames
- Provides clear guidance on when to use each configuration

* fix: correct relative paths in moved Docker Compose configurations

- Fix binary volume mount paths in docker-compose.cluster.yaml (./target → ../../target)
- Fix Dockerfile.obs context path in docker-compose.observability.yaml (. → ../..)
- Fix observability config file paths (./.docker → ../../.docker)
- Update README.md with correct usage instructions for new locations
- All configurations now correctly reference files relative to their new positions

* refactor: move Dockerfile.obs to .docker/compose/ directory for better organization

- Move Dockerfile.obs from root to .docker/compose/ directory
- Update all dockerfile references in docker-compose.observability.yaml
- Keep related files (Dockerfile.obs + docker-compose.observability.yaml) together
- Clean up root directory by removing specialized-purpose Dockerfile
- Update README.md to document new file organization
- Improves project structure and file discoverability

* refactor: improve Docker build configuration for better clarity

- Move Dockerfile.obs back to project root for simpler build context
- Update docker-compose.observability.yaml to use cleaner dockerfile reference
- Change from '.docker/compose/Dockerfile.obs' to simply 'Dockerfile.obs'
- Maintain context as '../..' for access to project files
- Remove redundant Dockerfile.obs documentation from compose README
- This follows Docker best practices: simple context + Dockerfile at context root

* wip
2025-07-11 22:18:33 +08:00
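A sketch of the artifact naming rules from the entry above:

```bash
case "$BUILD_TYPE" in
  dev)                NAME="rustfs-${PLATFORM}-${ARCH}-dev-${SHA}.zip" ;;
  release|prerelease) NAME="rustfs-${PLATFORM}-${ARCH}-v${VERSION}.zip" ;;
esac
```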
houseme
b3ec2325ed improve docker compose config file and remove docs dir (#174)
* refactor(config): Unify S3 API and Console ports

This commit streamlines the server configuration by unifying the S3 API and the WebUI (Console) to serve on a single port.

Previously, the console was managed by separate configuration options (`RUSTFS_CONSOLE_ENABLE` and `RUSTFS_CONSOLE_ADDRESS`), requiring a distinct port. This added complexity to deployment and configuration.

With this change:
- The `RUSTFS_CONSOLE_ADDRESS` and `RUSTFS_CONSOLE_FS_ENDPOINT` environment variables are removed.
- The WebUI is now always available and served directly from the main application port defined by `RUSTFS_ADDRESS`.
- This simplifies setup, reduces the number of exposed ports, and makes the application easier to manage and deploy, especially in containerized environments.

Users should update their startup scripts and remove the deprecated `RUSTFS_CONSOLE_*` variables.

* improve docker compose config file and remove docs dir
2025-07-11 16:55:24 +08:00
houseme
49a5643e76 refactor(config): Unify S3 API and Console ports (#173)
This commit streamlines the server configuration by unifying the S3 API and the WebUI (Console) to serve on a single port.

Previously, the console was managed by separate configuration options (`RUSTFS_CONSOLE_ENABLE` and `RUSTFS_CONSOLE_ADDRESS`), requiring a distinct port. This added complexity to deployment and configuration.

With this change:
- The `RUSTFS_CONSOLE_ADDRESS` and `RUSTFS_CONSOLE_FS_ENDPOINT` environment variables are removed.
- The WebUI is now always available and served directly from the main application port defined by `RUSTFS_ADDRESS`.
- This simplifies setup, reduces the number of exposed ports, and makes the application easier to manage and deploy, especially in containerized environments.

Users should update their startup scripts and remove the deprecated `RUSTFS_CONSOLE_*` variables.
2025-07-11 14:20:22 +08:00
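After the unification, a single-port run looks like this sketch (the port, image, and volume settings follow examples elsewhere in this log):

```bash
docker run -p 9000:9000 \
  -e RUSTFS_ADDRESS=0.0.0.0:9000 \
  -e RUSTFS_VOLUMES=/data \
  -v "$(pwd)/data:/data" \
  rustfs/rustfs:latest
# the console is served from the same port; the RUSTFS_CONSOLE_* variables are removed
```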
loverustfs
657395af8a fix docker quickstart 2025-07-11 10:59:11 +08:00
loverustfs
4de62ed77e fix quickstart 2025-07-11 10:58:22 +08:00
houseme
505f493729 chore: bump workspace dependencies versions (#168)
* upgrade package version

# Conflicts:
#	crates/rio/Cargo.toml

* fix

* upgrade version

* upgrade version

* cargo fmt
2025-07-11 10:35:27 +08:00
weisd
be05b704b0 feat: add Content-Length headers to admin API responses (#169) 2025-07-11 09:40:57 +08:00
安正超
b33c2fa3cf Update build.yml 2025-07-11 09:00:06 +08:00
安正超
98674c60d4 Update README.md 2025-07-11 08:44:50 +08:00
安正超
e39eb86967 fix: remove unused command 2025-07-11 08:03:29 +08:00
weisd
646070ae7a Feat/browser redirect layer (#167)
* feat: add browser redirect layer to route GET requests to console

* refactor: move RedirectLayer to separate layer.rs file

* feat: restrict redirect layer to only handle root path and index.html

* feat: restrict redirect layer to only handle root path /rustfs and index.html
2025-07-11 07:38:42 +08:00
Nugine
2525b66658 refactor: replace lazy_static with LazyLock (#164)
* refactor: replace `lazy_static` with `LazyLock`

* update cursorrules

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
2025-07-10 23:50:46 +08:00
Nugine
58c5a633e2 ci: fix cache (#165) 2025-07-10 23:50:26 +08:00
安正超
aefd894fc2 Update build.yml 2025-07-10 23:49:14 +08:00
安正超
1e1d4646a2 Update build.yml 2025-07-10 23:41:40 +08:00
loverustfs
b97845fffd Simplify user experience and integrate console and endpoint (#162)
* fix unzip error

* fix url change error

* Simplify user experience and integrate console and endpoint
2025-07-10 23:32:02 +08:00
weisd
84f5a4cb48 console web server and the s3 api share the same port (#163)
* merge console router

* make code happy

* Scanner (#156)

* feat: integrate CancellationToken for unified background services management

- Consolidate data scanner and auto heal cancellation tokens into single unified token
- Move GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN to global.rs for centralized management
- Add graceful shutdown support to MRF heal routine with MinIO-compatible logic
- Implement heal_routine_with_cancel method preserving original healing logic
- Update main.rs to use unified background services shutdown mechanism
- Enhance error handling with proper ecstore Result types
- Fix clippy warnings for needless return statements
- Maintain backward compatibility while adding modern cancellation support

This change provides a cleaner architecture for background service lifecycle management
and ensures all healing services can be gracefully shut down through a single token.

Signed-off-by: junxiang Mu <1948535941@qq.com>

* fix: Refact heal and scanner design

Signed-off-by: junxiang Mu <1948535941@qq.com>

* refact: step 2

Signed-off-by: junxiang Mu <1948535941@qq.com>

* feat: refactor scanner module and add data usage statistics

- Move scanner code to scanner/ subdirectory for better organization
- Add data usage statistics collection and persistence
- Implement histogram support for size and version distribution
- Add global cancel token management for scanner operations
- Integrate scanner with ECStore for comprehensive data analysis
- Update error handling and improve test isolation
- Add data usage API endpoints and backend integration

Signed-off-by: junxiang Mu <1948535941@qq.com>

* Chore: fix ref and fix comment

Signed-off-by: junxiang Mu <1948535941@qq.com>

* fix: fix clippy

Signed-off-by: junxiang Mu <1948535941@qq.com>

---------

Signed-off-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: dandan <dandan@dandandeMac-Studio.local>

---------

Signed-off-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: guojidan <63799833+guojidan@users.noreply.github.com>
Co-authored-by: dandan <dandan@dandandeMac-Studio.local>
2025-07-10 23:31:42 +08:00
guojidan
2832f0e089 Scanner (#156)
* feat: integrate CancellationToken for unified background services management

- Consolidate data scanner and auto heal cancellation tokens into single unified token
- Move GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN to global.rs for centralized management
- Add graceful shutdown support to MRF heal routine with MinIO-compatible logic
- Implement heal_routine_with_cancel method preserving original healing logic
- Update main.rs to use unified background services shutdown mechanism
- Enhance error handling with proper ecstore Result types
- Fix clippy warnings for needless return statements
- Maintain backward compatibility while adding modern cancellation support

This change provides a cleaner architecture for background service lifecycle management
and ensures all healing services can be gracefully shut down through a single token.

Signed-off-by: junxiang Mu <1948535941@qq.com>

* fix: Refactor heal and scanner design

Signed-off-by: junxiang Mu <1948535941@qq.com>

* refactor: step 2

Signed-off-by: junxiang Mu <1948535941@qq.com>

* feat: refactor scanner module and add data usage statistics

- Move scanner code to scanner/ subdirectory for better organization
- Add data usage statistics collection and persistence
- Implement histogram support for size and version distribution
- Add global cancel token management for scanner operations
- Integrate scanner with ECStore for comprehensive data analysis
- Update error handling and improve test isolation
- Add data usage API endpoints and backend integration

Signed-off-by: junxiang Mu <1948535941@qq.com>

* Chore: fix ref and fix comment

Signed-off-by: junxiang Mu <1948535941@qq.com>

* fix: fix clippy

Signed-off-by: junxiang Mu <1948535941@qq.com>

---------

Signed-off-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: dandan <dandan@dandandeMac-Studio.local>
2025-07-10 17:10:44 +08:00
Nugine
a3b5445824 ci: use nextest (#148)
* ci: use nextest

* add doctests back
2025-07-10 11:33:12 +08:00
weisd
363e37c791 fix(iam): decrypt_data failed when password changed (#150) 2025-07-10 11:07:01 +08:00
houseme
1b0b041530 fix(gui): Configure application icon for Linux (#147)
* fix linux icon

* set linux icon
2025-07-10 01:09:06 +08:00
安正超
7d5fc87002 fix: extract release notes template to external file to resolve YAML syntax error (#143) 2025-07-09 23:07:10 +08:00
安正超
13130e9dd4 fix: add missing OSSUTIL_BIN variable in linux case branch (#141)
* fix: improve ossutil install logic in GitHub Actions workflow

* wip

* wip

* fix: add missing OSSUTIL_BIN variable in linux case branch
2025-07-09 22:36:37 +08:00
安正超
1061ce11a3 fix: improve ossutil install logic in GitHub Actions workflow (#139)
* fix: improve ossutil install logic in GitHub Actions workflow

* wip

* wip
2025-07-09 21:37:38 +08:00
loverustfs
9f9a74000d Fix dockerfile link error (#138)
* fix unzip error

* fix url change error

fix url change error
2025-07-09 21:04:10 +08:00
shiro.lee
d1863018df Merge pull request #137 from shiroleeee/windows_start
fix: troubleshoot startup failure on Windows
2025-07-09 20:51:42 +08:00
shiro
166080aac8 fix: troubleshoot startup failure on Windows 2025-07-09 20:32:20 +08:00
loverustfs
78b2487639 Delete GUI build workflow 2025-07-09 19:57:01 +08:00
loverustfs
79f4e81fea disable ubuntu & macos GUI
disable ubuntu & macos GUI
2025-07-09 19:50:42 +08:00
loverustfs
28da78d544 Add image fix build error 2025-07-09 19:03:01 +08:00
loverustfs
df2eb9bc6a docs: add status warning 2025-07-09 08:23:40 +00:00
loverustfs
7c20d92fe5 wip: fix ossutil 2025-07-09 08:18:31 +00:00
houseme
b4c316c662 fix logger format (#134)
* fix logger format

* fmt
2025-07-09 16:05:22 +08:00
loverustfs
411b511937 fix: oss utils 2025-07-09 07:48:23 +00:00
loverustfs
c902475443 fix: oss utils 2025-07-09 07:21:23 +00:00
weisd
00d8008a89 Feat/region (#132)
* add region config
2025-07-09 14:48:51 +08:00
houseme
36acb5bce9 feat(console): Enhance network address handling for WebUI (#129)
* add crates homepage,description,keywords,categories,documentation

* add readme

* modify version 0.0.3

* cargo fmt

* fix: yaml.docker-compose.security.no-new-privileges.no-new-privileges-docker-compose.yml (#63)

* Feature up/ilm (#61)

* fix delete-marker expiration. add api_restore.

* remove target return 204

* log level

* fix: make lint build and clippy happy (#71)

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make ci and local use the same toolchain (#72)

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* feat: optimize GitHub Actions workflows with performance improvements (#77)

* feat: optimize GitHub Actions workflows with performance improvements

- Rename workflows with more descriptive names
- Add unified setup action for consistent environment setup
- Optimize caching strategy with Swatinem/rust-cache@v2
- Implement skip-check mechanism to avoid duplicate builds
- Simplify matrix builds with better include/exclude logic
- Add intelligent build strategy checks
- Optimize Docker multi-arch builds
- Improve artifact naming and retention
- Add performance testing with benchmark support
- Enhance security audit with dependency scanning
- Change Chinese comments to English for better maintainability

Performance improvements:
- CI testing: ~35 min (42% faster)
- Build release: ~60 min (50% faster)
- Docker builds: ~45 min (50% faster)
- Security audit: ~8 min (47% faster)

* fix: correct secrets context usage in GitHub Actions workflow

- Move environment variables to job level to fix secrets access issue
- Fix unrecognized named-value 'secrets' error in if condition
- Ensure OSS upload step can properly check for required secrets

* fix: resolve GitHub API rate limit by adding authentication token

- Add github-token input to setup action to authenticate GitHub API requests
- Pass GITHUB_TOKEN to all setup action usages to avoid rate limiting
- Fix arduino/setup-protoc@v3 API access issues in CI/CD workflows
- Ensure protoc installation can successfully access GitHub releases API

* fix: make bucket err (#85)

* Rename DEVELOPMENT.md to CONTRIBUTING.md

* Create issue-translator.yml (#89)

Enable Issues Translator

* fix(dockerfile): correct env variable names for access/secret key and improve compatibility (#90)

* fix: restore Zig and cargo-zigbuild caching in GitHub Actions setup action (#92)

* fix: restore Zig and cargo-zigbuild caching in GitHub Actions setup action

Use mlugg/setup-zig and taiki-e/cache-cargo-install-action to speed up cross-compilation tool installation and avoid repeated downloads. All comments and code are in English.

* fix: use correct taiki-e/install-action for cargo-zigbuild

Use taiki-e/install-action@cargo-zigbuild instead of taiki-e/cache-cargo-install-action@v2 to match the original implementation from PR #77.

* refactor: remove explicit Zig version to use latest stable

* Create CODE_OF_CONDUCT.md

* Create SECURITY.md

* Update issue templates

* Create CLA.md

* docs: update PR template to English version

* fix: improve data scanner random sleep calculation

- Fix random number generation API usage
- Adjust sleep calculation to follow MinIO pattern
- Ensure proper random range for scanner cycles

Signed-off-by: junxiang Mu <1948535941@qq.com>

* fix: support ipv6

* improve log

* add client ip log

* Update rustfs/src/console.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* improve code

* feat: unify package format to zip for all platforms

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: kira-offgrid <kira@offgridsec.com>
Co-authored-by: likewu <likewu@126.com>
Co-authored-by: laoliu <lygn128@163.com>
Co-authored-by: yihong <zouzou0208@gmail.com>
Co-authored-by: 安正超 <anzhengchao@gmail.com>
Co-authored-by: weisd <im@weisd.in>
Co-authored-by: Yone <zhiyu@live.cn>
Co-authored-by: loverustfs <155562731+loverustfs@users.noreply.github.com>
Co-authored-by: junxiang Mu <1948535941@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-09 14:39:40 +08:00
overtrue
e033b019f6 feat: align GUI artifact retention with build-rustfs 2025-07-09 13:19:34 +08:00
overtrue
259b80777e feat: align build-gui condition with build-rustfs 2025-07-09 13:19:11 +08:00
overtrue
abdfad8521 feat: unify package format to zip for all platforms 2025-07-09 12:56:39 +08:00
lihaixing
c498fbcb27 fix: drop writers to close all files; this prevents FileAccessDenied errors when renaming data 2025-07-09 11:09:22 +08:00
loverustfs
874d486b1e fix workflow 2025-07-09 10:28:53 +08:00
weisd
21516251b0 fix: ci (#124) 2025-07-09 09:49:27 +08:00
neo
a2f83b0d2d doc: Add links to translated README versions (#119)
Added language selection links to the README for easier access to translated versions: German, Spanish, French, Japanese, Korean, Portuguese, and Russian.
2025-07-09 09:34:43 +08:00
overtrue
aa65766312 fix: api rate limit 2025-07-09 09:16:04 +08:00
overtrue
660f004cfd fix: api rate limit 2025-07-09 09:11:46 +08:00
loverustfs
6d2c420f54 fix unzip error (#117) 2025-07-09 01:19:12 +08:00
安正超
5f0b9a5fa8 chore: remove skip-duplicate and skip-check jobs from workflows (#116) 2025-07-08 23:55:21 +08:00
安正超
8378e308e0 fix: prevent overwriting existing release content in build workflow (#115) 2025-07-08 23:29:45 +08:00
overtrue
b9f54519fd fix: prevent overwriting existing release content in build workflow 2025-07-08 23:27:13 +08:00
overtrue
4108a9649f refactor: optimize performance workflow trigger conditions
- Replace paths-ignore with paths for more precise control
- Only trigger on Rust source files, Cargo files, and workflow itself
- Improve efficiency by avoiding unnecessary performance tests
- Follow best practices for targeted workflow execution
2025-07-08 23:24:50 +08:00
安正超
6244e23451 refactor: simplify workflow skip logic using do_not_skip parameter (#114)
* feat: ensure workflows never skip execution during version releases

- Modified skip-duplicate-actions to never skip when pushing tags
- Updated all workflow jobs to force execution for tag pushes (version releases)
- Ensures complete CI/CD pipeline execution for releases including:
  - All tests and lint checks
  - Multi-platform builds
  - GUI builds
  - Release asset creation
  - OSS uploads

This guarantees that version releases always undergo full validation
and build processes, maintaining release quality and consistency.

* refactor: simplify workflow skip logic using do_not_skip parameter

- Replace complex conditional expressions with do_not_skip: ['release', 'push']
- Add skip-duplicate-actions to docker.yml workflow
- Ensure all workflows use consistent skip mechanism
- Maintain release and tag push execution guarantee
- Simplify job conditions by removing redundant tag checks

This change makes workflows more maintainable and follows
official skip-duplicate-actions best practices.
2025-07-08 23:08:45 +08:00
安正超
713b322f99 feat: enhance build and release workflow with multi-platform support (#113)
* feat: enhance build and release workflow with multi-platform support

- Add Windows support (x86_64 and ARM64) to build matrix
- Add macOS Intel x86_64 support alongside Apple Silicon
- Improve cross-platform builds with proper toolchain selection
- Use GitHub CLI (gh) for release management instead of GitHub Actions
- Add automatic checksum generation (SHA256/SHA512) for all binaries
- Support different archive formats per platform (zip for Windows, tar.gz for Unix)
- Add comprehensive release notes with installation guides
- Enhanced error handling for console assets download
- Platform-specific build information in packages
- Support both binary and GUI application releases
- Update OSS upload to handle multiple file formats

This brings RustFS builds up to enterprise-grade standards with:
- 6 binary targets (Linux x86_64/ARM64, macOS x86_64/ARM64, Windows x86_64/ARM64)
- Professional release management with checksums
- User-friendly installation instructions
- Multi-platform GUI applications

* feat: add core development principles to cursor rules

- Add precision-first development principle: every change must be precise; don't modify unless you're confident
- Add GitHub CLI priority rule: prefer the `gh` command when creating GitHub PRs
- Emphasize careful analysis before making changes
- Promote use of gh commands for better automation and integration

* refactor: translate cursor rules to English

- Translate core development principles from Chinese to English
- Maintain consistency with project's English-first policy
- Update 'Every change must be precise' principle
- Update 'GitHub PR creation prioritizes gh command usage' rule
- Ensure all cursor rules are in English for better accessibility

* fix: prevent workflow changes from triggering CI/CD pipelines

- Add .github/** to paths-ignore in build.yml workflow
- Add .github/** to paths-ignore in docker.yml workflow
- Update skip-duplicate paths_ignore to include .github files
- Workflow changes should not trigger performance, build, or docker workflows
- Saves unnecessary CI/CD resource usage when updating workflow configurations
- Consistent with performance.yml which already ignores .github/**
2025-07-08 22:49:35 +08:00
安正超
e1a5a195c3 feat: enhance build and release workflow with multi-platform support (#112)
* feat: enhance build and release workflow with multi-platform support

- Add Windows support (x86_64 and ARM64) to build matrix
- Add macOS Intel x86_64 support alongside Apple Silicon
- Improve cross-platform builds with proper toolchain selection
- Use GitHub CLI (gh) for release management instead of GitHub Actions
- Add automatic checksum generation (SHA256/SHA512) for all binaries
- Support different archive formats per platform (zip for Windows, tar.gz for Unix)
- Add comprehensive release notes with installation guides
- Enhanced error handling for console assets download
- Platform-specific build information in packages
- Support both binary and GUI application releases
- Update OSS upload to handle multiple file formats

This brings RustFS builds up to enterprise-grade standards with:
- 6 binary targets (Linux x86_64/ARM64, macOS x86_64/ARM64, Windows x86_64/ARM64)
- Professional release management with checksums
- User-friendly installation instructions
- Multi-platform GUI applications

* feat: add core development principles to cursor rules

- Add precision-first development principle: every change must be precise; don't modify unless you're confident
- Add GitHub CLI priority rule: prefer the `gh` command when creating GitHub PRs
- Emphasize careful analysis before making changes
- Promote use of gh commands for better automation and integration

* refactor: translate cursor rules to English

- Translate core development principles from Chinese to English
- Maintain consistency with project's English-first policy
- Update 'Every change must be precise' principle
- Update 'GitHub PR creation prioritizes gh command usage' rule
- Ensure all cursor rules are in English for better accessibility
2025-07-08 22:39:41 +08:00
安正超
bc37417d6c ci: fix workflows triggering on documentation-only changes (#111)
- Fix performance.yml: now ignores *.md, README*, and docs/**
- Fix build.yml: now ignores documentation files and images
- Fix docker.yml: prevent Docker builds on README changes
- Replace 'paths:' with 'paths-ignore:' to properly exclude docs
- Reduces unnecessary CI runs for documentation-only PRs

This resolves the issue where README changes triggered expensive
CI pipelines including Performance Testing and Docker builds.
2025-07-08 21:20:18 +08:00
安正超
3dbcaaa221 docs: simplify crates README files and enforce PR-only workflow (#110)
* docs: simplify all crates README files

- Remove extensive code examples and detailed documentation
- Convert to minimal module introductions with core feature lists
- Direct users to main RustFS repository for comprehensive docs
- Updated 20 crate README files for consistency and brevity

Files updated:
- crates/rio/README.md (415→15 lines)
- crates/s3select-api/README.md (592→15 lines)
- crates/s3select-query/README.md (658→15 lines)
- crates/signer/README.md (407→15 lines)
- crates/utils/README.md (395→15 lines)
- crates/workers/README.md (463→15 lines)
- crates/zip/README.md (408→15 lines)

* docs: restore original headers in crates README files

- Add back RustFS logo image and CI badges
- Restore formatted headers and structured layout
- Keep simplified content with module introductions
- Maintain consistent documentation structure across all crates

All 20 crate README files now have proper headers while keeping
the simplified content that directs users to the main repository.

* rules: enforce PR-only workflow for main branch

- Strengthen rule that ALL changes must go through pull requests
- Explicitly forbid direct commits to main branch under any circumstances
- Add comprehensive PR requirements and enforcement guidelines
- Clarify that PRs are the ONLY way to merge to main branch
- Add requirement for PR approval before merging
- Include enforcement mechanisms for branch protection
2025-07-08 21:10:07 +08:00
183 changed files with 9251 additions and 12018 deletions

View File

@@ -1,19 +1,39 @@
# RustFS Project Cursor Rules
## ⚠️ CRITICAL DEVELOPMENT RULES ⚠️
## 🚨🚨🚨 CRITICAL DEVELOPMENT RULES - ZERO TOLERANCE 🚨🚨🚨
### 🚨 NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH 🚨
### ⛔️ ABSOLUTE PROHIBITION: NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH ⛔️
- **This is the most important rule - NEVER modify code directly on main or master branch**
- **Always work on feature branches and use pull requests for all changes**
- **Any direct commits to master/main branch are strictly forbidden**
- Before starting any development, always:
1. `git checkout main` (switch to main branch)
2. `git pull` (get latest changes)
3. `git checkout -b feat/your-feature-name` (create and switch to feature branch)
4. Make your changes on the feature branch
5. Commit and push to the feature branch
6. Create a pull request for review
**🔥 THIS IS THE MOST CRITICAL RULE - VIOLATION WILL RESULT IN IMMEDIATE REVERSAL 🔥**
- **🚫 ZERO DIRECT COMMITS TO MAIN/MASTER BRANCH - ABSOLUTELY FORBIDDEN**
- **🚫 ANY DIRECT COMMIT TO MAIN BRANCH MUST BE IMMEDIATELY REVERTED**
- **🚫 NO EXCEPTIONS FOR HOTFIXES, EMERGENCIES, OR URGENT CHANGES**
- **🚫 NO EXCEPTIONS FOR SMALL CHANGES, TYPOS, OR DOCUMENTATION UPDATES**
- **🚫 NO EXCEPTIONS FOR ANYONE - MAINTAINERS, CONTRIBUTORS, OR ADMINS**
### 📋 MANDATORY WORKFLOW - STRICTLY ENFORCED
**EVERY SINGLE CHANGE MUST FOLLOW THIS WORKFLOW:**
1. **Check current branch**: `git branch` (MUST NOT be on main/master)
2. **Switch to main**: `git checkout main`
3. **Pull latest**: `git pull origin main`
4. **Create feature branch**: `git checkout -b feat/your-feature-name`
5. **Make changes ONLY on feature branch**
6. **Test thoroughly before committing**
7. **Commit and push to feature branch**: `git push origin feat/your-feature-name`
8. **Create Pull Request**: Use `gh pr create` (MANDATORY)
9. **Wait for PR approval**: NO self-merging allowed
10. **Merge through GitHub interface**: ONLY after approval
### 🔒 ENFORCEMENT MECHANISMS
- **Branch protection rules**: Main branch is protected
- **Pre-commit hooks**: Will block direct commits to main
- **CI/CD checks**: All PRs must pass before merging
- **Code review requirement**: At least one approval needed
- **Automated reversal**: Direct commits to main will be automatically reverted
## Project Overview
@@ -514,7 +534,7 @@ let results = join_all(futures).await;
### 3. Caching Strategy
- Use `lazy_static` or `OnceCell` for global caching
- Use `LazyLock` for global caching
- Implement LRU cache to avoid memory leaks
## Testing Guidelines
@@ -817,6 +837,7 @@ These rules should serve as guiding principles when developing the RustFS projec
- **🚨 CRITICAL: NEVER modify code directly on main or master branch - THIS IS ABSOLUTELY FORBIDDEN 🚨**
- **⚠️ ANY DIRECT COMMITS TO MASTER/MAIN WILL BE REJECTED AND MUST BE REVERTED IMMEDIATELY ⚠️**
- **🔒 ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO DIRECT COMMITS TO MAIN UNDER ANY CIRCUMSTANCES 🔒**
- **Always work on feature branches - NO EXCEPTIONS**
- Always check the .cursorrules file before starting to ensure you understand the project guidelines
- **MANDATORY workflow for ALL changes:**
@@ -826,13 +847,39 @@ These rules should serve as guiding principles when developing the RustFS projec
4. Make your changes ONLY on the feature branch
5. Test thoroughly before committing
6. Commit and push to the feature branch
7. Create a pull request for code review
7. **Create a pull request for code review - THIS IS THE ONLY WAY TO MERGE TO MAIN**
8. **Wait for PR approval before merging - NEVER merge your own PRs without review**
- Use descriptive branch names following the pattern: `feat/feature-name`, `fix/issue-name`, `refactor/component-name`, etc.
- **Double-check current branch before ANY commit: `git branch` to ensure you're NOT on main/master**
- Ensure all changes are made on feature branches and merged through pull requests
- **Pull Request Requirements:**
- All changes must be submitted via PR regardless of size or urgency
- PRs must include comprehensive description and testing information
- PRs must pass all CI/CD checks before merging
- PRs require at least one approval from code reviewers
- Even hotfixes and emergency changes must go through PR process
- **Enforcement:**
- Main branch should be protected with branch protection rules
- Direct pushes to main should be blocked by repository settings
- Any accidental direct commits to main must be immediately reverted via PR
#### Development Workflow
## 🎯 **Core Development Principles**
- **🔴 Every change must be precise - don't modify unless you're confident**
- Carefully analyze code logic and ensure complete understanding before making changes
- When uncertain, prefer asking users or consulting documentation over blind modifications
- Use small iterative steps, modify only necessary parts at a time
- Evaluate impact scope before changes to ensure no new issues are introduced
- **🚀 GitHub PR creation prioritizes gh command usage**
- Prefer using `gh pr create` command to create Pull Requests
- Avoid having users manually create PRs through web interface
- Provide clear and professional PR titles and descriptions
- Using `gh` commands ensures better integration and automation
## 📝 **Code Quality Requirements**
- Use English for all code comments, documentation, and variable names
- Write meaningful and descriptive names for variables, functions, and methods
- Avoid meaningless test content like "debug 111" or placeholder values

View File

@@ -1,27 +0,0 @@
FROM ubuntu:22.04
ENV LANG C.UTF-8
RUN sed -i s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g /etc/apt/sources.list
RUN apt-get clean && apt-get update && apt-get install wget git curl unzip gcc pkg-config libssl-dev lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev -y
# install protoc
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
&& unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
&& mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
&& mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3
# install flatc
RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
&& unzip Linux.flatc.binary.g++-13.zip \
&& mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip
# install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
COPY .docker/cargo.config.toml /root/.cargo/config.toml
WORKDIR /root/s3-rustfs
CMD [ "bash", "-c", "while true; do sleep 1; done" ]

View File

@@ -1,32 +0,0 @@
FROM rockylinux:9.3 AS builder
ENV LANG C.UTF-8
RUN sed -e 's|^mirrorlist=|#mirrorlist=|g' \
-e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.ustc.edu.cn/rocky|g' \
-i.bak \
/etc/yum.repos.d/rocky-extras.repo \
/etc/yum.repos.d/rocky.repo
RUN dnf makecache
RUN yum install wget git unzip gcc openssl-devel pkgconf-pkg-config -y
# install protoc
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
&& unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
&& mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
&& mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3
# install flatc
RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
&& unzip Linux.flatc.binary.g++-13.zip \
&& mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc \
&& rm -rf Linux.flatc.binary.g++-13.zip
# install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
COPY .docker/cargo.config.toml /root/.cargo/config.toml
WORKDIR /root/s3-rustfs

View File

@@ -1,25 +0,0 @@
FROM ubuntu:22.04
ENV LANG C.UTF-8
RUN sed -i s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g /etc/apt/sources.list
RUN apt-get clean && apt-get update && apt-get install wget git curl unzip gcc pkg-config libssl-dev lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev -y
# install protoc
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
&& unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
&& mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
&& mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3
# install flatc
RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
&& unzip Linux.flatc.binary.g++-13.zip \
&& mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip
# install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
COPY .docker/cargo.config.toml /root/.cargo/config.toml
WORKDIR /root/s3-rustfs

.docker/README.md (new file, 261 lines)
View File

@@ -0,0 +1,261 @@
# RustFS Docker Images
This directory contains Docker configuration files and supporting infrastructure for building and running RustFS container images.
## 📁 Directory Structure
```
rustfs/
├── Dockerfile # Production image (Alpine + pre-built binaries)
├── Dockerfile.source # Development image (Debian + source build)
├── docker-buildx.sh # Multi-architecture build script
├── Makefile # Build automation with simplified commands
└── .docker/ # Supporting infrastructure
    ├── observability/ # Monitoring and observability configs
    ├── compose/ # Docker Compose configurations
    ├── mqtt/ # MQTT broker configs
    └── openobserve-otel/ # OpenObserve + OpenTelemetry configs
```
## 🎯 Image Variants
### Core Images
| Image | Base OS | Build Method | Size | Use Case |
|-------|---------|--------------|------|----------|
| `production` (default) | Alpine 3.18 | GitHub Releases | Smallest | Production deployment |
| `source` | Debian Bookworm | Source build | Medium | Custom builds with cross-compilation |
| `dev` | Debian Bookworm | Development tools | Large | Interactive development |
## 🚀 Usage Examples
### Quick Start (Production)
```bash
# Default production image (Alpine + GitHub Releases)
docker run -p 9000:9000 rustfs/rustfs:latest
# Specific version
docker run -p 9000:9000 rustfs/rustfs:1.2.3
```
### Complete Tag Strategy Examples
```bash
# Stable Releases
docker run rustfs/rustfs:1.2.3 # Main version (production)
docker run rustfs/rustfs:1.2.3-production # Explicit production variant
docker run rustfs/rustfs:1.2.3-source # Source build variant
docker run rustfs/rustfs:latest # Latest stable
# Prerelease Versions
docker run rustfs/rustfs:1.3.0-alpha.2 # Specific alpha version
docker run rustfs/rustfs:alpha # Latest alpha
docker run rustfs/rustfs:beta # Latest beta
docker run rustfs/rustfs:rc # Latest release candidate
# Development Versions
docker run rustfs/rustfs:dev # Latest main branch development
docker run rustfs/rustfs:dev-13e4a0b # Specific commit
docker run rustfs/rustfs:dev-latest # Latest development
docker run rustfs/rustfs:main-latest # Main branch latest
```
### Development Environment
```bash
# Quick setup using Makefile (recommended)
make docker-dev-local # Build development image locally
make dev-env-start # Start development container
# Manual Docker commands
docker run -it -v $(pwd):/workspace -p 9000:9000 rustfs/rustfs:latest-dev
# Build from source locally
docker build -f Dockerfile.source -t rustfs:custom .
# Development with hot reload
docker-compose up rustfs-dev
```
## 🏗️ Build Arguments and Scripts
### Using Makefile Commands (Recommended)
The easiest way to build images using simplified commands:
```bash
# Development images (build from source)
make docker-dev-local # Build for local use (single arch)
make docker-dev # Build multi-arch (for CI/CD)
make docker-dev-push REGISTRY=xxx # Build and push to registry
# Production images (using pre-built binaries)
make docker-buildx # Build multi-arch production images
make docker-buildx-push # Build and push production images
make docker-buildx-version VERSION=v1.0.0 # Build specific version
# Development environment
make dev-env-start # Start development container
make dev-env-stop # Stop development container
make dev-env-restart # Restart development container
# Help
make help-docker # Show all Docker-related commands
```
### Using docker-buildx.sh (Advanced)
For direct script usage and advanced scenarios:
```bash
# Build latest version for all architectures
./docker-buildx.sh
# Build and push to registry
./docker-buildx.sh --push
# Build specific version
./docker-buildx.sh --release v1.2.3
# Build and push specific version
./docker-buildx.sh --release v1.2.3 --push
```
### Manual Docker Builds
All images support dynamic version selection:
```bash
# Build production image with latest release
docker build --build-arg RELEASE="latest" -t rustfs:latest .
# Build from source with specific target
docker build -f Dockerfile.source \
--build-arg TARGETPLATFORM="linux/amd64" \
-t rustfs:source .
# Development build
docker build -f Dockerfile.source -t rustfs:dev .
```
## 🔧 Binary Download Sources
### Unified GitHub Releases
The production image downloads from GitHub Releases for reliability and transparency:
- **production** → GitHub Releases API with automatic latest detection
- **Checksum verification** → SHA256SUMS validation when available (a manual sketch follows this list)
- **Multi-architecture** → Supports amd64 and arm64
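A minimal manual sketch of that download-and-verify flow. The repository path and asset names are assumptions for illustration, loosely following the `rustfs-<platform>-<arch>-v<version>.zip` package naming used by the build workflow:

```bash
# Hypothetical manual equivalent of the image's build-time download;
# repo path and asset names are assumptions, not confirmed API.
VERSION="1.2.3"
ASSET="rustfs-linux-x86_64-v${VERSION}.zip"
BASE="https://github.com/rustfs/rustfs/releases/download/${VERSION}"
curl -fsSLO "${BASE}/${ASSET}"
curl -fsSLO "${BASE}/SHA256SUMS"
sha256sum -c --ignore-missing SHA256SUMS   # verify only the file we downloaded
unzip -o "${ASSET}" -d rustfs-bin
```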
### Source Build
The source variant compiles from source code with advanced features:
- 🔧 **Cross-compilation** → Supports multiple target platforms via `TARGETPLATFORM`
- **Build caching** → sccache for faster compilation
- 🎯 **Optimized builds** → Release optimizations with LTO and symbol stripping
## 📋 Architecture Support
All variants support multi-architecture builds:
- **linux/amd64** (x86_64)
- **linux/arm64** (aarch64)
Architecture is automatically detected during build using Docker's `TARGETARCH` build argument.
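For example, a single buildx invocation covers both supported platforms; Docker populates `TARGETARCH`/`TARGETPLATFORM` for each one automatically (the builder name below is arbitrary):

```bash
# Multi-arch build sketch; add --push to publish the result to a registry.
docker buildx create --use --name rustfs-builder 2>/dev/null || true
docker buildx build --platform linux/amd64,linux/arm64 -t rustfs/rustfs:custom .
```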
## 🔐 Security Features
- **Checksum Verification**: Production image verifies SHA256SUMS when available
- **Non-root User**: All images run as user `rustfs` (UID 1000; see the quick check below)
- **Minimal Runtime**: Production image only includes necessary dependencies
- **Secure Defaults**: No hardcoded credentials or keys
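A quick way to confirm the non-root default; overriding the entrypoint to run `id` is just for inspection and assumes the image permits it:

```bash
# Should report uid=1000(rustfs) if the image follows the convention above.
docker run --rm --entrypoint id rustfs/rustfs:latest
```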
## 🛠️ Development Workflow
### Quick Start with Makefile (Recommended)
```bash
# 1. Start development environment
make dev-env-start
# 2. Your development container is now running with:
# - Port 9000 exposed for RustFS
# - Port 9010 exposed for admin console
# - Current directory mounted as /workspace
# 3. Stop when done
make dev-env-stop
```
### Manual Development Setup
```bash
# Build development image from source
make docker-dev-local
# Or use traditional Docker commands
docker build -f Dockerfile.source -t rustfs:dev .
# Run with development tools
docker run -it -v $(pwd):/workspace -p 9000:9000 rustfs:dev bash
# Or use docker-compose for complex setups
docker-compose up rustfs-dev
```
### Common Development Tasks
```bash
# Build and test locally
make build # Build binary natively
make docker-dev-local # Build development Docker image
make test # Run tests
make fmt # Format code
make clippy # Run linter
# Get help
make help # General help
make help-docker # Docker-specific help
make help-build # Build-specific help
```
## 🚀 CI/CD Integration
The project uses GitHub Actions for automated multi-architecture Docker builds:
### Automated Builds
- **Tags**: Automatic builds triggered on version tags (e.g., `v1.2.3`)
- **Main Branch**: Development builds with `dev-latest` and `main-latest` tags
- **Pull Requests**: Test builds without registry push
### Build Variants
Each build creates three image variants:
- `rustfs/rustfs:v1.2.3` (production - Alpine-based)
- `rustfs/rustfs:v1.2.3-source` (source build - Debian-based)
- `rustfs/rustfs:v1.2.3-dev` (development - Debian-based with tools)
### Manual Builds
Trigger custom builds via GitHub Actions:
```bash
# Use workflow_dispatch to build specific versions
# Available options: latest, main-latest, dev-latest, v1.2.3, dev-abc123
```
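From the command line, such a dispatch might look like the following; the workflow file name and the `version` input name are assumptions based on the options listed above:

```bash
# Hypothetical manual dispatch via GitHub CLI; adjust names to match the workflow.
gh workflow run docker.yml -f version=dev-latest
```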
## 📦 Supporting Infrastructure
The `.docker/` directory contains supporting configuration files:
- **observability/** - Prometheus, Grafana, OpenTelemetry configs
- **compose/** - Multi-service Docker Compose setups
- **mqtt/** - MQTT broker configurations
- **openobserve-otel/** - Log aggregation and tracing setup
See individual README files in each subdirectory for specific usage instructions.

View File

@@ -1,19 +0,0 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[source.crates-io]
registry = "https://github.com/rust-lang/crates.io-index"
[net]
git-fetch-with-cli = true

.docker/compose/README.md (new file, 80 lines)
View File

@@ -0,0 +1,80 @@
# Docker Compose Configurations
This directory contains specialized Docker Compose configurations for different use cases.
## 📁 Configuration Files
This directory contains specialized Docker Compose configurations and their associated Dockerfiles, keeping related files organized together.
### Main Configuration (Root Directory)
- **`../../docker-compose.yml`** - **Default Production Setup**
- Complete production-ready configuration
- Includes RustFS server + full observability stack
- Supports multiple profiles: `dev`, `observability`, `cache`, `proxy`
- Recommended for most users
### Specialized Configurations
- **`docker-compose.cluster.yaml`** - **Distributed Testing**
- 4-node cluster setup for testing distributed storage
- Uses local compiled binaries
- Simulates multi-node environment
- Ideal for development and cluster testing
- **`docker-compose.observability.yaml`** - **Observability Focus**
- Specialized setup for testing observability features
- Includes OpenTelemetry, Jaeger, Prometheus, Loki, Grafana
- Uses `../../Dockerfile.source` for builds
- Perfect for observability development
## 🚀 Usage Examples
### Production Setup
```bash
# Start main service
docker-compose up -d
# Start with development profile
docker-compose --profile dev up -d
# Start with full observability
docker-compose --profile observability up -d
```
### Cluster Testing
```bash
# Build and start 4-node cluster (run from project root)
cd .docker/compose
docker-compose -f docker-compose.cluster.yaml up -d
# Or run directly from project root
docker-compose -f .docker/compose/docker-compose.cluster.yaml up -d
```
### Observability Testing
```bash
# Start observability-focused environment (run from project root)
cd .docker/compose
docker-compose -f docker-compose.observability.yaml up -d
# Or run directly from project root
docker-compose -f .docker/compose/docker-compose.observability.yaml up -d
```
## 🔧 Configuration Overview
| Configuration | Nodes | Storage | Observability | Use Case |
|---------------|-------|---------|---------------|----------|
| **Main** | 1 | Volume mounts | Full stack | Production |
| **Cluster** | 4 | HTTP endpoints | Basic | Testing |
| **Observability** | 4 | Local data | Advanced | Development |
## 📝 Notes
- Always ensure you have built the required binaries before starting cluster tests (see the build sketch below)
- The main configuration is sufficient for most use cases
- Specialized configurations are for specific testing scenarios
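A minimal sketch of that prerequisite build, using the target triple from the volume paths in `docker-compose.cluster.yaml` (installing the musl toolchain and any C dependencies is left out):

```bash
# Build the statically linked binary that the cluster nodes mount at /app/rustfs.
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl -p rustfs
ls -lh target/x86_64-unknown-linux-musl/release/rustfs
```

On non-Linux hosts, the repository's `cargo zigbuild` path (used in CI for cross-compilation) may be needed instead of plain `cargo build`.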

View File

@@ -14,70 +14,69 @@
services:
node0:
image: rustfs:v1 # Replace with your image name and tag
image: rustfs/rustfs:latest # Replace with your image name and tag
container_name: node0
hostname: node0
environment:
- RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
- RUSTFS_ACCESS_KEY=rustfsadmin
- RUSTFS_SECRET_KEY=rustfsadmin
platform: linux/amd64
ports:
- "9000:9000" # 映射宿主机的 9001 端口到容器的 9000 端口
- "8000:9001" # 映射宿主机的 9001 端口到容器的 9000 端口
- "9000:9000" # Map port 9001 of the host to port 9000 of the container
volumes:
- ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
# - ./data/node0:/data # Mount the local data directory into the container
- ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
command: "/app/rustfs"
node1:
image: rustfs:v1
image: rustfs/rustfs:latest
container_name: node1
hostname: node1
environment:
- RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
- RUSTFS_ACCESS_KEY=rustfsadmin
- RUSTFS_SECRET_KEY=rustfsadmin
platform: linux/amd64
ports:
- "9001:9000" # 映射宿主机的 9002 端口到容器的 9000 端口
- "9001:9000" # Map port 9002 of the host to port 9000 of the container
volumes:
- ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
# - ./data/node1:/data
- ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
command: "/app/rustfs"
node2:
image: rustfs:v1
image: rustfs/rustfs:latest
container_name: node2
hostname: node2
environment:
- RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
- RUSTFS_ACCESS_KEY=rustfsadmin
- RUSTFS_SECRET_KEY=rustfsadmin
platform: linux/amd64
ports:
- "9002:9000" # 映射宿主机的 9003 端口到容器的 9000 端口
- "9002:9000" # Map port 9003 of the host to port 9000 of the container
volumes:
- ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
# - ./data/node2:/data
- ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
command: "/app/rustfs"
node3:
image: rustfs:v1
image: rustfs/rustfs:latest
container_name: node3
hostname: node3
environment:
- RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
- RUSTFS_ACCESS_KEY=rustfsadmin
- RUSTFS_SECRET_KEY=rustfsadmin
platform: linux/amd64
ports:
- "9003:9000" # 映射宿主机的 9004 端口到容器的 9000 端口
- "9003:9000" # Map port 9004 of the host to port 9000 of the container
volumes:
- ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
# - ./data/node3:/data
- ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
command: "/app/rustfs"

View File

@@ -14,11 +14,11 @@
services:
otel-collector:
image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.127.0
image: otel/opentelemetry-collector-contrib:0.129.1
environment:
- TZ=Asia/Shanghai
volumes:
- ./.docker/observability/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
- ../../.docker/observability/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
ports:
- 1888:1888
- 8888:8888
@@ -30,7 +30,7 @@ services:
networks:
- rustfs-network
jaeger:
image: jaegertracing/jaeger:2.6.0
image: jaegertracing/jaeger:2.8.0
environment:
- TZ=Asia/Shanghai
ports:
@@ -40,11 +40,11 @@ services:
networks:
- rustfs-network
prometheus:
image: prom/prometheus:v3.4.1
image: prom/prometheus:v3.4.2
environment:
- TZ=Asia/Shanghai
volumes:
- ./.docker/observability/prometheus.yml:/etc/prometheus/prometheus.yml
- ../../.docker/observability/prometheus.yml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
networks:
@@ -54,16 +54,16 @@ services:
environment:
- TZ=Asia/Shanghai
volumes:
- ./.docker/observability/loki-config.yaml:/etc/loki/local-config.yaml
- ../../.docker/observability/loki-config.yaml:/etc/loki/local-config.yaml
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
networks:
- rustfs-network
grafana:
image: grafana/grafana:12.0.1
image: grafana/grafana:12.0.2
ports:
- "3000:3000" # Web UI
- "3000:3000" # Web UI
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
- TZ=Asia/Shanghai
@@ -72,85 +72,69 @@ services:
node1:
build:
context: .
dockerfile: Dockerfile.obs
context: ../..
dockerfile: Dockerfile.source
container_name: node1
environment:
- RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
- RUSTFS_ADDRESS=:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=:9002
- RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
- RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
- RUSTFS_OBS_LOGGER_LEVEL=debug
platform: linux/amd64
ports:
- "9001:9000" # 映射宿主机的 9001 端口到容器的 9000 端口
- "9101:9002"
volumes:
# - ./data:/root/data # Mount the local data directory to /root/data in the container
- ./.docker/observability/config:/etc/observability/config
- "9001:9000" # Map port 9001 of the host to port 9000 of the container
networks:
- rustfs-network
node2:
build:
context: .
dockerfile: Dockerfile.obs
context: ../..
dockerfile: Dockerfile.source
container_name: node2
environment:
- RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
- RUSTFS_ADDRESS=:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=:9002
- RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
- RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
- RUSTFS_OBS_LOGGER_LEVEL=debug
platform: linux/amd64
ports:
- "9002:9000" # 映射宿主机的 9002 端口到容器的 9000 端口
- "9102:9002"
volumes:
# - ./data:/root/data
- ./.docker/observability/config:/etc/observability/config
- "9002:9000" # Map port 9002 of the host to port 9000 of the container
networks:
- rustfs-network
node3:
build:
context: .
dockerfile: Dockerfile.obs
context: ../..
dockerfile: Dockerfile.source
container_name: node3
environment:
- RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
- RUSTFS_ADDRESS=:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=:9002
- RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
- RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
- RUSTFS_OBS_LOGGER_LEVEL=debug
platform: linux/amd64
ports:
- "9003:9000" # 映射宿主机的 9003 端口到容器的 9000 端口
- "9103:9002"
volumes:
# - ./data:/root/data
- ./.docker/observability/config:/etc/observability/config
- "9003:9000" # Map port 9003 of the host to port 9000 of the container
networks:
- rustfs-network
node4:
build:
context: .
dockerfile: Dockerfile.obs
context: ../..
dockerfile: Dockerfile.source
container_name: node4
environment:
- RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
- RUSTFS_ADDRESS=:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=:9002
- RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
- RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
- RUSTFS_OBS_LOGGER_LEVEL=debug
platform: linux/amd64
ports:
- "9004:9000" # 映射宿主机的 9004 端口到容器的 9000 端口
- "9104:9002"
volumes:
# - ./data:/root/data
- ./.docker/observability/config:/etc/observability/config
- "9004:9000" # Map port 9004 of the host to port 9000 of the container
networks:
- rustfs-network

View File

@@ -13,24 +13,40 @@
# limitations under the License.
services:
tempo:
image: grafana/tempo:latest
#user: root # The container must be started with root to execute chown in the script
#entrypoint: [ "/etc/tempo/entrypoint.sh" ] # Specify a custom entry point
command: [ "-config.file=/etc/tempo.yaml" ] # This is passed as a parameter to the entry point script
volumes:
- ./tempo-entrypoint.sh:/etc/tempo/entrypoint.sh # Mount entry point script
- ./tempo.yaml:/etc/tempo.yaml
- ./tempo-data:/var/tempo
ports:
- "3200:3200" # tempo
- "24317:4317" # otlp grpc
networks:
- otel-network
otel-collector:
image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.127.0
image: otel/opentelemetry-collector-contrib:0.129.1
environment:
- TZ=Asia/Shanghai
volumes:
- ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
ports:
- 1888:1888
- 8888:8888
- 8889:8889
- 13133:13133
- 4317:4317
- 4318:4318
- 55679:55679
- "1888:1888"
- "8888:8888"
- "8889:8889"
- "13133:13133"
- "4317:4317"
- "4318:4318"
- "55679:55679"
networks:
- otel-network
jaeger:
image: jaegertracing/jaeger:2.7.0
image: jaegertracing/jaeger:2.8.0
environment:
- TZ=Asia/Shanghai
ports:
@@ -40,7 +56,7 @@ services:
networks:
- otel-network
prometheus:
image: prom/prometheus:v3.4.1
image: prom/prometheus:v3.4.2
environment:
- TZ=Asia/Shanghai
volumes:
@@ -64,6 +80,8 @@ services:
image: grafana/grafana:12.0.2
ports:
- "3000:3000" # Web UI
volumes:
- ./grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
- TZ=Asia/Shanghai

View File

@@ -0,0 +1,32 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
uid: prometheus
access: proxy
orgId: 1
url: http://prometheus:9090
basicAuth: false
isDefault: false
version: 1
editable: false
jsonData:
httpMethod: GET
- name: Tempo
type: tempo
access: proxy
orgId: 1
url: http://tempo:3200
basicAuth: false
isDefault: true
version: 1
editable: false
apiVersion: 1
uid: tempo
jsonData:
httpMethod: GET
serviceMap:
datasourceUid: prometheus
streamingEnabled:
search: true

View File

@@ -33,6 +33,10 @@ exporters:
endpoint: "jaeger:4317" # Jaeger 的 OTLP gRPC 端点
tls:
insecure: true # 开发环境禁用 TLS生产环境需配置证书
otlp/tempo: # OTLP 导出器,用于跟踪数据
endpoint: "tempo:4317" # tempo 的 OTLP gRPC 端点
tls:
insecure: true # 开发环境禁用 TLS生产环境需配置证书
prometheus: # Prometheus 导出器,用于指标数据
endpoint: "0.0.0.0:8889" # Prometheus 刮取端点
namespace: "rustfs" # 指标前缀
@@ -53,7 +57,7 @@ service:
traces:
receivers: [ otlp ]
processors: [ memory_limiter,batch ]
exporters: [ otlp/traces ]
exporters: [ otlp/traces,otlp/tempo ]
metrics:
receivers: [ otlp ]
processors: [ batch ]
@@ -66,6 +70,12 @@ service:
logs:
level: "info" # Collector 日志级别
metrics:
address: "0.0.0.0:8888" # Collector 自身指标暴露
level: "detailed" # 可以是 basic, normal, detailed
readers:
- periodic:
exporter:
otlp:
protocol: http/protobuf
endpoint: http://otel-collector:4318

View File

@@ -18,8 +18,11 @@ global:
scrape_configs:
- job_name: 'otel-collector'
static_configs:
- targets: ['otel-collector:8888'] # Scrape metrics from the Collector
- targets: [ 'otel-collector:8888' ] # Scrape metrics from the Collector
- job_name: 'otel-metrics'
static_configs:
- targets: ['otel-collector:8889'] # Application metrics
- targets: [ 'otel-collector:8889' ] # Application metrics
- job_name: 'tempo'
static_configs:
- targets: [ 'tempo:3200' ]

View File

@@ -0,0 +1 @@
*

View File

@@ -0,0 +1,8 @@
#!/bin/sh
# Run as root to fix directory permissions
chown -R 10001:10001 /var/tempo
# Use su-exec (a lightweight sudo/gosu alternative commonly used in Alpine images)
# Switch to user 10001 and execute the original command (CMD) passed to the script
# "$@" represents all parameters passed to this script, i.e. command in docker-compose
exec su-exec 10001:10001 /tempo "$@"

View File

@@ -0,0 +1,55 @@
stream_over_http_enabled: true
server:
http_listen_port: 3200
log_level: info
query_frontend:
search:
duration_slo: 5s
throughput_bytes_slo: 1.073741824e+09
metadata_slo:
duration_slo: 5s
throughput_bytes_slo: 1.073741824e+09
trace_by_id:
duration_slo: 5s
distributor:
receivers:
otlp:
protocols:
grpc:
endpoint: "tempo:4317"
ingester:
max_block_duration: 5m # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally
compactor:
compaction:
block_retention: 1h # overall Tempo trace retention. set for demo purposes
metrics_generator:
registry:
external_labels:
source: tempo
cluster: docker-compose
storage:
path: /var/tempo/generator/wal
remote_write:
- url: http://prometheus:9090/api/v1/write
send_exemplars: true
traces_storage:
path: /var/tempo/generator/traces
storage:
trace:
backend: local # backend configuration to use
wal:
path: /var/tempo/wal # where to store the wal locally
local:
path: /var/tempo/blocks
overrides:
defaults:
metrics_generator:
processors: [ service-graphs, span-metrics, local-blocks ] # enables metrics generator
generate_native_histograms: both

View File

@@ -60,18 +60,11 @@ runs:
pkg-config \
libssl-dev
- name: Cache protoc binary
id: cache-protoc
uses: actions/cache@v4
with:
path: ~/.local/bin/protoc
key: protoc-31.1-${{ runner.os }}-${{ runner.arch }}
- name: Install protoc
if: steps.cache-protoc.outputs.cache-hit != 'true'
uses: arduino/setup-protoc@v3
with:
version: "31.1"
repo-token: ${{ inputs.github-token }}
- name: Install flatc
uses: Nugine/setup-flatc@v1
@@ -93,6 +86,9 @@ runs:
if: inputs.install-cross-tools == 'true'
uses: taiki-e/install-action@cargo-zigbuild
- name: Install cargo-nextest
uses: taiki-e/install-action@cargo-nextest
- name: Setup Rust cache
uses: Swatinem/rust-cache@v2
with:
@@ -100,7 +96,3 @@ runs:
cache-on-failure: true
shared-key: ${{ inputs.cache-shared-key }}
save-if: ${{ inputs.cache-save-if }}
# Cache workspace dependencies
workspaces: |
. -> target
cli/rustfs-gui -> cli/rustfs-gui/target

View File

@@ -12,36 +12,62 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Build and Release Workflow
#
# This workflow builds RustFS binaries and automatically triggers Docker image builds.
#
# Flow:
# 1. Build binaries for multiple platforms
# 2. Upload binaries to OSS storage
# 3. Trigger docker.yml to build and push images using the uploaded binaries
#
# Manual Parameters:
# - build_docker: Build and push Docker images (default: true)
name: Build and Release
on:
push:
tags: ["*"]
tags: ["*.*.*"]
branches: [main]
paths:
- "rustfs/**"
- "cli/**"
- "crates/**"
- "Cargo.toml"
- "Cargo.lock"
- ".github/workflows/build.yml"
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
pull_request:
branches: [main]
paths:
- "rustfs/**"
- "cli/**"
- "crates/**"
- "Cargo.toml"
- "Cargo.lock"
- ".github/workflows/build.yml"
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
schedule:
- cron: "0 0 * * 0" # Weekly on Sunday at midnight UTC
workflow_dispatch:
inputs:
force_build:
description: "Force build even without changes"
build_docker:
description: "Build and push Docker images after binary build"
required: false
default: false
default: true
type: boolean
env:
@@ -51,60 +77,84 @@ env:
CARGO_INCREMENTAL: 0
jobs:
# First layer: GitHub Actions level optimization (handling duplicates and concurrency)
skip-duplicate:
name: Skip Duplicate Actions
runs-on: ubuntu-latest
outputs:
should_skip: ${{ steps.skip_check.outputs.should_skip }}
steps:
- name: Skip duplicate actions
id: skip_check
uses: fkirc/skip-duplicate-actions@v5
with:
concurrent_skipping: "same_content_newer"
cancel_others: true
paths_ignore: '["*.md", "docs/**", "deploy/**", "scripts/dev_*.sh"]'
# Second layer: Business logic level checks (handling build strategy)
# Build strategy check - determine build type based on trigger
build-check:
name: Build Strategy Check
needs: skip-duplicate
if: needs.skip-duplicate.outputs.should_skip != 'true'
runs-on: ubuntu-latest
outputs:
should_build: ${{ steps.check.outputs.should_build }}
build_type: ${{ steps.check.outputs.build_type }}
version: ${{ steps.check.outputs.version }}
short_sha: ${{ steps.check.outputs.short_sha }}
is_prerelease: ${{ steps.check.outputs.is_prerelease }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Determine build strategy
id: check
run: |
should_build=false
build_type="none"
version=""
short_sha=""
is_prerelease=false
# Business logic: when we need to build
if [[ "${{ github.event_name }}" == "schedule" ]] || \
[[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ github.event.inputs.force_build }}" == "true" ]] || \
[[ "${{ contains(github.event.head_commit.message, '--build') }}" == "true" ]]; then
# Get short SHA for all builds
short_sha=$(git rev-parse --short HEAD)
# Determine build type based on trigger
if [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
# Tag push - release or prerelease
should_build=true
tag_name="${GITHUB_REF#refs/tags/}"
version="${tag_name}"
# Check if this is a prerelease
if [[ "$tag_name" == *"alpha"* ]] || [[ "$tag_name" == *"beta"* ]] || [[ "$tag_name" == *"rc"* ]]; then
build_type="prerelease"
is_prerelease=true
echo "🚀 Prerelease build detected: $tag_name"
else
build_type="release"
echo "📦 Release build detected: $tag_name"
fi
elif [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
# Main branch push - development build
should_build=true
build_type="development"
fi
if [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
version="dev-${short_sha}"
echo "🛠️ Development build detected"
elif [[ "${{ github.event_name }}" == "schedule" ]] || \
[[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ contains(github.event.head_commit.message, '--build') }}" == "true" ]]; then
# Scheduled or manual build
should_build=true
build_type="release"
build_type="development"
version="dev-${short_sha}"
echo "⚡ Manual/scheduled build detected"
fi
echo "should_build=$should_build" >> $GITHUB_OUTPUT
echo "build_type=$build_type" >> $GITHUB_OUTPUT
echo "Build needed: $should_build (type: $build_type)"
echo "version=$version" >> $GITHUB_OUTPUT
echo "short_sha=$short_sha" >> $GITHUB_OUTPUT
echo "is_prerelease=$is_prerelease" >> $GITHUB_OUTPUT
echo "📊 Build Summary:"
echo " - Should build: $should_build"
echo " - Build type: $build_type"
echo " - Version: $version"
echo " - Short SHA: $short_sha"
echo " - Is prerelease: $is_prerelease"
# Build RustFS binaries
build-rustfs:
name: Build RustFS
needs: [skip-duplicate, build-check]
if: needs.skip-duplicate.outputs.should_skip != 'true' && needs.build-check.outputs.should_build == 'true'
needs: [build-check]
if: needs.build-check.outputs.should_build == 'true'
runs-on: ${{ matrix.os }}
timeout-minutes: 60
env:
@@ -113,15 +163,33 @@ jobs:
fail-fast: false
matrix:
include:
# Linux builds
- os: ubuntu-latest
target: x86_64-unknown-linux-musl
cross: false
platform: linux
- os: ubuntu-latest
target: aarch64-unknown-linux-musl
cross: true
platform: linux
# macOS builds
- os: macos-latest
target: aarch64-apple-darwin
cross: false
platform: macos
- os: macos-latest
target: x86_64-apple-darwin
cross: false
platform: macos
# # Windows builds (temporarily disabled)
# - os: windows-latest
# target: x86_64-pc-windows-msvc
# cross: false
# platform: windows
# - os: windows-latest
# target: aarch64-pc-windows-msvc
# cross: true
# platform: windows
steps:
- name: Checkout repository
uses: actions/checkout@v4
@@ -141,136 +209,501 @@ jobs:
- name: Download static console assets
run: |
mkdir -p ./rustfs/static
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
-o console.zip --retry 3 --retry-delay 5 --max-time 300
unzip -o console.zip -d ./rustfs/static
rm console.zip
if [[ "${{ matrix.platform }}" == "windows" ]]; then
curl.exe -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" -o console.zip --retry 3 --retry-delay 5 --max-time 300
if [[ $? -eq 0 ]]; then
unzip -o console.zip -d ./rustfs/static
rm console.zip
else
echo "Warning: Failed to download console assets, continuing without them"
echo "// Static assets not available" > ./rustfs/static/empty.txt
fi
else
chmod +w ./rustfs/static/LICENSE || true
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
-o console.zip --retry 3 --retry-delay 5 --max-time 300
if [[ $? -eq 0 ]]; then
unzip -o console.zip -d ./rustfs/static
rm console.zip
else
echo "Warning: Failed to download console assets, continuing without them"
echo "// Static assets not available" > ./rustfs/static/empty.txt
fi
fi
- name: Build RustFS
run: |
# Force rebuild by touching build.rs
touch rustfs/build.rs
if [[ "${{ matrix.cross }}" == "true" ]]; then
cargo zigbuild --release --target ${{ matrix.target }} -p rustfs --bins
if [[ "${{ matrix.platform }}" == "windows" ]]; then
# Use cross for Windows ARM64
cargo install cross --git https://github.com/cross-rs/cross
cross build --release --target ${{ matrix.target }} -p rustfs --bins
else
# Use zigbuild for other cross-compilation
cargo zigbuild --release --target ${{ matrix.target }} -p rustfs --bins
fi
else
cargo build --release --target ${{ matrix.target }} -p rustfs --bins
fi
- name: Create release package
id: package
shell: bash
run: |
PACKAGE_NAME="rustfs-${{ matrix.target }}"
mkdir -p "${PACKAGE_NAME}"/{bin,docs}
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
VERSION="${{ needs.build-check.outputs.version }}"
SHORT_SHA="${{ needs.build-check.outputs.short_sha }}"
# Copy binary
if [[ "${{ matrix.target }}" == *"windows"* ]]; then
cp target/${{ matrix.target }}/release/rustfs.exe "${PACKAGE_NAME}/bin/"
# Extract platform and arch from target
TARGET="${{ matrix.target }}"
PLATFORM="${{ matrix.platform }}"
# Map target to architecture
case "$TARGET" in
*x86_64*)
ARCH="x86_64"
;;
*aarch64*|*arm64*)
ARCH="aarch64"
;;
*armv7*)
ARCH="armv7"
;;
*)
ARCH="unknown"
;;
esac
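# Illustrative mapping for the matrix targets above (assumed inputs):
#   x86_64-unknown-linux-musl -> x86_64
#   aarch64-apple-darwin      -> aarch64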
# Generate package name based on build type
if [[ "$BUILD_TYPE" == "development" ]]; then
# Development build: rustfs-${platform}-${arch}-dev-${short_sha}.zip
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-dev-${SHORT_SHA}"
else
cp target/${{ matrix.target }}/release/rustfs "${PACKAGE_NAME}/bin/"
chmod +x "${PACKAGE_NAME}/bin/rustfs"
# Release/Prerelease build: rustfs-${platform}-${arch}-v${version}.zip
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-v${VERSION}"
fi
# Copy documentation
[ -f "LICENSE" ] && cp LICENSE "${PACKAGE_NAME}/docs/"
[ -f "README.md" ] && cp README.md "${PACKAGE_NAME}/docs/"
# Create zip packages for all platforms
# Ensure zip is available
if ! command -v zip &> /dev/null; then
if [[ "${{ matrix.os }}" == "ubuntu-latest" ]]; then
sudo apt-get update && sudo apt-get install -y zip
fi
fi
# Create archive
tar -czf "${PACKAGE_NAME}.tar.gz" "${PACKAGE_NAME}"
cd target/${{ matrix.target }}/release
zip "../../../${PACKAGE_NAME}.zip" rustfs
cd ../../..
echo "package_name=${PACKAGE_NAME}" >> $GITHUB_OUTPUT
echo "Package created: ${PACKAGE_NAME}.tar.gz"
echo "package_file=${PACKAGE_NAME}.zip" >> $GITHUB_OUTPUT
echo "build_type=${BUILD_TYPE}" >> $GITHUB_OUTPUT
echo "version=${VERSION}" >> $GITHUB_OUTPUT
echo "📦 Package created: ${PACKAGE_NAME}.zip"
echo "🔧 Build type: ${BUILD_TYPE}"
echo "📊 Version: ${VERSION}"
- name: Upload artifacts
uses: actions/upload-artifact@v4
with:
name: ${{ steps.package.outputs.package_name }}
path: ${{ steps.package.outputs.package_name }}.tar.gz
path: ${{ steps.package.outputs.package_file }}
retention-days: ${{ startsWith(github.ref, 'refs/tags/') && 30 || 7 }}
# Build GUI (only for releases)
build-gui:
name: Build GUI
needs: [skip-duplicate, build-check, build-rustfs]
if: needs.skip-duplicate.outputs.should_skip != 'true' && needs.build-check.outputs.build_type == 'release'
runs-on: ${{ matrix.os }}
timeout-minutes: 45
strategy:
fail-fast: false
matrix:
include:
- os: ubuntu-latest
target: x86_64-unknown-linux-musl
platform: linux
- os: macos-latest
target: aarch64-apple-darwin
platform: macos
- name: Upload to Aliyun OSS
if: env.OSS_ACCESS_KEY_ID != '' && (needs.build-check.outputs.build_type == 'release' || needs.build-check.outputs.build_type == 'prerelease' || needs.build-check.outputs.build_type == 'development')
env:
OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
OSS_REGION: cn-beijing
OSS_ENDPOINT: https://oss-cn-beijing.aliyuncs.com
run: |
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
# Install ossutil (platform-specific)
OSSUTIL_VERSION="2.1.1"
case "${{ matrix.platform }}" in
linux)
if [[ "$(uname -m)" == "arm64" ]]; then
ARCH="arm64"
else
ARCH="amd64"
fi
OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-linux-${ARCH}.zip"
OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-linux-${ARCH}"
curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
unzip "$OSSUTIL_ZIP"
mv "${OSSUTIL_DIR}/ossutil" /usr/local/bin/
rm -rf "$OSSUTIL_DIR" "$OSSUTIL_ZIP"
chmod +x /usr/local/bin/ossutil
OSSUTIL_BIN=ossutil
;;
macos)
if [[ "$(uname -m)" == "arm64" ]]; then
ARCH="arm64"
else
ARCH="amd64"
fi
OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-mac-${ARCH}.zip"
OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-mac-${ARCH}"
curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
unzip "$OSSUTIL_ZIP"
mv "${OSSUTIL_DIR}/ossutil" /usr/local/bin/
rm -rf "$OSSUTIL_DIR" "$OSSUTIL_ZIP"
chmod +x /usr/local/bin/ossutil
OSSUTIL_BIN=ossutil
;;
esac
# Determine upload path based on build type
if [[ "$BUILD_TYPE" == "development" ]]; then
OSS_PATH="oss://rustfs-artifacts/artifacts/rustfs/dev/"
echo "📤 Uploading development build to OSS dev directory"
else
OSS_PATH="oss://rustfs-artifacts/artifacts/rustfs/release/"
echo "📤 Uploading release build to OSS release directory"
fi
# Upload the package file to OSS
echo "Uploading ${{ steps.package.outputs.package_file }} to $OSS_PATH..."
$OSSUTIL_BIN cp "${{ steps.package.outputs.package_file }}" "$OSS_PATH" --force
# For release and prerelease builds, also create a latest version
if [[ "$BUILD_TYPE" == "release" ]] || [[ "$BUILD_TYPE" == "prerelease" ]]; then
# Extract platform and arch from package name
PACKAGE_NAME="${{ steps.package.outputs.package_name }}"
# Create latest version filename
# Convert from rustfs-linux-x86_64-v1.0.0 to rustfs-linux-x86_64-latest
LATEST_FILE="${PACKAGE_NAME%-v*}-latest.zip"
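# "${PACKAGE_NAME%-v*}" strips the shortest suffix matching "-v*", e.g.
# rustfs-linux-x86_64-v1.0.0 -> rustfs-linux-x86_64, yielding rustfs-linux-x86_64-latest.zip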
# Copy the original file to latest version
cp "${{ steps.package.outputs.package_file }}" "$LATEST_FILE"
# Upload the latest version
echo "Uploading latest version: $LATEST_FILE to $OSS_PATH..."
$OSSUTIL_BIN cp "$LATEST_FILE" "$OSS_PATH" --force
echo "✅ Latest version uploaded: $LATEST_FILE"
fi
# For development builds, create dev-latest version
if [[ "$BUILD_TYPE" == "development" ]]; then
# Extract platform and arch from package name
PACKAGE_NAME="${{ steps.package.outputs.package_name }}"
# Create dev-latest version filename
# Convert from rustfs-linux-x86_64-dev-abc123 to rustfs-linux-x86_64-dev-latest
DEV_LATEST_FILE="${PACKAGE_NAME%-*}-latest.zip"
# Copy the original file to dev-latest version
cp "${{ steps.package.outputs.package_file }}" "$DEV_LATEST_FILE"
# Upload the dev-latest version
echo "Uploading dev-latest version: $DEV_LATEST_FILE to $OSS_PATH..."
$OSSUTIL_BIN cp "$DEV_LATEST_FILE" "$OSS_PATH" --force
echo "✅ Dev-latest version uploaded: $DEV_LATEST_FILE"
# For main branch builds, also create a main-latest version
if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
# Create main-latest version filename
# Convert from rustfs-linux-x86_64-dev-abc123 to rustfs-linux-x86_64-main-latest
MAIN_LATEST_FILE="${PACKAGE_NAME%-dev-*}-main-latest.zip"
# Copy the original file to main-latest version
cp "${{ steps.package.outputs.package_file }}" "$MAIN_LATEST_FILE"
# Upload the main-latest version
echo "Uploading main-latest version: $MAIN_LATEST_FILE to $OSS_PATH..."
$OSSUTIL_BIN cp "$MAIN_LATEST_FILE" "$OSS_PATH" --force
echo "✅ Main-latest version uploaded: $MAIN_LATEST_FILE"
# Also create a generic main-latest for Docker builds
if [[ "${{ matrix.platform }}" == "linux" ]]; then
DOCKER_MAIN_LATEST_FILE="rustfs-linux-${{ matrix.target == 'x86_64-unknown-linux-musl' && 'x86_64' || 'aarch64' }}-main-latest.zip"
cp "${{ steps.package.outputs.package_file }}" "$DOCKER_MAIN_LATEST_FILE"
$OSSUTIL_BIN cp "$DOCKER_MAIN_LATEST_FILE" "$OSS_PATH" --force
echo "✅ Docker main-latest version uploaded: $DOCKER_MAIN_LATEST_FILE"
fi
fi
fi
echo "✅ Upload completed successfully"
# Build summary
build-summary:
name: Build Summary
needs: [build-check, build-rustfs]
if: always() && needs.build-check.outputs.should_build == 'true'
runs-on: ubuntu-latest
steps:
- name: Build completion summary
run: |
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
VERSION="${{ needs.build-check.outputs.version }}"
echo "🎉 Build completed successfully!"
echo "📦 Build type: $BUILD_TYPE"
echo "🔢 Version: $VERSION"
echo ""
# Check build status
BUILD_STATUS="${{ needs.build-rustfs.result }}"
echo "📊 Build Results:"
echo " 📦 All platforms: $BUILD_STATUS"
echo ""
case "$BUILD_TYPE" in
"development")
echo "🛠️ Development build artifacts have been uploaded to OSS dev directory"
echo "⚠️ This is a development build - not suitable for production use"
;;
"release")
echo "🚀 Release build artifacts have been uploaded to OSS release directory"
echo "✅ This build is ready for production use"
echo "🏷️ GitHub Release will be created in this workflow"
;;
"prerelease")
echo "🧪 Prerelease build artifacts have been uploaded to OSS release directory"
echo "⚠️ This is a prerelease build - use with caution"
echo "🏷️ GitHub Release will be created in this workflow"
;;
esac
echo ""
echo "🐳 Docker Images:"
if [[ "${{ github.event.inputs.build_docker }}" == "false" ]]; then
echo "⏭️ Docker image build was skipped (binary only build)"
elif [[ "$BUILD_STATUS" == "success" ]]; then
echo "🔄 Docker images will be built and pushed automatically via workflow_run event"
else
echo "❌ Docker image build will be skipped due to build failure"
fi
# Create GitHub Release (only for tag pushes)
create-release:
name: Create GitHub Release
needs: [build-check, build-rustfs]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubuntu-latest
permissions:
contents: write
outputs:
release_id: ${{ steps.create.outputs.release_id }}
release_url: ${{ steps.create.outputs.release_url }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Create GitHub Release
id: create
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ needs.build-check.outputs.version }}"
VERSION="${{ needs.build-check.outputs.version }}"
IS_PRERELEASE="${{ needs.build-check.outputs.is_prerelease }}"
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
# Determine release type for title
if [[ "$BUILD_TYPE" == "prerelease" ]]; then
if [[ "$TAG" == *"alpha"* ]]; then
RELEASE_TYPE="alpha"
elif [[ "$TAG" == *"beta"* ]]; then
RELEASE_TYPE="beta"
elif [[ "$TAG" == *"rc"* ]]; then
RELEASE_TYPE="rc"
else
RELEASE_TYPE="prerelease"
fi
else
RELEASE_TYPE="release"
fi
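# Illustrative mapping, for example:
#   v1.0.0-alpha.2 -> alpha, v1.0.0-beta.1 -> beta, v1.0.0-rc.3 -> rc, v1.0.0 -> release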
# Check if release already exists
if gh release view "$TAG" >/dev/null 2>&1; then
echo "Release $TAG already exists"
RELEASE_ID=$(gh release view "$TAG" --json databaseId --jq '.databaseId')
RELEASE_URL=$(gh release view "$TAG" --json url --jq '.url')
else
# Get release notes from tag message
RELEASE_NOTES=$(git tag -l --format='%(contents)' "${TAG}")
if [[ -z "$RELEASE_NOTES" || "$RELEASE_NOTES" =~ ^[[:space:]]*$ ]]; then
if [[ "$IS_PRERELEASE" == "true" ]]; then
RELEASE_NOTES="Pre-release ${VERSION} (${RELEASE_TYPE})"
else
RELEASE_NOTES="Release ${VERSION}"
fi
fi
# Create release title
if [[ "$IS_PRERELEASE" == "true" ]]; then
TITLE="RustFS $VERSION (${RELEASE_TYPE})"
else
TITLE="RustFS $VERSION"
fi
# Create the release
PRERELEASE_FLAG=""
if [[ "$IS_PRERELEASE" == "true" ]]; then
PRERELEASE_FLAG="--prerelease"
fi
gh release create "$TAG" \
--title "$TITLE" \
--notes "$RELEASE_NOTES" \
$PRERELEASE_FLAG \
--draft
RELEASE_ID=$(gh release view "$TAG" --json databaseId --jq '.databaseId')
RELEASE_URL=$(gh release view "$TAG" --json url --jq '.url')
fi
echo "release_id=$RELEASE_ID" >> $GITHUB_OUTPUT
echo "release_url=$RELEASE_URL" >> $GITHUB_OUTPUT
echo "Created release: $RELEASE_URL"
# Prepare and upload release assets
upload-release-assets:
name: Upload Release Assets
needs: [build-check, build-rustfs, create-release]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubuntu-latest
permissions:
contents: write
actions: read
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Rust environment
uses: ./.github/actions/setup
with:
rust-version: stable
target: ${{ matrix.target }}
cache-shared-key: gui-${{ matrix.target }}-${{ hashFiles('**/Cargo.lock') }}
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Download RustFS binary
- name: Download all build artifacts
uses: actions/download-artifact@v4
with:
name: rustfs-${{ matrix.target }}
path: ./artifacts
pattern: rustfs-*
merge-multiple: true
- name: Prepare embedded binary
- name: Prepare release assets
id: prepare
run: |
mkdir -p ./cli/rustfs-gui/embedded-rustfs/
tar -xzf ./artifacts/rustfs-${{ matrix.target }}.tar.gz -C ./artifacts/
cp ./artifacts/rustfs-${{ matrix.target }}/bin/rustfs ./cli/rustfs-gui/embedded-rustfs/
VERSION="${{ needs.build-check.outputs.version }}"
TAG="${{ needs.build-check.outputs.version }}"
- name: Install Dioxus CLI
uses: taiki-e/cache-cargo-install-action@v2
with:
tool: dioxus-cli
mkdir -p ./release-assets
- name: Build GUI
working-directory: ./cli/rustfs-gui
run: |
case "${{ matrix.platform }}" in
"linux")
dx bundle --platform linux --package-types deb --package-types appimage --release
;;
"macos")
dx bundle --platform macos --package-types dmg --release
;;
esac
# Copy and verify artifacts
ASSETS_COUNT=0
for file in ./artifacts/*.zip; do
if [[ -f "$file" ]]; then
cp "$file" ./release-assets/
ASSETS_COUNT=$((ASSETS_COUNT + 1))
fi
done
- name: Package GUI
id: gui_package
run: |
GUI_PACKAGE="rustfs-gui-${{ matrix.target }}"
mkdir -p "${GUI_PACKAGE}"
# Copy GUI bundles
if [[ -d "cli/rustfs-gui/dist/bundle" ]]; then
cp -r cli/rustfs-gui/dist/bundle/* "${GUI_PACKAGE}/"
if [[ $ASSETS_COUNT -eq 0 ]]; then
echo "❌ No artifacts found!"
exit 1
fi
tar -czf "${GUI_PACKAGE}.tar.gz" "${GUI_PACKAGE}"
echo "gui_package=${GUI_PACKAGE}" >> $GITHUB_OUTPUT
cd ./release-assets
- name: Upload GUI artifacts
uses: actions/upload-artifact@v4
with:
name: ${{ steps.gui_package.outputs.gui_package }}
path: ${{ steps.gui_package.outputs.gui_package }}.tar.gz
retention-days: 30
# Generate checksums
if ls *.zip >/dev/null 2>&1; then
sha256sum *.zip > SHA256SUMS
sha512sum *.zip > SHA512SUMS
fi
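# Consumers can verify downloads with the standard coreutils check, e.g.:
#   sha256sum -c SHA256SUMS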
# Release management
release:
name: GitHub Release
needs: [skip-duplicate, build-check, build-rustfs, build-gui]
if: always() && needs.skip-duplicate.outputs.should_skip != 'true' && needs.build-check.outputs.build_type == 'release'
# Create signature placeholder files
for file in *.zip; do
echo "# Signature for $file" > "${file}.asc"
echo "# GPG signature will be added in future versions" >> "${file}.asc"
done
echo "📦 Prepared assets:"
ls -la
echo "🔢 Asset count: $ASSETS_COUNT"
- name: Upload to GitHub Release
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ needs.build-check.outputs.version }}"
cd ./release-assets
# Upload all files
for file in *; do
if [[ -f "$file" ]]; then
echo "📤 Uploading $file..."
gh release upload "$TAG" "$file" --clobber
fi
done
echo "✅ All assets uploaded successfully"
# Update latest.json for stable releases only
update-latest-version:
name: Update Latest Version
needs: [build-check, upload-release-assets]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.is_prerelease == 'false'
runs-on: ubuntu-latest
steps:
- name: Update latest.json
env:
OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
run: |
if [[ -z "$OSS_ACCESS_KEY_ID" ]]; then
echo "⚠️ OSS credentials not available, skipping latest.json update"
exit 0
fi
VERSION="${{ needs.build-check.outputs.version }}"
TAG="${{ needs.build-check.outputs.version }}"
# Install ossutil
OSSUTIL_VERSION="2.1.1"
OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-linux-amd64.zip"
OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-linux-amd64"
curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
unzip "$OSSUTIL_ZIP"
chmod +x "${OSSUTIL_DIR}/ossutil"
# Create latest.json
cat > latest.json << EOF
{
"version": "${VERSION}",
"tag": "${TAG}",
"release_date": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"release_type": "stable",
"download_url": "https://github.com/${{ github.repository }}/releases/tag/${TAG}"
}
EOF
# Upload to OSS
./${OSSUTIL_DIR}/ossutil cp latest.json oss://rustfs-version/latest.json --force
echo "✅ Updated latest.json for stable release $VERSION"
# Publish release (remove draft status)
publish-release:
name: Publish Release
needs: [build-check, create-release, upload-release-assets]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubuntu-latest
permissions:
contents: write
@@ -278,88 +711,42 @@ jobs:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download all artifacts
uses: actions/download-artifact@v4
with:
path: ./release-artifacts
- name: Prepare release
id: release_prep
- name: Update release notes and publish
env:
GH_TOKEN: ${{ github.token }}
run: |
VERSION="${GITHUB_REF#refs/tags/}"
VERSION_CLEAN="${VERSION#v}"
TAG="${{ needs.build-check.outputs.version }}"
VERSION="${{ needs.build-check.outputs.version }}"
IS_PRERELEASE="${{ needs.build-check.outputs.is_prerelease }}"
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
echo "version=${VERSION}" >> $GITHUB_OUTPUT
echo "version_clean=${VERSION_CLEAN}" >> $GITHUB_OUTPUT
# Determine release type
if [[ "$BUILD_TYPE" == "prerelease" ]]; then
if [[ "$TAG" == *"alpha"* ]]; then
RELEASE_TYPE="alpha"
elif [[ "$TAG" == *"beta"* ]]; then
RELEASE_TYPE="beta"
elif [[ "$TAG" == *"rc"* ]]; then
RELEASE_TYPE="rc"
else
RELEASE_TYPE="prerelease"
fi
else
RELEASE_TYPE="release"
fi
# Organize artifacts
mkdir -p ./release-files
find ./release-artifacts -name "*.tar.gz" -exec cp {} ./release-files/ \;
# Get original release notes from tag
ORIGINAL_NOTES=$(git tag -l --format='%(contents)' "${TAG}")
if [[ -z "$ORIGINAL_NOTES" || "$ORIGINAL_NOTES" =~ ^[[:space:]]*$ ]]; then
if [[ "$IS_PRERELEASE" == "true" ]]; then
ORIGINAL_NOTES="Pre-release ${VERSION} (${RELEASE_TYPE})"
else
ORIGINAL_NOTES="Release ${VERSION}"
fi
fi
# Create release notes
cat > release_notes.md << EOF
## RustFS ${VERSION_CLEAN}
# Publish the release (remove draft status)
gh release edit "$TAG" --draft=false
### 🚀 Downloads
**Linux:**
- \`rustfs-x86_64-unknown-linux-musl.tar.gz\` - Linux x86_64 (static)
- \`rustfs-aarch64-unknown-linux-musl.tar.gz\` - Linux ARM64 (static)
**macOS:**
- \`rustfs-aarch64-apple-darwin.tar.gz\` - macOS Apple Silicon
**GUI Applications:**
- \`rustfs-gui-*.tar.gz\` - GUI applications
### 📦 Installation
1. Download the appropriate binary for your platform
2. Extract: \`tar -xzf rustfs-*.tar.gz\`
3. Install: \`sudo cp rustfs-*/bin/rustfs /usr/local/bin/\`
### 🔗 Mirror Downloads
- [OSS Mirror](https://rustfs-artifacts.oss-cn-beijing.aliyuncs.com/artifacts/rustfs/)
EOF
- name: Create GitHub Release
uses: softprops/action-gh-release@v2
with:
tag_name: ${{ steps.release_prep.outputs.version }}
name: "RustFS ${{ steps.release_prep.outputs.version_clean }}"
body_path: release_notes.md
files: ./release-files/*.tar.gz
draft: false
prerelease: ${{ contains(steps.release_prep.outputs.version, 'alpha') || contains(steps.release_prep.outputs.version, 'beta') || contains(steps.release_prep.outputs.version, 'rc') }}
# Upload to OSS (optional)
upload-oss:
name: Upload to OSS
needs: [skip-duplicate, build-check, build-rustfs]
if: always() && needs.skip-duplicate.outputs.should_skip != 'true' && needs.build-check.outputs.build_type == 'release' && needs.build-rustfs.result == 'success'
runs-on: ubuntu-latest
env:
OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
steps:
- name: Download artifacts
uses: actions/download-artifact@v4
with:
path: ./artifacts
- name: Upload to Aliyun OSS
if: ${{ env.OSS_ACCESS_KEY_ID != '' }}
run: |
# Install ossutil
curl -o ossutil.zip https://gosspublic.alicdn.com/ossutil/v2/2.1.1/ossutil-2.1.1-linux-amd64.zip
unzip ossutil.zip
sudo mv ossutil-*/ossutil /usr/local/bin/
# Upload files
find ./artifacts -name "*.tar.gz" -exec ossutil cp {} oss://rustfs-artifacts/artifacts/rustfs/ --force \;
# Create latest.json
VERSION="${GITHUB_REF#refs/tags/v}"
echo "{\"version\":\"${VERSION}\",\"release_date\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}" > latest.json
ossutil cp latest.json oss://rustfs-version/latest.json --force
echo "🎉 Released $TAG successfully!"
echo "📄 Release URL: ${{ needs.create-release.outputs.release_url }}"


@@ -80,6 +80,8 @@ jobs:
concurrent_skipping: "same_content_newer"
cancel_others: true
paths_ignore: '["*.md", "docs/**", "deploy/**"]'
# Never skip release events and tag pushes
do_not_skip: '["workflow_dispatch", "schedule", "merge_group", "release", "push"]'
test-and-lint:
name: Test and Lint
@@ -100,7 +102,9 @@ jobs:
cache-save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Run tests
run: cargo test --all --exclude e2e_test
run: |
cargo nextest run --all --exclude e2e_test
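# cargo-nextest does not execute doctests, so they run separately via cargo test --doc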
cargo test --all --doc
- name: Check code formatting
run: cargo fmt --all --check


@@ -12,30 +12,33 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Docker Images Workflow
#
# This workflow builds Docker images using pre-built binaries from the build workflow.
#
# Trigger Types:
# 1. workflow_run: Automatically triggered when "Build and Release" workflow completes
# 2. workflow_dispatch: Manual trigger for standalone Docker builds
#
# Key Features:
# - Only triggers when Linux builds (x86_64 + aarch64) are successful
# - Independent of macOS/Windows build status
# - Uses workflow_run event for precise control
# - Only builds Docker images for releases and prereleases (development builds are skipped)
name: Docker Images
# Permissions needed for workflow_run event and Docker registry access
permissions:
contents: read
packages: write
on:
push:
tags: ["*"]
branches: [main]
paths:
- "rustfs/**"
- "crates/**"
- "Dockerfile*"
- ".docker/**"
- "Cargo.toml"
- "Cargo.lock"
- ".github/workflows/docker.yml"
pull_request:
branches: [main]
paths:
- "rustfs/**"
- "crates/**"
- "Dockerfile*"
- ".docker/**"
- "Cargo.toml"
- "Cargo.lock"
- ".github/workflows/docker.yml"
# Automatically triggered when build workflow completes
workflow_run:
workflows: ["Build and Release"]
types: [completed]
# Manual trigger with same parameters for consistency
workflow_dispatch:
inputs:
push_images:
@@ -43,156 +46,372 @@ on:
required: false
default: true
type: boolean
version:
description: "Version to build (latest for stable release, or specific version like v1.0.0, v1.0.0-alpha1)"
required: false
default: "latest"
type: string
force_rebuild:
description: "Force rebuild even if binary exists (useful for testing)"
required: false
default: false
type: boolean
env:
DOCKERHUB_USERNAME: rustfs
CARGO_TERM_COLOR: always
REGISTRY_DOCKERHUB: rustfs/rustfs
REGISTRY_GHCR: ghcr.io/${{ github.repository }}
DOCKER_PLATFORMS: linux/amd64,linux/arm64
jobs:
# Check if we should build
# Check if we should build Docker images
build-check:
name: Build Check
name: Docker Build Check
runs-on: ubuntu-latest
outputs:
should_build: ${{ steps.check.outputs.should_build }}
should_push: ${{ steps.check.outputs.should_push }}
build_type: ${{ steps.check.outputs.build_type }}
version: ${{ steps.check.outputs.version }}
short_sha: ${{ steps.check.outputs.short_sha }}
is_prerelease: ${{ steps.check.outputs.is_prerelease }}
create_latest: ${{ steps.check.outputs.create_latest }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
# For workflow_run events, checkout the specific commit that triggered the workflow
ref: ${{ github.event.workflow_run.head_sha || github.sha }}
- name: Check build conditions
id: check
run: |
should_build=false
should_push=false
build_type="none"
version=""
short_sha=""
is_prerelease=false
create_latest=false
# Always build on workflow_dispatch or when changes detected
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ github.event_name }}" == "push" ]] || \
[[ "${{ github.event_name }}" == "pull_request" ]]; then
if [[ "${{ github.event_name }}" == "workflow_run" ]]; then
# Triggered by build workflow completion
echo "🔗 Triggered by build workflow completion"
# Check if the triggering workflow was successful
# If the workflow succeeded, it means ALL builds (including Linux x86_64 and aarch64) succeeded
if [[ "${{ github.event.workflow_run.conclusion }}" == "success" ]]; then
echo "✅ Build workflow succeeded, all builds including Linux are successful"
should_build=true
should_push=true
else
echo "❌ Build workflow failed (conclusion: ${{ github.event.workflow_run.conclusion }}), skipping Docker build"
should_build=false
fi
# Extract version info from commit message or use commit SHA
# Use Git to generate consistent short SHA (ensures uniqueness like build.yml)
short_sha=$(git rev-parse --short "${{ github.event.workflow_run.head_sha }}")
# Determine build type based on triggering workflow event and ref
triggering_event="${{ github.event.workflow_run.event }}"
head_branch="${{ github.event.workflow_run.head_branch }}"
echo "🔍 Analyzing triggering workflow:"
echo " 📋 Event: $triggering_event"
echo " 🌿 Head branch: $head_branch"
echo " 📎 Head SHA: ${{ github.event.workflow_run.head_sha }}"
# Check if this was triggered by a tag push
if [[ "$triggering_event" == "push" ]]; then
# For tag pushes, head_branch will be like "refs/tags/v1.0.0" or just "v1.0.0"
if [[ "$head_branch" == refs/tags/* ]]; then
# Extract tag name from refs/tags/TAG_NAME
tag_name="${head_branch#refs/tags/}"
version="$tag_name"
elif [[ "$head_branch" =~ ^v?[0-9]+\.[0-9]+\.[0-9]+ ]]; then
# Direct tag name like "v1.0.0" or "1.0.0-alpha.1"
version="$head_branch"
elif [[ "$head_branch" == "main" ]]; then
# Regular branch push to main
build_type="development"
version="dev-${short_sha}"
should_build=false
echo "⏭️ Skipping Docker build for development version (main branch push)"
else
# Other branch push
build_type="development"
version="dev-${short_sha}"
should_build=false
echo "⏭️ Skipping Docker build for development version (branch: $head_branch)"
fi
# If we extracted a version (tag), determine release type
if [[ -n "$version" ]] && [[ "$version" != "dev-${short_sha}" ]]; then
# Remove 'v' prefix if present for consistent version format
if [[ "$version" == v* ]]; then
version="${version#v}"
fi
if [[ "$version" == *"alpha"* ]] || [[ "$version" == *"beta"* ]] || [[ "$version" == *"rc"* ]]; then
build_type="prerelease"
is_prerelease=true
echo "🧪 Building Docker image for prerelease: $version"
else
build_type="release"
create_latest=true
echo "🚀 Building Docker image for release: $version"
fi
fi
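# Illustrative outcomes (assumed inputs):
#   head_branch=refs/tags/v1.2.3 -> version=1.2.3, build_type=release, create_latest=true
#   head_branch=1.2.3-alpha.4    -> version=1.2.3-alpha.4, build_type=prerelease
#   head_branch=main             -> version=dev-<sha>, Docker build skipped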
else
# Non-push events
build_type="development"
version="dev-${short_sha}"
should_build=false
echo "⏭️ Skipping Docker build for development version (event: $triggering_event)"
fi
echo "🔄 Build triggered by workflow_run:"
echo " 📋 Conclusion: ${{ github.event.workflow_run.conclusion }}"
echo " 🌿 Branch: ${{ github.event.workflow_run.head_branch }}"
echo " 📎 SHA: ${{ github.event.workflow_run.head_sha }}"
echo " 🎯 Event: ${{ github.event.workflow_run.event }}"
elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
# Manual trigger
input_version="${{ github.event.inputs.version }}"
version="${input_version}"
should_push="${{ github.event.inputs.push_images }}"
should_build=true
fi
# Push only on main branch, tags, or manual trigger
if [[ "${{ github.ref }}" == "refs/heads/main" ]] || \
[[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]] || \
[[ "${{ github.event.inputs.push_images }}" == "true" ]]; then
should_push=true
# Get short SHA
short_sha=$(git rev-parse --short HEAD)
echo "🎯 Manual Docker build triggered:"
echo " 📋 Requested version: $input_version"
echo " 🔧 Force rebuild: ${{ github.event.inputs.force_rebuild }}"
echo " 🚀 Push images: $should_push"
case "$input_version" in
"latest")
build_type="release"
create_latest=true
echo "🚀 Building with latest stable release version"
;;
# Prerelease versions (must match first, more specific)
v*alpha*|v*beta*|v*rc*|*alpha*|*beta*|*rc*)
build_type="prerelease"
is_prerelease=true
echo "🧪 Building with prerelease version: $input_version"
;;
# Release versions (match after prereleases, more general)
v[0-9]*|[0-9]*.*.*)
build_type="release"
create_latest=true
echo "📦 Building with specific release version: $input_version"
;;
*)
# Invalid version for Docker build
should_build=false
echo "❌ Invalid version for Docker build: $input_version"
echo "⚠️ Only release versions (latest, v1.0.0, 1.0.0) and prereleases (v1.0.0-alpha1, 1.0.0-beta2) are supported"
;;
esac
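# Case order matters: the prerelease patterns must precede the more general
# release patterns, otherwise v1.0.0-alpha1 would match v[0-9]* first, e.g.:
#   latest        -> release (create_latest=true)
#   v1.0.0-alpha1 -> prerelease
#   v1.0.0, 1.0.0 -> release
#   nightly       -> invalid, build skipped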
fi
echo "should_build=$should_build" >> $GITHUB_OUTPUT
echo "should_push=$should_push" >> $GITHUB_OUTPUT
echo "Build: $should_build, Push: $should_push"
echo "build_type=$build_type" >> $GITHUB_OUTPUT
echo "version=$version" >> $GITHUB_OUTPUT
echo "short_sha=$short_sha" >> $GITHUB_OUTPUT
echo "is_prerelease=$is_prerelease" >> $GITHUB_OUTPUT
echo "create_latest=$create_latest" >> $GITHUB_OUTPUT
echo "🐳 Docker Build Summary:"
echo " - Should build: $should_build"
echo " - Should push: $should_push"
echo " - Build type: $build_type"
echo " - Version: $version"
echo " - Short SHA: $short_sha"
echo " - Is prerelease: $is_prerelease"
echo " - Create latest: $create_latest"
# Build multi-arch Docker images
# Strategy: Build images using pre-built binaries from dl.rustfs.com
# Supports both release and dev channel binaries based on build context
# Only runs when should_build is true (which includes workflow success check)
build-docker:
name: Build Docker Images
needs: build-check
if: needs.build-check.outputs.should_build == 'true'
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
variant:
- name: production
dockerfile: Dockerfile
platforms: linux/amd64,linux/arm64
- name: ubuntu
dockerfile: .docker/Dockerfile.ubuntu22.04
platforms: linux/amd64,linux/arm64
- name: alpine
dockerfile: .docker/Dockerfile.alpine
platforms: linux/amd64,linux/arm64
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
uses: docker/login-action@v3
with:
username: ${{ env.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
# - name: Login to GitHub Container Registry
# uses: docker/login-action@v3
# with:
# registry: ghcr.io
# username: ${{ github.actor }}
# password: ${{ secrets.GITHUB_TOKEN }}
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Login to Docker Hub
# The secrets context is not available in step-level if expressions; use the workflow-level env var instead
if: needs.build-check.outputs.should_push == 'true' && env.DOCKERHUB_USERNAME != ''
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to GitHub Container Registry
if: needs.build-check.outputs.should_push == 'true'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata
- name: Extract metadata and generate tags
id: meta
uses: docker/metadata-action@v5
with:
images: |
${{ env.REGISTRY_DOCKERHUB }}
${{ env.REGISTRY_GHCR }}
tags: |
type=ref,event=branch,suffix=-${{ matrix.variant.name }}
type=ref,event=pr,suffix=-${{ matrix.variant.name }}
type=semver,pattern={{version}},suffix=-${{ matrix.variant.name }}
type=semver,pattern={{major}}.{{minor}},suffix=-${{ matrix.variant.name }}
type=raw,value=latest,suffix=-${{ matrix.variant.name }},enable={{is_default_branch}}
flavor: |
latest=false
run: |
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
VERSION="${{ needs.build-check.outputs.version }}"
SHORT_SHA="${{ needs.build-check.outputs.short_sha }}"
CREATE_LATEST="${{ needs.build-check.outputs.create_latest }}"
# Convert version format for Dockerfile compatibility
case "$VERSION" in
"latest")
# For stable latest, use RELEASE=latest + release CHANNEL
DOCKER_RELEASE="latest"
DOCKER_CHANNEL="release"
;;
v*)
# For versioned releases (v1.0.0), remove 'v' prefix for Dockerfile
DOCKER_RELEASE="${VERSION#v}"
DOCKER_CHANNEL="release"
;;
*)
# For other versions, pass as-is
DOCKER_RELEASE="${VERSION}"
DOCKER_CHANNEL="release"
;;
esac
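# e.g. VERSION=latest -> RELEASE=latest, CHANNEL=release; VERSION=v1.2.3 -> RELEASE=1.2.3, CHANNEL=release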
echo "docker_release=$DOCKER_RELEASE" >> $GITHUB_OUTPUT
echo "docker_channel=$DOCKER_CHANNEL" >> $GITHUB_OUTPUT
echo "🐳 Docker build parameters:"
echo " - Original version: $VERSION"
echo " - Docker RELEASE: $DOCKER_RELEASE"
echo " - Docker CHANNEL: $DOCKER_CHANNEL"
# Generate tags based on build type
# Only support release and prerelease builds (no development builds)
TAGS="${{ env.REGISTRY_DOCKERHUB }}:${VERSION}"
# Add channel tags for prereleases and latest for stable
if [[ "$CREATE_LATEST" == "true" ]]; then
# Stable release
TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:latest"
elif [[ "$BUILD_TYPE" == "prerelease" ]]; then
# Prerelease channel tags (alpha, beta, rc)
if [[ "$VERSION" == *"alpha"* ]]; then
CHANNEL="alpha"
elif [[ "$VERSION" == *"beta"* ]]; then
CHANNEL="beta"
elif [[ "$VERSION" == *"rc"* ]]; then
CHANNEL="rc"
fi
if [[ -n "$CHANNEL" ]]; then
TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:${CHANNEL}"
fi
fi
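# Illustrative tag sets (using REGISTRY_DOCKERHUB=rustfs/rustfs from the env block):
#   stable 1.2.3            -> rustfs/rustfs:1.2.3,rustfs/rustfs:latest
#   prerelease 1.3.0-beta.1 -> rustfs/rustfs:1.3.0-beta.1,rustfs/rustfs:beta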
# Output tags
echo "tags=$TAGS" >> $GITHUB_OUTPUT
# Generate labels
LABELS="org.opencontainers.image.title=RustFS"
LABELS="$LABELS,org.opencontainers.image.description=RustFS distributed object storage system"
LABELS="$LABELS,org.opencontainers.image.version=$VERSION"
LABELS="$LABELS,org.opencontainers.image.revision=${{ github.sha }}"
LABELS="$LABELS,org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}"
LABELS="$LABELS,org.opencontainers.image.created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')"
LABELS="$LABELS,org.opencontainers.image.build-type=$BUILD_TYPE"
echo "labels=$LABELS" >> $GITHUB_OUTPUT
echo "🐳 Generated Docker tags:"
echo "$TAGS" | tr ',' '\n' | sed 's/^/ - /'
echo "📋 Build type: $BUILD_TYPE"
echo "🔖 Version: $VERSION"
- name: Build and push Docker image
uses: docker/build-push-action@v5
uses: docker/build-push-action@v6
with:
context: .
file: ${{ matrix.variant.dockerfile }}
platforms: ${{ matrix.variant.platforms }}
file: Dockerfile
platforms: ${{ env.DOCKER_PLATFORMS }}
push: ${{ needs.build-check.outputs.should_push == 'true' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha,scope=docker-${{ matrix.variant.name }}
cache-to: type=gha,mode=max,scope=docker-${{ matrix.variant.name }}
cache-from: |
type=gha,scope=docker-binary
cache-to: |
type=gha,mode=max,scope=docker-binary
build-args: |
BUILDTIME=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.created'] }}
VERSION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.version'] }}
REVISION=${{ fromJSON(steps.meta.outputs.json).labels['org.opencontainers.image.revision'] }}
BUILDTIME=${{ steps.meta.outputs.build_date }}
VERSION=${{ needs.build-check.outputs.version }}
BUILD_TYPE=${{ needs.build-check.outputs.build_type }}
REVISION=${{ github.sha }}
RELEASE=${{ steps.meta.outputs.docker_release }}
CHANNEL=${{ steps.meta.outputs.docker_channel }}
BUILDKIT_INLINE_CACHE=1
# Enable advanced BuildKit features for better performance
provenance: false
sbom: false
# Add retry mechanism by splitting the build process
no-cache: false
pull: true
# Create manifest for main production image
create-manifest:
name: Create Manifest
# Note: Manifest creation is no longer needed as we only build one variant
# Multi-arch manifests are automatically created by docker/build-push-action
# Docker build summary
docker-summary:
name: Docker Build Summary
needs: [build-check, build-docker]
if: needs.build-check.outputs.should_push == 'true' && startsWith(github.ref, 'refs/tags/')
if: always() && needs.build-check.outputs.should_build == 'true'
runs-on: ubuntu-latest
steps:
- name: Login to Docker Hub
if: env.DOCKERHUB_USERNAME != ''
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Create and push manifest
- name: Docker build completion summary
run: |
VERSION=${GITHUB_REF#refs/tags/}
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
VERSION="${{ needs.build-check.outputs.version }}"
CREATE_LATEST="${{ needs.build-check.outputs.create_latest }}"
# Create main image tag (without variant suffix)
if [[ -n "${{ secrets.DOCKERHUB_USERNAME }}" ]]; then
docker buildx imagetools create \
-t ${{ env.REGISTRY_DOCKERHUB }}:${VERSION} \
-t ${{ env.REGISTRY_DOCKERHUB }}:latest \
${{ env.REGISTRY_DOCKERHUB }}:${VERSION}-production
fi
echo "🐳 Docker build completed successfully!"
echo "📦 Build type: $BUILD_TYPE"
echo "🔢 Version: $VERSION"
echo "🚀 Strategy: Images using pre-built binaries (release channel only)"
echo ""
docker buildx imagetools create \
-t ${{ env.REGISTRY_GHCR }}:${VERSION} \
-t ${{ env.REGISTRY_GHCR }}:latest \
${{ env.REGISTRY_GHCR }}:${VERSION}-production
case "$BUILD_TYPE" in
"release")
echo "🚀 Release Docker image has been built with ${VERSION} tags"
echo "✅ This image is ready for production use"
if [[ "$CREATE_LATEST" == "true" ]]; then
echo "🏷️ Latest tag has been created for stable release"
fi
;;
"prerelease")
echo "🧪 Prerelease Docker image has been built with ${VERSION} tags"
echo "⚠️ This is a prerelease image - use with caution"
echo "🚫 Latest tag NOT created for prerelease"
;;
*)
echo "❌ Unexpected build type: $BUILD_TYPE"
;;
esac


@@ -1,8 +1,22 @@
name: 'issue-translator'
on:
issue_comment:
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: "issue-translator"
on:
issue_comment:
types: [created]
issues:
types: [opened]
jobs:
@@ -14,5 +28,5 @@ jobs:
IS_MODIFY_TITLE: false
# Not required, default false. Decides whether to modify the issue title.
# If true, the robot account @Issues-translate-bot must have modification permissions; invite @Issues-translate-bot to your project or use your own custom bot.
CUSTOM_BOT_NOTE: Bot detected the issue body's language is not English, translate it automatically.
# Not required. Customizes the translation bot's prefix message.


@@ -18,10 +18,10 @@ on:
push:
branches: [main]
paths:
- "rustfs/**"
- "crates/**"
- "Cargo.toml"
- "Cargo.lock"
- "**/*.rs"
- "**/Cargo.toml"
- "**/Cargo.lock"
- ".github/workflows/performance.yml"
workflow_dispatch:
inputs:
profile_duration:
@@ -73,12 +73,11 @@ jobs:
echo "RUSTFS_VOLUMES=./target/volume/test{0...4}" >> $GITHUB_ENV
echo "RUST_LOG=rustfs=info,ecstore=info,s3s=info,iam=info,rustfs-obs=info" >> $GITHUB_ENV
- name: Download static files
- name: Verify console static assets
run: |
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
-o tempfile.zip --retry 3 --retry-delay 5
unzip -o tempfile.zip -d ./rustfs/static
rm tempfile.zip
# Console static assets are already embedded in the repository
echo "Console static assets size: $(du -sh rustfs/static/)"
echo "Console static assets are embedded via rust-embed, no external download needed"
- name: Build with profiling optimizations
run: |

.gitignore

@@ -19,3 +19,4 @@ deploy/certs/*
profile.json
.docker/openobserve-otel/data
*.zst
.secrets

Cargo.lock (generated): diff suppressed because it is too large


@@ -36,6 +36,7 @@ members = [
"crates/utils", # Utility functions and helpers
"crates/workers", # Worker thread pools and task scheduling
"crates/zip", # ZIP file handling and compression
"crates/ahm",
]
resolver = "2"
@@ -44,7 +45,11 @@ edition = "2024"
license = "Apache-2.0"
repository = "https://github.com/rustfs/rustfs"
rust-version = "1.85"
version = "0.0.3"
version = "0.0.5"
homepage = "https://rustfs.com"
description = "RustFS is a high-performance distributed object storage software built using Rust, one of the most popular languages worldwide. "
keywords = ["RustFS", "Minio", "object-storage", "filesystem", "s3"]
categories = ["web-programming", "development-tools", "filesystem", "network-programming"]
[workspace.lints.rust]
unsafe_code = "deny"
@@ -52,33 +57,39 @@ unsafe_code = "deny"
[workspace.lints.clippy]
all = "warn"
[patch.crates-io]
rustfs-utils = { path = "crates/utils" }
rustfs-filemeta = { path = "crates/filemeta" }
rustfs-rio = { path = "crates/rio" }
[workspace.dependencies]
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.3" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.3" }
rustfs-common = { path = "crates/common", version = "0.0.3" }
rustfs-crypto = { path = "crates/crypto", version = "0.0.3" }
rustfs-ecstore = { path = "crates/ecstore", version = "0.0.3" }
rustfs-iam = { path = "crates/iam", version = "0.0.3" }
rustfs-lock = { path = "crates/lock", version = "0.0.3" }
rustfs-madmin = { path = "crates/madmin", version = "0.0.3" }
rustfs-policy = { path = "crates/policy", version = "0.0.3" }
rustfs-protos = { path = "crates/protos", version = "0.0.3" }
rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.3" }
rustfs = { path = "./rustfs", version = "0.0.3" }
rustfs-zip = { path = "./crates/zip", version = "0.0.3" }
rustfs-config = { path = "./crates/config", version = "0.0.3" }
rustfs-obs = { path = "crates/obs", version = "0.0.3" }
rustfs-notify = { path = "crates/notify", version = "0.0.3" }
rustfs-utils = { path = "crates/utils", version = "0.0.3" }
rustfs-rio = { path = "crates/rio", version = "0.0.3" }
rustfs-filemeta = { path = "crates/filemeta", version = "0.0.3" }
rustfs-signer = { path = "crates/signer", version = "0.0.3" }
rustfs-workers = { path = "crates/workers", version = "0.0.3" }
rustfs-ahm = { path = "crates/ahm", version = "0.0.5" }
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.5" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.5" }
rustfs-common = { path = "crates/common", version = "0.0.5" }
rustfs-crypto = { path = "crates/crypto", version = "0.0.5" }
rustfs-ecstore = { path = "crates/ecstore", version = "0.0.5" }
rustfs-iam = { path = "crates/iam", version = "0.0.5" }
rustfs-lock = { path = "crates/lock", version = "0.0.5" }
rustfs-madmin = { path = "crates/madmin", version = "0.0.5" }
rustfs-policy = { path = "crates/policy", version = "0.0.5" }
rustfs-protos = { path = "crates/protos", version = "0.0.5" }
rustfs-s3select-query = { path = "crates/s3select-query", version = "0.0.5" }
rustfs = { path = "./rustfs", version = "0.0.5" }
rustfs-zip = { path = "./crates/zip", version = "0.0.5" }
rustfs-config = { path = "./crates/config", version = "0.0.5" }
rustfs-obs = { path = "crates/obs", version = "0.0.5" }
rustfs-notify = { path = "crates/notify", version = "0.0.5" }
rustfs-utils = { path = "crates/utils", version = "0.0.5" }
rustfs-rio = { path = "crates/rio", version = "0.0.5" }
rustfs-filemeta = { path = "crates/filemeta", version = "0.0.5" }
rustfs-signer = { path = "crates/signer", version = "0.0.5" }
rustfs-workers = { path = "crates/workers", version = "0.0.5" }
aes-gcm = { version = "0.10.3", features = ["std"] }
arc-swap = "1.7.1"
argon2 = { version = "0.5.3", features = ["std"] }
atoi = "2.0.0"
async-channel = "2.4.0"
async-channel = "2.5.0"
async-recursion = "1.1.1"
async-trait = "0.1.88"
async-compression = { version = "0.4.0" }
@@ -96,7 +107,7 @@ byteorder = "1.5.0"
cfg-if = "1.0.1"
chacha20poly1305 = { version = "0.10.1" }
chrono = { version = "0.4.41", features = ["serde"] }
clap = { version = "4.5.40", features = ["derive", "env"] }
clap = { version = "4.5.41", features = ["derive", "env"] }
const-str = { version = "0.6.2", features = ["std", "proc"] }
crc32fast = "1.4.2"
criterion = { version = "0.5", features = ["html_reports"] }
@@ -105,7 +116,7 @@ datafusion = "46.0.1"
derive_builder = "0.20.2"
dioxus = { version = "0.6.3", features = ["router"] }
dirs = "6.0.0"
enumset = "1.1.6"
enumset = "1.1.7"
flatbuffers = "25.2.10"
flate2 = "1.1.2"
flexi_logger = { version = "0.31.2", features = ["trc", "dont_minimize_extra_stacks"] }
@@ -119,7 +130,7 @@ hex-simd = "0.8.0"
highway = { version = "1.3.0" }
hmac = "0.12.1"
hyper = "1.6.0"
hyper-util = { version = "0.1.14", features = [
hyper-util = { version = "0.1.15", features = [
"tokio",
"server-auto",
"server-graceful",
@@ -171,10 +182,9 @@ pbkdf2 = "0.12.2"
percent-encoding = "2.3.1"
pin-project-lite = "0.2.16"
prost = "0.13.5"
prost-build = "0.13.5"
quick-xml = "0.37.5"
quick-xml = "0.38.0"
rand = "0.9.1"
rdkafka = { version = "0.37.0", features = ["tokio"] }
rdkafka = { version = "0.38.0", features = ["tokio"] }
reed-solomon-simd = { version = "3.0.1" }
regex = { version = "1.11.1" }
reqwest = { version = "0.12.22", default-features = false, features = [
@@ -197,10 +207,10 @@ rumqttc = { version = "0.24" }
rust-embed = { version = "8.7.2" }
rust-i18n = { version = "3.1.5" }
rustfs-rsc = "2025.506.1"
rustls = { version = "0.23.28" }
rustls = { version = "0.23.29" }
rustls-pki-types = "1.12.0"
rustls-pemfile = "2.2.0"
s3s = { version = "0.12.0-minio-preview.1" }
s3s = { version = "0.12.0-minio-preview.2" }
shadow-rs = { version = "1.2.0", default-features = false }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.140", features = ["raw_value"] }
@@ -212,10 +222,12 @@ siphasher = "1.0.1"
smallvec = { version = "1.15.1", features = ["serde"] }
snafu = "0.8.6"
snap = "1.1.1"
socket2 = "0.5.10"
socket2 = "0.6.0"
strum = { version = "0.27.1", features = ["derive"] }
sysinfo = "0.35.2"
sysinfo = "0.36.0"
sysctl = "0.6.0"
tempfile = "3.20.0"
temp-env = "0.3.6"
test-case = "3.3.1"
thiserror = "2.0.12"
time = { version = "0.3.41", features = [
@@ -225,10 +237,11 @@ time = { version = "0.3.41", features = [
"macros",
"serde",
] }
tokio = { version = "1.46.0", features = ["fs", "rt-multi-thread"] }
tokio = { version = "1.46.1", features = ["fs", "rt-multi-thread"] }
tokio-rustls = { version = "0.26.2", default-features = false }
tokio-stream = { version = "0.1.17" }
tokio-tar = "0.3.1"
tokio-test = "0.4.4"
tokio-util = { version = "0.7.15", features = ["io", "compat"] }
tonic = { version = "0.13.1", features = ["gzip"] }
tonic-build = { version = "0.13.1" }
@@ -253,6 +266,7 @@ winapi = { version = "0.3.9" }
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
zip = "2.4.2"
zstd = "0.13.3"
anyhow = "1.0.98"
[profile.wasm-dev]
inherits = "dev"


@@ -1,49 +1,121 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Multi-stage build for RustFS production image
FROM alpine:latest AS build
FROM alpine:3.18 AS builder
# Build arguments - use TARGETPLATFORM for consistency with Dockerfile.source
ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG RELEASE=latest
RUN apk add -U --no-cache \
# Install dependencies for downloading and verifying binaries
RUN apk add --no-cache \
ca-certificates \
curl \
bash \
unzip
wget \
unzip \
jq
RUN curl -Lo /tmp/rustfs.zip https://dl.rustfs.com/artifacts/rustfs/rustfs-release-x86_64-unknown-linux-musl.latest.zip && \
unzip /tmp/rustfs.zip -d /tmp && \
mv /tmp/rustfs-release-x86_64-unknown-linux-musl/bin/rustfs /rustfs && \
chmod +x /rustfs && \
rm -rf /tmp/*
# Create build directory
WORKDIR /build
FROM alpine:3.18
# Map TARGETPLATFORM to architecture format used in builds
RUN case "${TARGETPLATFORM}" in \
"linux/amd64") ARCH="x86_64" ;; \
"linux/arm64") ARCH="aarch64" ;; \
*) echo "Unsupported platform: ${TARGETPLATFORM}" && exit 1 ;; \
esac && \
echo "ARCH=${ARCH}" > /build/arch.env
RUN apk add -U --no-cache \
# Download rustfs binary from dl.rustfs.com (release channel only)
RUN . /build/arch.env && \
BASE_URL="https://dl.rustfs.com/artifacts/rustfs/release" && \
PLATFORM="linux" && \
if [ "${RELEASE}" = "latest" ]; then \
# Download latest release version \
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-latest.zip"; \
DOWNLOAD_URL="${BASE_URL}/${PACKAGE_NAME}"; \
echo "📥 Downloading latest release build: ${PACKAGE_NAME}"; \
else \
# Download specific release version \
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-v${RELEASE}.zip"; \
DOWNLOAD_URL="${BASE_URL}/${PACKAGE_NAME}"; \
echo "📥 Downloading specific release version: ${PACKAGE_NAME}"; \
fi && \
echo "🔗 Download URL: ${DOWNLOAD_URL}" && \
curl -f -L "${DOWNLOAD_URL}" -o /build/rustfs.zip && \
if [ ! -f /build/rustfs.zip ] || [ ! -s /build/rustfs.zip ]; then \
echo "❌ Failed to download binary package"; \
echo "💡 Make sure the package ${PACKAGE_NAME} exists"; \
echo "🔗 Check: ${DOWNLOAD_URL}"; \
exit 1; \
fi && \
unzip /build/rustfs.zip -d /build && \
chmod +x /build/rustfs && \
rm /build/rustfs.zip && \
echo "✅ Successfully downloaded and extracted rustfs binary"
# Runtime stage
FROM alpine:latest
# Set build arguments and labels
ARG RELEASE=latest
ARG BUILD_DATE
ARG VCS_REF
LABEL name="RustFS" \
vendor="RustFS Team" \
maintainer="RustFS Team <dev@rustfs.com>" \
version="${RELEASE}" \
release="${RELEASE}" \
build-date="${BUILD_DATE}" \
vcs-ref="${VCS_REF}" \
summary="RustFS is a high-performance distributed object storage system written in Rust, compatible with S3 API." \
description="RustFS is a high-performance distributed object storage software built using Rust. It supports erasure coding storage, multi-tenant management, observability, and other enterprise-level features." \
url="https://rustfs.com" \
license="Apache-2.0"
# Install runtime dependencies
RUN apk add --no-cache \
ca-certificates \
bash
COPY --from=builder /rustfs /usr/local/bin/rustfs
curl \
tzdata \
bash \
&& addgroup -g 1000 rustfs \
&& adduser -u 1000 -G rustfs -s /bin/sh -D rustfs
# Environment variables
ENV RUSTFS_ACCESS_KEY=rustfsadmin \
RUSTFS_SECRET_KEY=rustfsadmin \
RUSTFS_ADDRESS=":9000" \
RUSTFS_CONSOLE_ADDRESS=":9001" \
RUSTFS_CONSOLE_ENABLE=true \
RUSTFS_VOLUMES=/data \
RUST_LOG=warn
EXPOSE 9000 9001
# Set permissions for /usr/bin (similar to MinIO's approach)
RUN chmod -R 755 /usr/bin
RUN mkdir -p /data
VOLUME /data
# Copy CA certificates and binaries from build stage
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /build/rustfs /usr/bin/
CMD ["rustfs", "/data"]
# Set executable permissions
RUN chmod +x /usr/bin/rustfs
# Create data directory
RUN mkdir -p /data /config && chown -R rustfs:rustfs /data /config
# Switch to non-root user
USER rustfs
# Set working directory
WORKDIR /data
# Expose port
EXPOSE 9000
# Volume for data
VOLUME ["/data"]
# Set entrypoint
ENTRYPOINT ["/usr/bin/rustfs"]


@@ -1,21 +0,0 @@
FROM ubuntu:latest
# RUN apk add --no-cache <package-name>
# If rustfs has dependencies, add them here, for example:
# RUN apk add --no-cache openssl
# RUN apk add --no-cache bash  # install Bash
WORKDIR /app
# 创建与 RUSTFS_VOLUMES 一致的目录
RUN mkdir -p /root/data/target/volume/test1 /root/data/target/volume/test2 /root/data/target/volume/test3 /root/data/target/volume/test4
# COPY ./target/x86_64-unknown-linux-musl/release/rustfs /app/rustfs
COPY ./target/x86_64-unknown-linux-gnu/release/rustfs /app/rustfs
RUN chmod +x /app/rustfs
EXPOSE 9000
EXPOSE 9002
CMD ["/app/rustfs"]


@@ -1,10 +1,26 @@
# Multi-stage Dockerfile for RustFS
# Multi-stage Dockerfile for RustFS - LOCAL DEVELOPMENT ONLY
#
# ⚠️ IMPORTANT: This Dockerfile is for local development and testing only.
# ⚠️ It builds RustFS from source code and is NOT used in CI/CD pipelines.
# ⚠️ CI/CD pipeline uses pre-built binaries from Dockerfile instead.
#
# Usage for local development:
# docker build -f Dockerfile.source -t rustfs:dev-local .
# docker run --rm -p 9000:9000 rustfs:dev-local
#
# Supports cross-compilation for amd64 and arm64 architectures
ARG TARGETPLATFORM
ARG BUILDPLATFORM
# Build stage
FROM --platform=$BUILDPLATFORM rust:1.85-bookworm AS builder
FROM --platform=$BUILDPLATFORM rust:1.88-bookworm AS builder
# Re-declare build arguments after FROM (required for multi-stage builds)
ARG TARGETPLATFORM
ARG BUILDPLATFORM
# Debug: Print platform information
RUN echo "🐳 Build Info: BUILDPLATFORM=$BUILDPLATFORM, TARGETPLATFORM=$TARGETPLATFORM"
# Install required build dependencies
RUN apt-get update && apt-get install -y \
@@ -18,6 +34,8 @@ RUN apt-get update && apt-get install -y \
lld \
&& rm -rf /var/lib/apt/lists/*
# Note: sccache removed for simpler builds
# Install cross-compilation tools for ARM64
RUN if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
apt-get update && \
@@ -37,10 +55,13 @@ RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.
&& mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip
# Set up Rust targets based on platform
RUN case "$TARGETPLATFORM" in \
RUN set -e && \
PLATFORM="${TARGETPLATFORM:-linux/amd64}" && \
echo "🎯 Setting up Rust target for platform: $PLATFORM" && \
case "$PLATFORM" in \
"linux/amd64") rustup target add x86_64-unknown-linux-gnu ;; \
"linux/arm64") rustup target add aarch64-unknown-linux-gnu ;; \
*) echo "Unsupported platform: $TARGETPLATFORM" && exit 1 ;; \
*) echo "Unsupported platform: $PLATFORM" && exit 1 ;; \
esac
# Set up environment for cross-compilation
@@ -50,37 +71,37 @@ ENV CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++
WORKDIR /usr/src/rustfs
# Copy Cargo files for dependency caching
COPY Cargo.toml Cargo.lock ./
COPY */Cargo.toml ./*/
# Create dummy main.rs files for dependency compilation
RUN find . -name "Cargo.toml" -not -path "./Cargo.toml" | \
xargs -I {} dirname {} | \
xargs -I {} sh -c 'mkdir -p {}/src && echo "fn main() {}" > {}/src/main.rs'
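# Caching trick: with only the manifests and stub main.rs files present, the next layer
# compiles dependencies once and is reused until Cargo.toml/Cargo.lock change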
# Build dependencies only (cache layer)
RUN case "$TARGETPLATFORM" in \
"linux/amd64") cargo build --release --target x86_64-unknown-linux-gnu ;; \
"linux/arm64") cargo build --release --target aarch64-unknown-linux-gnu ;; \
esac
# Copy source code
# Copy all source code
COPY . .
# Configure cargo for optimized builds
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
CARGO_INCREMENTAL=0 \
CARGO_PROFILE_RELEASE_DEBUG=false \
CARGO_PROFILE_RELEASE_SPLIT_DEBUGINFO=off \
CARGO_PROFILE_RELEASE_STRIP=symbols
# Generate protobuf code
RUN cargo run --bin gproto
# Build the actual application
# Build the actual application with optimizations
RUN case "$TARGETPLATFORM" in \
"linux/amd64") \
cargo build --release --target x86_64-unknown-linux-gnu --bin rustfs && \
echo "🔨 Building for amd64..." && \
rustup target add x86_64-unknown-linux-gnu && \
cargo build --release --target x86_64-unknown-linux-gnu --bin rustfs -j $(nproc) && \
cp target/x86_64-unknown-linux-gnu/release/rustfs /usr/local/bin/rustfs \
;; \
"linux/arm64") \
cargo build --release --target aarch64-unknown-linux-gnu --bin rustfs && \
echo "🔨 Building for arm64..." && \
rustup target add aarch64-unknown-linux-gnu && \
cargo build --release --target aarch64-unknown-linux-gnu --bin rustfs -j $(nproc) && \
cp target/aarch64-unknown-linux-gnu/release/rustfs /usr/local/bin/rustfs \
;; \
*) \
echo "❌ Unsupported platform: $TARGETPLATFORM" && exit 1 \
;; \
esac
# Runtime stage - Ubuntu minimal for better compatibility
@@ -111,11 +132,19 @@ RUN chmod +x /app/rustfs && chown rustfs:rustfs /app/rustfs
USER rustfs
# Expose ports
EXPOSE 9000 9001
EXPOSE 9000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:9000/health || exit 1
# Environment variables
ENV RUSTFS_ACCESS_KEY=rustfsadmin \
RUSTFS_SECRET_KEY=rustfsadmin \
RUSTFS_ADDRESS=":9000" \
RUSTFS_CONSOLE_ENABLE=true \
RUSTFS_VOLUMES=/data \
RUST_LOG=warn
# Volume for data
VOLUME ["/data"]
# Set default command
CMD ["/app/rustfs"]

Makefile

@@ -5,7 +5,9 @@
DOCKER_CLI ?= docker
IMAGE_NAME ?= rustfs:v1.0.0
CONTAINER_NAME ?= rustfs-dev
DOCKERFILE_PATH = $(shell pwd)/.docker
# Docker build configurations
DOCKERFILE_PRODUCTION = Dockerfile
DOCKERFILE_SOURCE = Dockerfile.source
# Code quality and formatting targets
.PHONY: fmt
@@ -31,7 +33,8 @@ check:
.PHONY: test
test:
@echo "🧪 Running tests..."
cargo test --all --exclude e2e_test
cargo nextest run --all --exclude e2e_test
cargo test --all --doc
.PHONY: pre-commit
pre-commit: fmt clippy check test
@@ -43,21 +46,6 @@ setup-hooks:
chmod +x .git/hooks/pre-commit
@echo "✅ Git hooks setup complete!"
.PHONY: init-devenv
init-devenv:
$(DOCKER_CLI) build -t $(IMAGE_NAME) -f $(DOCKERFILE_PATH)/Dockerfile.devenv .
$(DOCKER_CLI) stop $(CONTAINER_NAME)
$(DOCKER_CLI) rm $(CONTAINER_NAME)
$(DOCKER_CLI) run -d --name $(CONTAINER_NAME) -p 9010:9010 -p 9000:9000 -v $(shell pwd):/root/s3-rustfs -it $(IMAGE_NAME)
.PHONY: start
start:
$(DOCKER_CLI) start $(CONTAINER_NAME)
.PHONY: stop
stop:
$(DOCKER_CLI) stop $(CONTAINER_NAME)
.PHONY: e2e-server
e2e-server:
sh $(shell pwd)/scripts/run.sh
@@ -66,86 +54,184 @@ e2e-server:
probe-e2e:
sh $(shell pwd)/scripts/probe.sh
# make BUILD_OS=ubuntu22.04 build
# in target/ubuntu22.04/release/rustfs
# make BUILD_OS=rockylinux9.3 build
# in target/rockylinux9.3/release/rustfs
BUILD_OS ?= rockylinux9.3
# Native build using build-rustfs.sh script
.PHONY: build
build: ROCKYLINUX_BUILD_IMAGE_NAME = rustfs-$(BUILD_OS):v1
build: ROCKYLINUX_BUILD_CONTAINER_NAME = rustfs-$(BUILD_OS)-build
build: BUILD_CMD = /root/.cargo/bin/cargo build --release --bin rustfs --target-dir /root/s3-rustfs/target/$(BUILD_OS)
build:
$(DOCKER_CLI) build -t $(ROCKYLINUX_BUILD_IMAGE_NAME) -f $(DOCKERFILE_PATH)/Dockerfile.$(BUILD_OS) .
$(DOCKER_CLI) run --rm --name $(ROCKYLINUX_BUILD_CONTAINER_NAME) -v $(shell pwd):/root/s3-rustfs -it $(ROCKYLINUX_BUILD_IMAGE_NAME) $(BUILD_CMD)
@echo "🔨 Building RustFS using build-rustfs.sh script..."
./build-rustfs.sh
.PHONY: build-dev
build-dev:
@echo "🔨 Building RustFS in development mode..."
./build-rustfs.sh --dev
# Docker-based build (alternative approach)
# Usage: make BUILD_OS=ubuntu22.04 build-docker
# Output: target/ubuntu22.04/release/rustfs
BUILD_OS ?= rockylinux9.3
.PHONY: build-docker
build-docker: SOURCE_BUILD_IMAGE_NAME = rustfs-$(BUILD_OS):v1
build-docker: SOURCE_BUILD_CONTAINER_NAME = rustfs-$(BUILD_OS)-build
build-docker: BUILD_CMD = /root/.cargo/bin/cargo build --release --bin rustfs --target-dir /root/s3-rustfs/target/$(BUILD_OS)
build-docker:
@echo "🐳 Building RustFS using Docker ($(BUILD_OS))..."
$(DOCKER_CLI) build -t $(SOURCE_BUILD_IMAGE_NAME) -f $(DOCKERFILE_SOURCE) .
$(DOCKER_CLI) run --rm --name $(SOURCE_BUILD_CONTAINER_NAME) -v $(shell pwd):/root/s3-rustfs -it $(SOURCE_BUILD_IMAGE_NAME) $(BUILD_CMD)
.PHONY: build-musl
build-musl:
@echo "🔨 Building rustfs for x86_64-unknown-linux-musl..."
cargo build --target x86_64-unknown-linux-musl --bin rustfs -r
@echo "💡 On macOS/Windows, use 'make build-docker' or 'make docker-dev' instead"
./build-rustfs.sh --platform x86_64-unknown-linux-musl
.PHONY: build-gnu
build-gnu:
@echo "🔨 Building rustfs for x86_64-unknown-linux-gnu..."
cargo build --target x86_64-unknown-linux-gnu --bin rustfs -r
@echo "💡 On macOS/Windows, use 'make build-docker' or 'make docker-dev' instead"
./build-rustfs.sh --platform x86_64-unknown-linux-gnu
.PHONY: deploy-dev
deploy-dev: build-musl
@echo "🚀 Deploying to dev server: $${IP}"
./scripts/dev_deploy.sh $${IP}
# Multi-architecture Docker build targets
.PHONY: docker-build-multiarch
docker-build-multiarch:
@echo "🏗️ Building multi-architecture Docker images..."
./scripts/build-docker-multiarch.sh
# ========================================================================================
# Docker Multi-Architecture Builds (Primary Methods)
# ========================================================================================
.PHONY: docker-build-multiarch-push
docker-build-multiarch-push:
@echo "🚀 Building and pushing multi-architecture Docker images..."
./scripts/build-docker-multiarch.sh --push
# Production builds using docker-buildx.sh (for CI/CD and production)
.PHONY: docker-buildx
docker-buildx:
@echo "🏗️ Building multi-architecture production Docker images with buildx..."
./docker-buildx.sh
.PHONY: docker-build-multiarch-version
docker-build-multiarch-version:
.PHONY: docker-buildx-push
docker-buildx-push:
@echo "🚀 Building and pushing multi-architecture production Docker images with buildx..."
./docker-buildx.sh --push
.PHONY: docker-buildx-version
docker-buildx-version:
@if [ -z "$(VERSION)" ]; then \
echo "❌ 错误: 请指定版本, 例如: make docker-build-multiarch-version VERSION=v1.0.0"; \
echo "❌ 错误: 请指定版本, 例如: make docker-buildx-version VERSION=v1.0.0"; \
exit 1; \
fi
@echo "🏗️ Building multi-architecture Docker images (version: $(VERSION))..."
./scripts/build-docker-multiarch.sh --version $(VERSION)
@echo "🏗️ Building multi-architecture production Docker images (version: $(VERSION))..."
./docker-buildx.sh --release $(VERSION)
.PHONY: docker-push-multiarch-version
docker-push-multiarch-version:
.PHONY: docker-buildx-push-version
docker-buildx-push-version:
@if [ -z "$(VERSION)" ]; then \
echo "❌ 错误: 请指定版本, 例如: make docker-push-multiarch-version VERSION=v1.0.0"; \
echo "❌ 错误: 请指定版本, 例如: make docker-buildx-push-version VERSION=v1.0.0"; \
exit 1; \
fi
@echo "🚀 Building and pushing multi-architecture Docker images (version: $(VERSION))..."
./scripts/build-docker-multiarch.sh --version $(VERSION) --push
@echo "🚀 Building and pushing multi-architecture production Docker images (version: $(VERSION))..."
./docker-buildx.sh --release $(VERSION) --push
.PHONY: docker-build-ubuntu
docker-build-ubuntu:
@echo "🏗️ Building multi-architecture Ubuntu Docker images..."
./scripts/build-docker-multiarch.sh --type ubuntu
# Development/Source builds using direct buildx commands
.PHONY: docker-dev
docker-dev:
@echo "🏗️ Building multi-architecture development Docker images with buildx..."
@echo "💡 This builds from source code and is intended for local development and testing"
@echo "⚠️ Multi-arch images cannot be loaded locally, use docker-dev-push to push to registry"
$(DOCKER_CLI) buildx build \
--platform linux/amd64,linux/arm64 \
--file $(DOCKERFILE_SOURCE) \
--tag rustfs:source-latest \
--tag rustfs:dev-latest \
.
.PHONY: docker-build-rockylinux
docker-build-rockylinux:
@echo "🏗️ Building multi-architecture RockyLinux Docker images..."
./scripts/build-docker-multiarch.sh --type rockylinux
.PHONY: docker-dev-local
docker-dev-local:
@echo "🏗️ Building single-architecture development Docker image for local use..."
@echo "💡 This builds from source code for the current platform and loads locally"
$(DOCKER_CLI) buildx build \
--file $(DOCKERFILE_SOURCE) \
--tag rustfs:source-latest \
--tag rustfs:dev-latest \
--load \
.
.PHONY: docker-build-devenv
docker-build-devenv:
@echo "🏗️ Building multi-architecture development environment Docker images..."
./scripts/build-docker-multiarch.sh --type devenv
.PHONY: docker-dev-push
docker-dev-push:
@if [ -z "$(REGISTRY)" ]; then \
echo "❌ 错误: 请指定镜像仓库, 例如: make docker-dev-push REGISTRY=ghcr.io/username"; \
exit 1; \
fi
@echo "🚀 Building and pushing multi-architecture development Docker images..."
@echo "💡 推送到仓库: $(REGISTRY)"
$(DOCKER_CLI) buildx build \
--platform linux/amd64,linux/arm64 \
--file $(DOCKERFILE_SOURCE) \
--tag $(REGISTRY)/rustfs:source-latest \
--tag $(REGISTRY)/rustfs:dev-latest \
--push \
.
.PHONY: docker-build-all-types
docker-build-all-types:
@echo "🏗️ Building all multi-architecture Docker image types..."
./scripts/build-docker-multiarch.sh --type production
./scripts/build-docker-multiarch.sh --type ubuntu
./scripts/build-docker-multiarch.sh --type rockylinux
./scripts/build-docker-multiarch.sh --type devenv
# Local production builds using direct buildx (alternative to docker-buildx.sh)
.PHONY: docker-buildx-production-local
docker-buildx-production-local:
@echo "🏗️ Building single-architecture production Docker image locally..."
@echo "💡 Alternative to docker-buildx.sh for local testing"
$(DOCKER_CLI) buildx build \
--file $(DOCKERFILE_PRODUCTION) \
--tag rustfs:production-latest \
--tag rustfs:latest \
--load \
--build-arg RELEASE=latest \
.
# ========================================================================================
# Single Architecture Docker Builds (Traditional)
# ========================================================================================
.PHONY: docker-build-production
docker-build-production:
@echo "🏗️ Building single-architecture production Docker image..."
@echo "💡 Consider using 'make docker-buildx-production-local' for multi-arch support"
$(DOCKER_CLI) build -f $(DOCKERFILE_PRODUCTION) -t rustfs:latest .
.PHONY: docker-build-source
docker-build-source:
@echo "🏗️ Building single-architecture source Docker image..."
@echo "💡 Consider using 'make docker-dev-local' for multi-arch support"
$(DOCKER_CLI) build -f $(DOCKERFILE_SOURCE) -t rustfs:source .
# ========================================================================================
# Development Environment
# ========================================================================================
.PHONY: dev-env-start
dev-env-start:
@echo "🚀 Starting development environment..."
$(DOCKER_CLI) buildx build \
--file $(DOCKERFILE_SOURCE) \
--tag rustfs:dev \
--load \
.
$(DOCKER_CLI) stop $(CONTAINER_NAME) 2>/dev/null || true
$(DOCKER_CLI) rm $(CONTAINER_NAME) 2>/dev/null || true
$(DOCKER_CLI) run -d --name $(CONTAINER_NAME) \
-p 9010:9010 -p 9000:9000 \
-v $(shell pwd):/workspace \
-it rustfs:dev
.PHONY: dev-env-stop
dev-env-stop:
@echo "🛑 Stopping development environment..."
$(DOCKER_CLI) stop $(CONTAINER_NAME) 2>/dev/null || true
$(DOCKER_CLI) rm $(CONTAINER_NAME) 2>/dev/null || true
.PHONY: dev-env-restart
dev-env-restart: dev-env-stop dev-env-start
# ========================================================================================
# Build Utilities
# ========================================================================================
.PHONY: docker-inspect-multiarch
docker-inspect-multiarch:
@@ -159,41 +245,106 @@ docker-inspect-multiarch:
.PHONY: build-cross-all
build-cross-all:
@echo "🔧 Building all target architectures..."
@if ! command -v cross &> /dev/null; then \
echo "📦 Installing cross..."; \
cargo install cross; \
fi
@echo "💡 On macOS/Windows, use 'make docker-dev' for reliable multi-arch builds"
@echo "🔨 Generating protobuf code..."
cargo run --bin gproto || true
@echo "🔨 Building x86_64-unknown-linux-musl..."
cargo build --release --target x86_64-unknown-linux-musl --bin rustfs
./build-rustfs.sh --platform x86_64-unknown-linux-musl
@echo "🔨 Building aarch64-unknown-linux-gnu..."
cross build --release --target aarch64-unknown-linux-gnu --bin rustfs
./build-rustfs.sh --platform aarch64-unknown-linux-gnu
@echo "✅ All architectures built successfully!"
# ========================================================================================
# Help and Documentation
# ========================================================================================
.PHONY: help-build
help-build:
@echo "🔨 RustFS 构建帮助:"
@echo ""
@echo "🚀 本地构建 (推荐使用):"
@echo " make build # 构建 RustFS 二进制文件 (默认包含 console)"
@echo " make build-dev # 开发模式构建"
@echo " make build-musl # 构建 musl 版本"
@echo " make build-gnu # 构建 GNU 版本"
@echo ""
@echo "🐳 Docker 构建:"
@echo " make build-docker # 使用 Docker 容器构建"
@echo " make build-docker BUILD_OS=ubuntu22.04 # 指定构建系统"
@echo ""
@echo "🏗️ 跨架构构建:"
@echo " make build-cross-all # 构建所有架构的二进制文件"
@echo ""
@echo "🔧 直接使用 build-rustfs.sh 脚本:"
@echo " ./build-rustfs.sh --help # 查看脚本帮助"
@echo " ./build-rustfs.sh --no-console # 构建时跳过 console 资源"
@echo " ./build-rustfs.sh --force-console-update # 强制更新 console 资源"
@echo " ./build-rustfs.sh --dev # 开发模式构建"
@echo " ./build-rustfs.sh --sign # 签名二进制文件"
@echo " ./build-rustfs.sh --platform x86_64-unknown-linux-musl # 指定目标平台"
@echo " ./build-rustfs.sh --skip-verification # 跳过二进制验证"
@echo ""
@echo "💡 build-rustfs.sh 脚本提供了更多选项、智能检测和二进制验证功能"
.PHONY: help-docker
help-docker:
@echo "🐳 Docker 多架构构建帮助:"
@echo ""
@echo "基本构建:"
@echo " make docker-build-multiarch # 构建多架构镜像(不推送)"
@echo " make docker-build-multiarch-push # 构建并推送多架构镜像"
@echo "🚀 生产镜像构建 (推荐使用 docker-buildx.sh):"
@echo " make docker-buildx # 构建生产多架构镜像(不推送)"
@echo " make docker-buildx-push # 构建并推送生产多架构镜像"
@echo " make docker-buildx-version VERSION=v1.0.0 # 构建指定版本"
@echo " make docker-buildx-push-version VERSION=v1.0.0 # 构建并推送指定版本"
@echo ""
@echo "版本构建:"
@echo " make docker-build-multiarch-version VERSION=v1.0.0 # 构建指定版本"
@echo " make docker-push-multiarch-version VERSION=v1.0.0 # 构建并推送指定版本"
@echo "🔧 开发/源码镜像构建 (本地开发测试):"
@echo " make docker-dev # 构建开发多架构镜像(无法本地加载)"
@echo " make docker-dev-local # 构建开发单架构镜像(本地加载)"
@echo " make docker-dev-push REGISTRY=xxx # 构建并推送开发镜像"
@echo ""
@echo "镜像类型:"
@echo " make docker-build-ubuntu # 构建 Ubuntu 镜像"
@echo " make docker-build-rockylinux # 构建 RockyLinux 镜像"
@echo " make docker-build-devenv # 构建开发环境镜像"
@echo " make docker-build-all-types # 构建所有类型镜像"
@echo "🏗️ 本地生产镜像构建 (替代方案):"
@echo " make docker-buildx-production-local # 本地构建生产单架构镜像"
@echo ""
@echo "辅助工具:"
@echo "📦 单架构构建 (传统方式):"
@echo " make docker-build-production # 构建单架构生产镜像"
@echo " make docker-build-source # 构建单架构源码镜像"
@echo ""
@echo "🚀 开发环境管理:"
@echo " make dev-env-start # 启动开发容器环境"
@echo " make dev-env-stop # 停止开发容器环境"
@echo " make dev-env-restart # 重启开发容器环境"
@echo ""
@echo "🔧 辅助工具:"
@echo " make build-cross-all # 构建所有架构的二进制文件"
@echo " make docker-inspect-multiarch IMAGE=xxx # 检查镜像的架构支持"
@echo ""
@echo "环境变量 (在推送时需要设置):"
@echo "📋 环境变量:"
@echo " REGISTRY 镜像仓库地址 (推送时需要)"
@echo " DOCKERHUB_USERNAME Docker Hub 用户名"
@echo " DOCKERHUB_TOKEN Docker Hub 访问令牌"
@echo " GITHUB_TOKEN GitHub 访问令牌"
@echo ""
@echo "💡 建议:"
@echo " - 生产用途: 使用 docker-buildx* 命令 (基于预编译二进制)"
@echo " - 本地开发: 使用 docker-dev* 命令 (从源码构建)"
@echo " - 开发环境: 使用 dev-env-* 命令管理开发容器"
.PHONY: help
help:
@echo "🦀 RustFS Makefile 帮助:"
@echo ""
@echo "📋 主要命令分类:"
@echo " make help-build # 显示构建相关帮助"
@echo " make help-docker # 显示 Docker 相关帮助"
@echo ""
@echo "🔧 代码质量:"
@echo " make fmt # 格式化代码"
@echo " make clippy # 运行 clippy 检查"
@echo " make test # 运行测试"
@echo " make pre-commit # 运行所有预提交检查"
@echo ""
@echo "🚀 快速开始:"
@echo " make build # 构建 RustFS 二进制"
@echo " make docker-dev-local # 构建开发 Docker 镜像(本地)"
@echo " make dev-env-start # 启动开发环境"
@echo ""
@echo "💡 更多帮助请使用 'make help-build' 或 'make help-docker'"

README.md
View File

@@ -7,6 +7,7 @@
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<a href="https://hellogithub.com/repository/rustfs/rustfs" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=b95bcb72bdc340b68f16fdf6790b7d5b&claim_uid=MsbvjYeLDKAH457&theme=small" alt="FeaturedHelloGitHub" /></a>
</p>
<p align="center">
@@ -17,11 +18,21 @@
</p>
<p align="center">
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a>
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a> |
<!-- Keep these links. Translations will automatically update with the README. -->
<a href="https://readme-i18n.com/rustfs/rustfs?lang=de">Deutsch</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=es">Español</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=fr">français</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ja">日本語</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ko">한국어</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=pt">Português</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ru">Русский</a>
</p>
RustFS is high-performance distributed object storage software built with Rust, one of the most popular programming languages worldwide. Like MinIO, it offers simplicity, S3 compatibility, an open-source codebase, and support for data lakes, AI, and big data. It also carries a more user-friendly open-source license than many other storage systems, being released under the Apache license. With Rust as its foundation, RustFS delivers faster performance and safer distributed capabilities for high-performance object storage.
> ⚠️ **RustFS is under rapid development. Do NOT use in production environments!**
## Features
- **High Performance**: Built with Rust, ensuring speed and efficiency.
@@ -61,7 +72,7 @@ Stress test server parameters
To get started with RustFS, follow these steps:
1. **One-click installation script (Option 1)**
1. **One-click installation script (Option 1)**
```bash
curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
@@ -70,13 +81,52 @@ To get started with RustFS, follow these steps:
2. **Docker Quick Start (Option 2)**
```bash
podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
# Latest stable release
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:latest
# Development version (main branch)
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:main-latest
# Specific version
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:v1.0.0
```
3. **Build from Source (Option 3) - Advanced Users**
3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console; the default username and password are `rustfsadmin`.
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.
For developers who want to build RustFS Docker images from source with multi-architecture support:
```bash
# Build multi-architecture images locally
./docker-buildx.sh --build-arg RELEASE=latest
# Build and push to registry
./docker-buildx.sh --push
# Build specific version
./docker-buildx.sh --release v1.0.0 --push
# Build for custom registry
./docker-buildx.sh --registry your-registry.com --namespace yourname --push
```
The `docker-buildx.sh` script supports:
- **Multi-architecture builds**: `linux/amd64`, `linux/arm64`
- **Automatic version detection**: Uses git tags or commit hashes
- **Registry flexibility**: Supports Docker Hub, GitHub Container Registry, etc.
- **Build optimization**: Includes caching and parallel builds
You can also use Make targets for convenience:
```bash
make docker-buildx # Build locally
make docker-buildx-push # Build and push
make docker-buildx-version VERSION=v1.0.0 # Build specific version
make help-docker # Show all Docker-related commands
```
4. **Access the Console**: Open your web browser and navigate to `http://localhost:9000` to access the RustFS console; the default username and password are `rustfsadmin`.
5. **Create a Bucket**: Use the console to create a new bucket for your objects.
6. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.
## Documentation
@@ -109,7 +159,7 @@ If you have any questions or need assistance, you can:
RustFS is a community-driven project, and we appreciate all contributions. Check out the [Contributors](https://github.com/rustfs/rustfs/graphs/contributors) page to see the amazing people who have helped make RustFS better.
<a href="https://github.com/rustfs/rustfs/graphs/contributors">
<img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
<img src="https://opencollective.com/rustfs/contributors.svg?width=890&limit=500&button=false" />
</a>
## License

README_ZH.md
View File

@@ -7,6 +7,7 @@
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<a href="https://hellogithub.com/repository/rustfs/rustfs" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=b95bcb72bdc340b68f16fdf6790b7d5b&claim_uid=MsbvjYeLDKAH457&theme=small" alt="FeaturedHelloGitHub" /></a>
</p >
<p align="center">
@@ -61,7 +62,7 @@ RustFS 是一个使用 Rust全球最受欢迎的编程语言之一构建
要开始使用 RustFS，请按照以下步骤操作：
1. **一键脚本快速启动 (方案一)**
1. **一键脚本快速启动 (方案一)**
```bash
curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
@@ -70,11 +71,10 @@ RustFS 是一个使用 Rust全球最受欢迎的编程语言之一构建
2. **Docker快速启动（方案二）**
```bash
podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs
```
3. **访问控制台**:打开 Web 浏览器并导航到 `http://localhost:9001` 以访问 RustFS 控制台,默认的用户名和密码是 `rustfsadmin` 。
3. **访问控制台**:打开 Web 浏览器并导航到 `http://localhost:9000` 以访问 RustFS 控制台,默认的用户名和密码是 `rustfsadmin` 。
4. **创建存储桶**:使用控制台为您的对象创建新的存储桶。
5. **上传对象**:您可以直接通过控制台上传文件,或使用 S3 兼容的 API 与您的 RustFS 实例交互。
@@ -109,7 +109,7 @@ RustFS 是一个使用 Rust全球最受欢迎的编程语言之一构建
RustFS 是一个社区驱动的项目,我们感谢所有的贡献。查看[贡献者](https://github.com/rustfs/rustfs/graphs/contributors)页面,了解帮助 RustFS 变得更好的杰出人员。
<a href="https://github.com/rustfs/rustfs/graphs/contributors">
<img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
<img src="https://opencollective.com/rustfs/contributors.svg?width=890&limit=500&button=false" />
</a >
## 许可证

564
build-rustfs.sh Executable file
View File

@@ -0,0 +1,564 @@
#!/bin/bash
# RustFS Binary Build Script
# This script compiles RustFS binaries for different platforms and architectures
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Auto-detect current platform
detect_platform() {
local arch=$(uname -m)
local os=$(uname -s | tr '[:upper:]' '[:lower:]')
case "$os" in
"linux")
case "$arch" in
"x86_64")
echo "x86_64-unknown-linux-musl"
;;
"aarch64"|"arm64")
echo "aarch64-unknown-linux-musl"
;;
"armv7l")
echo "armv7-unknown-linux-musleabihf"
;;
*)
echo "unknown-platform"
;;
esac
;;
"darwin")
case "$arch" in
"x86_64")
echo "x86_64-apple-darwin"
;;
"arm64"|"aarch64")
echo "aarch64-apple-darwin"
;;
*)
echo "unknown-platform"
;;
esac
;;
*)
echo "unknown-platform"
;;
esac
}
# Cross-platform SHA256 checksum generation
generate_sha256() {
local file="$1"
local output_file="$2"
local os=$(uname -s | tr '[:upper:]' '[:lower:]')
case "$os" in
"linux")
if command -v sha256sum &> /dev/null; then
sha256sum "$file" > "$output_file"
elif command -v shasum &> /dev/null; then
shasum -a 256 "$file" > "$output_file"
else
print_message $RED "❌ No SHA256 command found (sha256sum or shasum)"
return 1
fi
;;
"darwin")
if command -v shasum &> /dev/null; then
shasum -a 256 "$file" > "$output_file"
elif command -v sha256sum &> /dev/null; then
sha256sum "$file" > "$output_file"
else
print_message $RED "❌ No SHA256 command found (shasum or sha256sum)"
return 1
fi
;;
*)
# Try common commands in order
if command -v sha256sum &> /dev/null; then
sha256sum "$file" > "$output_file"
elif command -v shasum &> /dev/null; then
shasum -a 256 "$file" > "$output_file"
else
print_message $RED "❌ No SHA256 command found"
return 1
fi
;;
esac
}
# Default values
OUTPUT_DIR="target/release"
PLATFORM=$(detect_platform) # Auto-detect current platform
BINARY_NAME="rustfs"
BUILD_TYPE="release"
SIGN=false
WITH_CONSOLE=true
FORCE_CONSOLE_UPDATE=false
CONSOLE_VERSION="latest"
SKIP_VERIFICATION=false
CUSTOM_PLATFORM=""
# Print usage
usage() {
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Description:"
echo " Build RustFS binary for the current platform. Designed for CI/CD pipelines"
echo " where different runners build platform-specific binaries natively."
echo " Includes automatic verification to ensure the built binary is functional."
echo ""
echo "Options:"
echo " -o, --output-dir DIR Output directory (default: target/release)"
echo " -b, --binary-name NAME Binary name (default: rustfs)"
echo " -p, --platform TARGET Target platform (default: auto-detect)"
echo " --dev Build in dev mode"
echo " --sign Sign binaries after build"
echo " --with-console Download console static assets (default)"
echo " --no-console Skip console static assets"
echo " --force-console-update Force update console assets even if they exist"
echo " --console-version VERSION Console version to download (default: latest)"
echo " --skip-verification Skip binary verification after build"
echo " -h, --help Show this help message"
echo ""
echo "Examples:"
echo " $0 # Build for current platform (includes console assets)"
echo " $0 --dev # Development build"
echo " $0 --sign # Build and sign binary (release CI)"
echo " $0 --no-console # Build without console static assets"
echo " $0 --force-console-update # Force update console assets"
echo " $0 --platform x86_64-unknown-linux-musl # Build for specific platform"
echo " $0 --skip-verification # Skip binary verification (for cross-compilation)"
echo ""
echo "Detected platform: $(detect_platform)"
echo "CI Usage: Run this script on each platform's runner to build native binaries"
}
# Print colored message
print_message() {
local color=$1
local message=$2
echo -e "${color}${message}${NC}"
}
# Get version from git
get_version() {
if git describe --abbrev=0 --tags >/dev/null 2>&1; then
git describe --abbrev=0 --tags
else
git rev-parse --short HEAD
fi
}
# Setup rust environment
setup_rust_environment() {
print_message $BLUE "🔧 Setting up Rust environment..."
# Install required target for current platform
print_message $YELLOW "Installing target: $PLATFORM"
rustup target add "$PLATFORM"
# Set up environment variables for musl targets
if [[ "$PLATFORM" == *"musl"* ]]; then
print_message $YELLOW "Setting up environment for musl target..."
export RUSTFLAGS="-C target-feature=-crt-static"
# For cargo-zigbuild, set up additional environment variables
if command -v cargo-zigbuild &> /dev/null; then
print_message $YELLOW "Configuring cargo-zigbuild for musl target..."
# Set environment variables for better musl support
export CC_x86_64_unknown_linux_musl="zig cc -target x86_64-linux-musl"
export CXX_x86_64_unknown_linux_musl="zig c++ -target x86_64-linux-musl"
export AR_x86_64_unknown_linux_musl="zig ar"
export CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_LINKER="zig cc -target x86_64-linux-musl"
export CC_aarch64_unknown_linux_musl="zig cc -target aarch64-linux-musl"
export CXX_aarch64_unknown_linux_musl="zig c++ -target aarch64-linux-musl"
export AR_aarch64_unknown_linux_musl="zig ar"
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER="zig cc -target aarch64-linux-musl"
# Set environment variables for zstd-sys to avoid target parsing issues
export ZSTD_SYS_USE_PKG_CONFIG=1
export PKG_CONFIG_ALLOW_CROSS=1
fi
fi
# Install required tools
if [ "$SIGN" = true ]; then
if ! command -v minisign &> /dev/null; then
print_message $YELLOW "Installing minisign for binary signing..."
cargo install minisign
fi
fi
}
# Download console static assets
download_console_assets() {
local static_dir="rustfs/static"
local console_exists=false
# Check if console assets already exist
if [ -d "$static_dir" ] && [ -f "$static_dir/index.html" ]; then
console_exists=true
local static_size=$(du -sh "$static_dir" 2>/dev/null | cut -f1 || echo "unknown")
print_message $YELLOW "Console static assets already exist ($static_size)"
fi
# Determine if we need to download
local should_download=false
if [ "$WITH_CONSOLE" = true ]; then
if [ "$console_exists" = false ]; then
print_message $BLUE "🎨 Console assets not found, downloading..."
should_download=true
elif [ "$FORCE_CONSOLE_UPDATE" = true ]; then
print_message $BLUE "🎨 Force updating console assets..."
should_download=true
else
print_message $GREEN "✅ Console assets already available, skipping download"
fi
else
if [ "$console_exists" = true ]; then
print_message $GREEN "✅ Using existing console assets"
else
print_message $YELLOW "⚠️ Console assets not found. Use --download-console to download them."
fi
fi
if [ "$should_download" = true ]; then
print_message $BLUE "📥 Downloading console static assets..."
# Create static directory
mkdir -p "$static_dir"
# Download from GitHub Releases (consistent with Docker build)
local download_url
if [ "$CONSOLE_VERSION" = "latest" ]; then
print_message $YELLOW "Getting latest console release info..."
# For now, use dl.rustfs.com as fallback until GitHub Releases includes console assets
download_url="https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip"
else
download_url="https://dl.rustfs.com/artifacts/console/rustfs-console-${CONSOLE_VERSION}.zip"
fi
print_message $YELLOW "Downloading from: $download_url"
# Download with retries
local temp_file="console-assets-temp.zip"
local download_success=false
for i in {1..3}; do
if curl -L "$download_url" -o "$temp_file" --retry 3 --retry-delay 5 --max-time 300; then
download_success=true
break
else
print_message $YELLOW "Download attempt $i failed, retrying..."
sleep 2
fi
done
if [ "$download_success" = true ]; then
# Verify the downloaded file
if [ -f "$temp_file" ] && [ -s "$temp_file" ]; then
print_message $BLUE "📦 Extracting console assets..."
# Extract to static directory
if unzip -o "$temp_file" -d "$static_dir"; then
rm "$temp_file"
local final_size=$(du -sh "$static_dir" 2>/dev/null | cut -f1 || echo "unknown")
print_message $GREEN "✅ Console assets downloaded successfully ($final_size)"
else
print_message $RED "❌ Failed to extract console assets"
rm -f "$temp_file"
return 1
fi
else
print_message $RED "❌ Downloaded file is empty or invalid"
rm -f "$temp_file"
return 1
fi
else
print_message $RED "❌ Failed to download console assets after 3 attempts"
print_message $YELLOW "💡 Console assets are optional. Build will continue without them."
rm -f "$temp_file"
fi
fi
}
# Verify binary functionality
verify_binary() {
local binary_path="$1"
# Check if binary exists
if [ ! -f "$binary_path" ]; then
print_message $RED "❌ Binary file not found: $binary_path"
return 1
fi
# Check if binary is executable
if [ ! -x "$binary_path" ]; then
print_message $RED "❌ Binary is not executable: $binary_path"
return 1
fi
# Check basic functionality - try to run help command
print_message $YELLOW " Testing --help command..."
if ! "$binary_path" --help >/dev/null 2>&1; then
print_message $RED "❌ Binary failed to run --help command"
return 1
fi
# Check version command
print_message $YELLOW " Testing --version command..."
if ! "$binary_path" --version >/dev/null 2>&1; then
print_message $YELLOW "⚠️ Binary does not support --version command (this is optional)"
fi
# Try to get some basic info about the binary
local file_info=$(file "$binary_path" 2>/dev/null || echo "unknown")
print_message $YELLOW " Binary info: $file_info"
# Check if it's a valid ELF/Mach-O binary
if command -v readelf >/dev/null 2>&1; then
if readelf -h "$binary_path" >/dev/null 2>&1; then
print_message $YELLOW " ELF binary structure: valid"
fi
elif command -v otool >/dev/null 2>&1; then
if otool -h "$binary_path" >/dev/null 2>&1; then
print_message $YELLOW " Mach-O binary structure: valid"
fi
fi
return 0
}
# Build binary for current platform
build_binary() {
local version=$(get_version)
local output_file="${OUTPUT_DIR}/${PLATFORM}/${BINARY_NAME}"
print_message $BLUE "🏗️ Building for platform: $PLATFORM"
print_message $YELLOW " Version: $version"
print_message $YELLOW " Output: $output_file"
# Create output directory
mkdir -p "${OUTPUT_DIR}/${PLATFORM}"
# Simple build logic matching the working version (4fb4b353)
# Force rebuild by touching build.rs
touch rustfs/build.rs
# Determine build command based on platform and cross-compilation needs
local build_cmd=""
local current_platform=$(detect_platform)
print_message $BLUE "📦 Using working version build logic..."
# Check if we need cross-compilation
if [ "$PLATFORM" != "$current_platform" ]; then
# Cross-compilation needed
if [[ "$PLATFORM" == *"apple-darwin"* ]]; then
print_message $RED "❌ macOS cross-compilation not supported"
print_message $YELLOW "💡 macOS targets must be built natively on macOS runners"
return 1
elif [[ "$PLATFORM" == *"windows"* ]]; then
# Use cross for Windows ARM64
if ! command -v cross &> /dev/null; then
print_message $YELLOW "📦 Installing cross tool..."
cargo install cross --git https://github.com/cross-rs/cross
fi
build_cmd="cross build"
else
# Use zigbuild for Linux ARM64 (matches working version)
if ! command -v cargo-zigbuild &> /dev/null; then
print_message $RED "❌ cargo-zigbuild not found. Please install it first."
return 1
fi
build_cmd="cargo zigbuild"
fi
else
# Native compilation
build_cmd="cargo build"
fi
if [ "$BUILD_TYPE" = "release" ]; then
build_cmd+=" --release"
fi
build_cmd+=" --target $PLATFORM"
build_cmd+=" -p rustfs --bins"
print_message $BLUE "📦 Executing: $build_cmd"
# Execute build (this matches exactly what the working version does)
if eval $build_cmd; then
print_message $GREEN "✅ Successfully built for $PLATFORM"
# Copy binary to output directory
cp "target/${PLATFORM}/${BUILD_TYPE}/${BINARY_NAME}" "$output_file"
# Generate checksums
print_message $BLUE "🔐 Generating checksums..."
(cd "${OUTPUT_DIR}/${PLATFORM}" && generate_sha256 "${BINARY_NAME}" "${BINARY_NAME}.sha256sum")
# Verify binary functionality (if not skipped)
if [ "$SKIP_VERIFICATION" = false ]; then
print_message $BLUE "🔍 Verifying binary functionality..."
if verify_binary "$output_file"; then
print_message $GREEN "✅ Binary verification passed"
else
print_message $RED "❌ Binary verification failed"
return 1
fi
else
print_message $YELLOW "⚠️ Binary verification skipped by user request"
fi
# Sign binary if requested
if [ "$SIGN" = true ]; then
print_message $BLUE "✍️ Signing binary..."
(cd "${OUTPUT_DIR}/${PLATFORM}" && minisign -S -m "${BINARY_NAME}" -s ~/.minisign/minisign.key)
fi
print_message $GREEN "✅ Build completed successfully"
else
print_message $RED "❌ Failed to build for $PLATFORM"
return 1
fi
}
# Main build function
build_rustfs() {
local version=$(get_version)
print_message $BLUE "🚀 Starting RustFS binary build process..."
print_message $YELLOW " Version: $version"
print_message $YELLOW " Platform: $PLATFORM"
print_message $YELLOW " Output Directory: $OUTPUT_DIR"
print_message $YELLOW " Build Type: $BUILD_TYPE"
print_message $YELLOW " Sign: $SIGN"
print_message $YELLOW " With Console: $WITH_CONSOLE"
if [ "$WITH_CONSOLE" = true ]; then
print_message $YELLOW " Console Version: $CONSOLE_VERSION"
print_message $YELLOW " Force Console Update: $FORCE_CONSOLE_UPDATE"
fi
print_message $YELLOW " Skip Verification: $SKIP_VERIFICATION"
echo ""
# Setup environment
setup_rust_environment
echo ""
# Download console assets if requested
download_console_assets
echo ""
# Build binary
build_binary
echo ""
print_message $GREEN "🎉 Build process completed successfully!"
# Show built binary
local binary_file="${OUTPUT_DIR}/${PLATFORM}/${BINARY_NAME}"
if [ -f "$binary_file" ]; then
local size=$(ls -lh "$binary_file" | awk '{print $5}')
print_message $BLUE "📋 Built binary: $binary_file ($size)"
fi
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
-o|--output-dir)
OUTPUT_DIR="$2"
shift 2
;;
-b|--binary-name)
BINARY_NAME="$2"
shift 2
;;
-p|--platform)
CUSTOM_PLATFORM="$2"
shift 2
;;
--dev)
BUILD_TYPE="debug"
shift
;;
--sign)
SIGN=true
shift
;;
--with-console)
WITH_CONSOLE=true
shift
;;
--no-console)
WITH_CONSOLE=false
shift
;;
--force-console-update)
FORCE_CONSOLE_UPDATE=true
WITH_CONSOLE=true # Auto-enable download when forcing update
shift
;;
--console-version)
CONSOLE_VERSION="$2"
shift 2
;;
--skip-verification)
SKIP_VERIFICATION=true
shift
;;
-h|--help)
usage
exit 0
;;
*)
print_message $RED "❌ Unknown option: $1"
usage
exit 1
;;
esac
done
# Main execution
main() {
print_message $BLUE "🦀 RustFS Binary Build Script"
echo ""
# Check if we're in a Rust project
if [ ! -f "Cargo.toml" ]; then
print_message $RED "❌ No Cargo.toml found. Are you in a Rust project directory?"
exit 1
fi
# Override platform if specified
if [ -n "$CUSTOM_PLATFORM" ]; then
PLATFORM="$CUSTOM_PLATFORM"
print_message $YELLOW "🎯 Using specified platform: $PLATFORM"
# Auto-enable skip verification for cross-compilation
if [ "$PLATFORM" != "$(detect_platform)" ]; then
SKIP_VERIFICATION=true
print_message $YELLOW "⚠️ Cross-compilation detected, enabling --skip-verification"
fi
fi
# Start build process
build_rustfs
}
# Run main function
main

View File

@@ -1,35 +0,0 @@
#!/bin/bash
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
clear
# Get the current platform architecture
ARCH=$(uname -m)
# Set the target directory according to the schema
if [ "$ARCH" == "x86_64" ]; then
TARGET_DIR="target/x86_64"
elif [ "$ARCH" == "aarch64" ]; then
TARGET_DIR="target/arm64"
else
TARGET_DIR="target/unknown"
fi
# Set CARGO_TARGET_DIR and build the project
CARGO_TARGET_DIR=$TARGET_DIR RUSTFLAGS="-C link-arg=-fuse-ld=mold" cargo build --release --package rustfs
echo -e "\a"
echo -e "\a"
echo -e "\a"

View File

@@ -26,7 +26,6 @@ dioxus = { workspace = true, features = ["router"] }
dirs = { workspace = true }
hex = { workspace = true }
keyring = { workspace = true }
lazy_static = { workspace = true }
rfd = { workspace = true }
rust-embed = { workspace = true, features = ["interpolate-folder-path"] }
rust-i18n = { workspace = true }

View File

@@ -37,7 +37,9 @@ copyright = "Copyright 2025 rustfs.com"
icon = [
"assets/icons/icon.icns",
"assets/icons/icon.ico"
"assets/icons/icon.ico",
"assets/icons/icon.png",
"assets/icons/rustfs-icon.png",
]
#[bundle.macos]
#provider_short_name = "RustFs"

(14 binary image files added: icon assets ranging from 498 B to 47 KiB; not shown)
View File

@@ -1,20 +1,15 @@
<svg width="1558" height="260" viewBox="0 0 1558 260" fill="none" xmlns="http://www.w3.org/2000/svg">
<g clip-path="url(#clip0_0_3)">
<path d="M1288.5 112.905H1159.75V58.4404H1262L1270 0L1074 0V260H1159.75V162.997H1296.95L1288.5 112.905Z"
fill="#0196D0"/>
<path d="M1058.62 58.4404V0H789V58.4404H881.133V260H966.885V58.4404H1058.62Z" fill="#0196D0"/>
<path d="M521 179.102V0L454.973 15V161C454.973 181.124 452.084 193.146 443.5 202C434.916 211.257 419.318 214.5 400.5 214.5C381.022 214.5 366.744 210.854 357.5 202C348.916 193.548 346.357 175.721 346.357 156V0L280 15V175.48C280 208.08 290.234 229.412 309.712 241.486C329.19 253.56 358.903 260 400.5 260C440.447 260 470.159 253.56 490.297 241.486C510.766 229.412 521 208.483 521 179.102Z"
fill="#0196D0"/>
<path d="M172.84 84.2813C172.84 97.7982 168.249 107.737 158.41 113.303C149.883 118.471 137.092 121.254 120.693 122.049V162.997C129.876 163.792 138.076 166.177 144.307 176.514L184.647 260H265L225.316 180.489C213.181 155.046 201.374 149.48 178.744 143.517C212.197 138.349 241.386 118.471 241.386 73.1499C241.386 53.2722 233.843 30.2141 218.756 17.8899C203.998 5.56575 183.991 0 159.394 0H120.693V48.5015H127.58C142.23 48.5015 153.6 51.4169 161.689 57.2477C169.233 62.8135 172.84 71.5596 172.84 84.2813ZM120.693 122.049C119.163 122.049 117.741 122.049 116.43 122.049H68.5457V48.5015H120.693V0H0V260H70.5137V162.997H110.526C113.806 162.997 117.741 162.997 120.693 162.997V122.049Z"
fill="#0196D0"/>
<path d="M774 179.297C774 160.829 766.671 144.669 752.013 131.972C738.127 119.66 712.025 110.169 673.708 103.5C662.136 101.191 651.722 99.6523 643.235 97.3437C586.532 84.6467 594.632 52.7118 650.564 52.7118C680.651 52.7118 709.582 61.946 738.127 66.9478C742.37 67.7174 743.913 68.1021 744.298 68.1021L750.47 12.697C720.383 3.46282 684.895 0 654.036 0C616.619 0 587.689 6.54088 567.245 19.2379C546.801 31.9349 536 57.7137 536 82.3382C536 103.5 543.715 119.66 559.916 131.972C575.731 143.515 604.276 152.749 645.55 160.059C658.279 162.368 668.694 163.907 676.794 166.215C685.023 168.524 691.066 170.704 694.924 172.756C702.253 176.604 706.11 182.375 706.11 188.531C706.11 196.611 701.481 202.767 692.224 207C664.836 220.081 587.689 212.001 556.83 198.15L543.715 247.784C547.186 248.169 552.972 249.323 559.916 250.477C616.619 259.327 690.681 270.869 741.212 238.935C762.814 225.468 774 206.23 774 179.297Z"
fill="#0196D0"/>
<path d="M1558 179.568C1558 160.383 1550.42 144.268 1535.67 131.99C1521.32 119.968 1494.34 110.631 1454.74 103.981C1442.38 101.679 1432.01 99.3764 1422.84 97.8416C1422.44 97.8416 1422.04 97.8416 1422.04 97.4579V112.422L1361.04 75.2038L1422.04 38.3692V52.9496C1424.7 52.9496 1427.49 52.9496 1430.41 52.9496C1461.51 52.9496 1491.42 62.5419 1521.32 67.5299C1525.31 67.9136 1526.9 67.9136 1527.3 67.9136L1533.68 12.6619C1502.98 3.83692 1465.9 0 1434 0C1395.33 0 1365.43 6.52277 1345.09 19.5683C1323.16 32.6139 1312 57.9376 1312 82.8776C1312 103.981 1320.37 120.096 1336.72 131.607C1353.46 143.885 1382.97 153.093 1425.23 160.383C1434 161.535 1441.18 162.686 1447.56 164.22L1448.36 150.791L1507.36 190.312L1445.57 224.844L1445.96 212.949C1409.68 215.635 1357.45 209.112 1333.53 197.985L1320.37 247.482C1323.56 248.249 1329.54 248.633 1336.72 250.551C1395.33 259.376 1471.88 270.887 1524.11 238.657C1546.84 225.611 1558 205.659 1558 179.568Z"
fill="#0196D0"/>
</g>
<defs>
<clipPath id="clip0_0_3">
<rect width="1558" height="260" fill="white"/>
</clipPath>
</defs>
<g clip-path="url(#clip0_0_3)">
<path d="M1288.5 112.905H1159.75V58.4404H1262L1270 0L1074 0V260H1159.75V162.997H1296.95L1288.5 112.905Z" fill="#0196D0"/>
<path d="M1058.62 58.4404V0H789V58.4404H881.133V260H966.885V58.4404H1058.62Z" fill="#0196D0"/>
<path d="M521 179.102V0L454.973 15V161C454.973 181.124 452.084 193.146 443.5 202C434.916 211.257 419.318 214.5 400.5 214.5C381.022 214.5 366.744 210.854 357.5 202C348.916 193.548 346.357 175.721 346.357 156V0L280 15V175.48C280 208.08 290.234 229.412 309.712 241.486C329.19 253.56 358.903 260 400.5 260C440.447 260 470.159 253.56 490.297 241.486C510.766 229.412 521 208.483 521 179.102Z" fill="#0196D0"/>
<path d="M172.84 84.2813C172.84 97.7982 168.249 107.737 158.41 113.303C149.883 118.471 137.092 121.254 120.693 122.049V162.997C129.876 163.792 138.076 166.177 144.307 176.514L184.647 260H265L225.316 180.489C213.181 155.046 201.374 149.48 178.744 143.517C212.197 138.349 241.386 118.471 241.386 73.1499C241.386 53.2722 233.843 30.2141 218.756 17.8899C203.998 5.56575 183.991 0 159.394 0H120.693V48.5015H127.58C142.23 48.5015 153.6 51.4169 161.689 57.2477C169.233 62.8135 172.84 71.5596 172.84 84.2813ZM120.693 122.049C119.163 122.049 117.741 122.049 116.43 122.049H68.5457V48.5015H120.693V0H0V260H70.5137V162.997H110.526C113.806 162.997 117.741 162.997 120.693 162.997V122.049Z" fill="#0196D0"/>
<path d="M774 179.297C774 160.829 766.671 144.669 752.013 131.972C738.127 119.66 712.025 110.169 673.708 103.5C662.136 101.191 651.722 99.6523 643.235 97.3437C586.532 84.6467 594.632 52.7118 650.564 52.7118C680.651 52.7118 709.582 61.946 738.127 66.9478C742.37 67.7174 743.913 68.1021 744.298 68.1021L750.47 12.697C720.383 3.46282 684.895 0 654.036 0C616.619 0 587.689 6.54088 567.245 19.2379C546.801 31.9349 536 57.7137 536 82.3382C536 103.5 543.715 119.66 559.916 131.972C575.731 143.515 604.276 152.749 645.55 160.059C658.279 162.368 668.694 163.907 676.794 166.215C685.023 168.524 691.066 170.704 694.924 172.756C702.253 176.604 706.11 182.375 706.11 188.531C706.11 196.611 701.481 202.767 692.224 207C664.836 220.081 587.689 212.001 556.83 198.15L543.715 247.784C547.186 248.169 552.972 249.323 559.916 250.477C616.619 259.327 690.681 270.869 741.212 238.935C762.814 225.468 774 206.23 774 179.297Z" fill="#0196D0"/>
<path d="M1558 179.568C1558 160.383 1550.42 144.268 1535.67 131.99C1521.32 119.968 1494.34 110.631 1454.74 103.981C1442.38 101.679 1432.01 99.3764 1422.84 97.8416C1422.44 97.8416 1422.04 97.8416 1422.04 97.4579V112.422L1361.04 75.2038L1422.04 38.3692V52.9496C1424.7 52.9496 1427.49 52.9496 1430.41 52.9496C1461.51 52.9496 1491.42 62.5419 1521.32 67.5299C1525.31 67.9136 1526.9 67.9136 1527.3 67.9136L1533.68 12.6619C1502.98 3.83692 1465.9 0 1434 0C1395.33 0 1365.43 6.52277 1345.09 19.5683C1323.16 32.6139 1312 57.9376 1312 82.8776C1312 103.981 1320.37 120.096 1336.72 131.607C1353.46 143.885 1382.97 153.093 1425.23 160.383C1434 161.535 1441.18 162.686 1447.56 164.22L1448.36 150.791L1507.36 190.312L1445.57 224.844L1445.96 212.949C1409.68 215.635 1357.45 209.112 1333.53 197.985L1320.37 247.482C1323.56 248.249 1329.54 248.633 1336.72 250.551C1395.33 259.376 1471.88 270.887 1524.11 238.657C1546.84 225.611 1558 205.659 1558 179.568Z" fill="#0196D0"/>
</g>
<defs>
<clipPath id="clip0_0_3">
<rect width="1558" height="260" fill="white"/>
</clipPath>
</defs>
</svg>

(SVG size: 3.5 KiB before, 3.4 KiB after)

View File

@@ -14,12 +14,12 @@
use crate::utils::RustFSConfig;
use dioxus::logger::tracing::{debug, error, info};
use lazy_static::lazy_static;
use rust_embed::RustEmbed;
use sha2::{Digest, Sha256};
use std::error::Error;
use std::path::{Path, PathBuf};
use std::process::Command as StdCommand;
use std::sync::LazyLock;
use std::time::Duration;
use tokio::fs;
use tokio::fs::File;
@@ -31,15 +31,13 @@ use tokio::sync::{Mutex, mpsc};
#[folder = "$CARGO_MANIFEST_DIR/embedded-rustfs/"]
struct Asset;
// Use `lazy_static` to cache the checksum of embedded resources
lazy_static! {
static ref RUSTFS_HASH: Mutex<String> = {
let rustfs_file = if cfg!(windows) { "rustfs.exe" } else { "rustfs" };
let rustfs_data = Asset::get(rustfs_file).expect("RustFs binary not embedded");
let hash = hex::encode(Sha256::digest(&rustfs_data.data));
Mutex::new(hash)
};
}
// Use `LazyLock` to cache the checksum of embedded resources
static RUSTFS_HASH: LazyLock<Mutex<String>> = LazyLock::new(|| {
let rustfs_file = if cfg!(windows) { "rustfs.exe" } else { "rustfs" };
let rustfs_data = Asset::get(rustfs_file).expect("RustFs binary not embedded");
let hash = hex::encode(Sha256::digest(&rustfs_data.data));
Mutex::new(hash)
});
/// Service command
/// This enum represents the commands that can be sent to the service manager

41
crates/ahm/Cargo.toml Normal file
View File

@@ -0,0 +1,41 @@
[package]
name = "rustfs-ahm"
version.workspace = true
edition.workspace = true
authors = ["RustFS Team"]
license.workspace = true
description = "RustFS AHM (Automatic Health Management) Scanner"
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
documentation = "https://docs.rs/rustfs-ahm/latest/rustfs_ahm/"
keywords = ["RustFS", "AHM", "health-management", "scanner", "Minio"]
categories = ["web-programming", "development-tools", "filesystem"]
[dependencies]
rustfs-ecstore = { workspace = true }
rustfs-common = { workspace = true }
rustfs-filemeta = { workspace = true }
rustfs-madmin = { workspace = true }
rustfs-utils = { workspace = true }
tokio = { workspace = true, features = ["full"] }
tokio-util = { workspace = true }
tracing = { workspace = true }
serde = { workspace = true, features = ["derive"] }
serde_json = { workspace = true }
thiserror = { workspace = true }
bytes = { workspace = true }
time = { workspace = true, features = ["serde"] }
uuid = { workspace = true, features = ["v4", "serde"] }
anyhow = { workspace = true }
async-trait = { workspace = true }
futures = { workspace = true }
url = { workspace = true }
rustfs-lock = { workspace = true }
lazy_static = { workspace = true }
[dev-dependencies]
rmp-serde = { workspace = true }
tokio-test = { workspace = true }
serde_json = { workspace = true }

45
crates/ahm/src/error.rs Normal file
View File

@@ -0,0 +1,45 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use thiserror::Error;
#[derive(Debug, Error)]
pub enum Error {
#[error("I/O error: {0}")]
Io(#[from] std::io::Error),
#[error("Storage error: {0}")]
Storage(#[from] rustfs_ecstore::error::Error),
#[error("Configuration error: {0}")]
Config(String),
#[error("Scanner error: {0}")]
Scanner(String),
#[error("Metrics error: {0}")]
Metrics(String),
#[error(transparent)]
Other(#[from] anyhow::Error),
}
pub type Result<T, E = Error> = std::result::Result<T, E>;
// Implement conversion from ahm::Error to std::io::Error for use in main.rs
impl From<Error> for std::io::Error {
fn from(err: Error) -> Self {
std::io::Error::other(err)
}
}
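
Every variant above either wraps its source via `#[from]` or carries a message, so `?` propagates errors without manual mapping. A minimal sketch of a consumer, assuming the hypothetical function and path below (not actual RustFS code):

```rust
use rustfs_ahm::error::{Error, Result};
use std::path::Path;

// `?` converts std::io::Error into Error::Io automatically via #[from].
fn read_scanner_state(path: &Path) -> Result<Vec<u8>> {
    let bytes = std::fs::read(path)?; // io::Error -> Error::Io
    if bytes.is_empty() {
        // Message-carrying variants are constructed directly.
        return Err(Error::Scanner("empty scanner state".to_string()));
    }
    Ok(bytes)
}
```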

54
crates/ahm/src/lib.rs Normal file
View File

@@ -0,0 +1,54 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::OnceLock;
use tokio_util::sync::CancellationToken;
pub mod error;
pub mod scanner;
pub use error::{Error, Result};
pub use scanner::{
BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo, Scanner, ScannerMetrics, load_data_usage_from_backend,
store_data_usage_in_backend,
};
// Global cancellation token for AHM services (scanner and other background tasks)
static GLOBAL_AHM_SERVICES_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();
/// Initialize the global AHM services cancellation token
pub fn init_ahm_services_cancel_token(cancel_token: CancellationToken) -> Result<()> {
GLOBAL_AHM_SERVICES_CANCEL_TOKEN
.set(cancel_token)
.map_err(|_| Error::Config("AHM services cancel token already initialized".to_string()))
}
/// Get the global AHM services cancellation token
pub fn get_ahm_services_cancel_token() -> Option<&'static CancellationToken> {
GLOBAL_AHM_SERVICES_CANCEL_TOKEN.get()
}
/// Create and initialize the global AHM services cancellation token
pub fn create_ahm_services_cancel_token() -> CancellationToken {
let cancel_token = CancellationToken::new();
init_ahm_services_cancel_token(cancel_token.clone()).expect("AHM services cancel token already initialized");
cancel_token
}
/// Shutdown all AHM services gracefully
pub fn shutdown_ahm_services() {
if let Some(cancel_token) = GLOBAL_AHM_SERVICES_CANCEL_TOKEN.get() {
cancel_token.cancel();
}
}
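
The intended lifecycle of this token API is create once, clone into each background task, cancel on shutdown. A minimal sketch, assuming a tokio runtime and a hypothetical worker loop (not actual RustFS wiring):

```rust
use std::time::Duration;

#[tokio::main]
async fn main() {
    // Register the global token once at startup.
    let cancel_token = rustfs_ahm::create_ahm_services_cancel_token();

    // A background task that exits when the token is cancelled.
    let worker = tokio::spawn({
        let token = cancel_token.clone();
        async move {
            loop {
                tokio::select! {
                    _ = token.cancelled() => break,
                    _ = tokio::time::sleep(Duration::from_secs(60)) => {
                        // ... run one scanner cycle here (hypothetical) ...
                    }
                }
            }
        }
    });

    // On shutdown, cancel all AHM services and wait for the task to drain.
    rustfs_ahm::shutdown_ahm_services();
    let _ = worker.await;
}
```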

File diff suppressed because it is too large

View File

@@ -0,0 +1,671 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::{collections::HashMap, sync::Arc, time::SystemTime};
use rustfs_ecstore::{bucket::metadata_sys::get_replication_config, config::com::read_config, store::ECStore};
use rustfs_utils::path::SLASH_SEPARATOR;
use serde::{Deserialize, Serialize};
use tracing::{error, info, warn};
use crate::error::{Error, Result};
// Data usage storage constants
pub const DATA_USAGE_ROOT: &str = SLASH_SEPARATOR;
const DATA_USAGE_OBJ_NAME: &str = ".usage.json";
const DATA_USAGE_BLOOM_NAME: &str = ".bloomcycle.bin";
pub const DATA_USAGE_CACHE_NAME: &str = ".usage-cache.bin";
// Data usage storage paths
lazy_static::lazy_static! {
pub static ref DATA_USAGE_BUCKET: String = format!("{}{}{}",
rustfs_ecstore::disk::RUSTFS_META_BUCKET,
SLASH_SEPARATOR,
rustfs_ecstore::disk::BUCKET_META_PREFIX
);
pub static ref DATA_USAGE_OBJ_NAME_PATH: String = format!("{}{}{}",
rustfs_ecstore::disk::BUCKET_META_PREFIX,
SLASH_SEPARATOR,
DATA_USAGE_OBJ_NAME
);
pub static ref DATA_USAGE_BLOOM_NAME_PATH: String = format!("{}{}{}",
rustfs_ecstore::disk::BUCKET_META_PREFIX,
SLASH_SEPARATOR,
DATA_USAGE_BLOOM_NAME
);
}
/// Bucket target usage info provides replication statistics
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct BucketTargetUsageInfo {
pub replication_pending_size: u64,
pub replication_failed_size: u64,
pub replicated_size: u64,
pub replica_size: u64,
pub replication_pending_count: u64,
pub replication_failed_count: u64,
pub replicated_count: u64,
}
/// Bucket usage info provides bucket-level statistics
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct BucketUsageInfo {
pub size: u64,
// The following five fields, suffixed with V1, are kept for backward compatibility
// Total size for objects that have not yet been replicated
pub replication_pending_size_v1: u64,
// Total size for objects that have witnessed one or more failures and will be retried
pub replication_failed_size_v1: u64,
// Total size for objects that have been replicated to destination
pub replicated_size_v1: u64,
// Total number of objects pending replication
pub replication_pending_count_v1: u64,
// Total number of objects that failed replication
pub replication_failed_count_v1: u64,
pub objects_count: u64,
pub object_size_histogram: HashMap<String, u64>,
pub object_versions_histogram: HashMap<String, u64>,
pub versions_count: u64,
pub delete_markers_count: u64,
pub replica_size: u64,
pub replica_count: u64,
pub replication_info: HashMap<String, BucketTargetUsageInfo>,
}
/// DataUsageInfo represents data usage stats of the underlying storage
#[derive(Debug, Default, Clone, Serialize, Deserialize)]
pub struct DataUsageInfo {
/// Total capacity
pub total_capacity: u64,
/// Total used capacity
pub total_used_capacity: u64,
/// Total free capacity
pub total_free_capacity: u64,
/// LastUpdate is the timestamp of when the data usage info was last updated
pub last_update: Option<SystemTime>,
/// Objects total count across all buckets
pub objects_total_count: u64,
/// Versions total count across all buckets
pub versions_total_count: u64,
/// Delete markers total count across all buckets
pub delete_markers_total_count: u64,
/// Objects total size across all buckets
pub objects_total_size: u64,
/// Replication info across all buckets
pub replication_info: HashMap<String, BucketTargetUsageInfo>,
/// Total number of buckets in this cluster
pub buckets_count: u64,
/// Buckets usage info provides following information across all buckets
pub buckets_usage: HashMap<String, BucketUsageInfo>,
/// Deprecated; kept here for backward compatibility reasons
pub bucket_sizes: HashMap<String, u64>,
}
/// Size summary for a single object or group of objects
#[derive(Debug, Default, Clone)]
pub struct SizeSummary {
/// Total size
pub total_size: usize,
/// Number of versions
pub versions: usize,
/// Number of delete markers
pub delete_markers: usize,
/// Replicated size
pub replicated_size: usize,
/// Replicated count
pub replicated_count: usize,
/// Pending size
pub pending_size: usize,
/// Failed size
pub failed_size: usize,
/// Replica size
pub replica_size: usize,
/// Replica count
pub replica_count: usize,
/// Pending count
pub pending_count: usize,
/// Failed count
pub failed_count: usize,
/// Replication target stats
pub repl_target_stats: HashMap<String, ReplTargetSizeSummary>,
}
/// Replication target size summary
#[derive(Debug, Default, Clone)]
pub struct ReplTargetSizeSummary {
/// Replicated size
pub replicated_size: usize,
/// Replicated count
pub replicated_count: usize,
/// Pending size
pub pending_size: usize,
/// Failed size
pub failed_size: usize,
/// Pending count
pub pending_count: usize,
/// Failed count
pub failed_count: usize,
}
impl DataUsageInfo {
/// Create a new DataUsageInfo
pub fn new() -> Self {
Self::default()
}
/// Add object metadata to data usage statistics
pub fn add_object(&mut self, object_path: &str, meta_object: &rustfs_filemeta::MetaObject) {
// This method is kept for backward compatibility
// For accurate version counting, use add_object_from_file_meta instead
let bucket_name = match self.extract_bucket_from_path(object_path) {
Ok(name) => name,
Err(_) => return,
};
// Update bucket statistics
if let Some(bucket_usage) = self.buckets_usage.get_mut(&bucket_name) {
bucket_usage.size += meta_object.size as u64;
bucket_usage.objects_count += 1;
bucket_usage.versions_count += 1; // Simplified: assume 1 version per object
// Update size histogram
let total_size = meta_object.size as u64;
let size_ranges = [
("0-1KB", 0, 1024),
("1KB-1MB", 1024, 1024 * 1024),
("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
("1GB+", 1024 * 1024 * 1024, u64::MAX),
];
for (range_name, min_size, max_size) in size_ranges {
if total_size >= min_size && total_size < max_size {
*bucket_usage.object_size_histogram.entry(range_name.to_string()).or_insert(0) += 1;
break;
}
}
// Update version histogram (simplified - count as single version)
*bucket_usage
.object_versions_histogram
.entry("SINGLE_VERSION".to_string())
.or_insert(0) += 1;
} else {
// Create new bucket usage
let mut bucket_usage = BucketUsageInfo {
size: meta_object.size as u64,
objects_count: 1,
versions_count: 1,
..Default::default()
};
bucket_usage.object_size_histogram.insert("0-1KB".to_string(), 1);
bucket_usage.object_versions_histogram.insert("SINGLE_VERSION".to_string(), 1);
self.buckets_usage.insert(bucket_name, bucket_usage);
// Keep buckets_count consistent when a new bucket is created
self.buckets_count = self.buckets_usage.len() as u64;
}
// Update global statistics
self.objects_total_size += meta_object.size as u64;
self.objects_total_count += 1;
self.versions_total_count += 1;
}
/// Add object from FileMeta for accurate version counting
pub fn add_object_from_file_meta(&mut self, object_path: &str, file_meta: &rustfs_filemeta::FileMeta) {
let bucket_name = match self.extract_bucket_from_path(object_path) {
Ok(name) => name,
Err(_) => return,
};
// Calculate accurate statistics from all versions
let mut total_size = 0u64;
let mut versions_count = 0u64;
let mut delete_markers_count = 0u64;
let mut latest_object_size = 0u64;
// Process all versions to get accurate counts
for version in &file_meta.versions {
match rustfs_filemeta::FileMetaVersion::try_from(version.clone()) {
Ok(ver) => {
if let Some(obj) = ver.object {
total_size += obj.size as u64;
versions_count += 1;
latest_object_size = obj.size as u64; // Keep track of latest object size
} else if ver.delete_marker.is_some() {
delete_markers_count += 1;
}
}
Err(_) => {
// Skip invalid versions
continue;
}
}
}
// Update bucket statistics
if let Some(bucket_usage) = self.buckets_usage.get_mut(&bucket_name) {
bucket_usage.size += total_size;
bucket_usage.objects_count += 1;
bucket_usage.versions_count += versions_count;
bucket_usage.delete_markers_count += delete_markers_count;
// Update size histogram based on latest object size
let size_ranges = [
("0-1KB", 0, 1024),
("1KB-1MB", 1024, 1024 * 1024),
("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
("1GB+", 1024 * 1024 * 1024, u64::MAX),
];
for (range_name, min_size, max_size) in size_ranges {
if latest_object_size >= min_size && latest_object_size < max_size {
*bucket_usage.object_size_histogram.entry(range_name.to_string()).or_insert(0) += 1;
break;
}
}
// Update version histogram based on actual version count
let version_ranges = [
("1", 1, 1),
("2-5", 2, 5),
("6-10", 6, 10),
("11-50", 11, 50),
("51-100", 51, 100),
("100+", 101, usize::MAX),
];
for (range_name, min_versions, max_versions) in version_ranges {
if versions_count as usize >= min_versions && versions_count as usize <= max_versions {
*bucket_usage
.object_versions_histogram
.entry(range_name.to_string())
.or_insert(0) += 1;
break;
}
}
} else {
// Create new bucket usage
let mut bucket_usage = BucketUsageInfo {
size: total_size,
objects_count: 1,
versions_count,
delete_markers_count,
..Default::default()
};
// Set size histogram
let size_ranges = [
("0-1KB", 0, 1024),
("1KB-1MB", 1024, 1024 * 1024),
("1MB-10MB", 1024 * 1024, 10 * 1024 * 1024),
("10MB-100MB", 10 * 1024 * 1024, 100 * 1024 * 1024),
("100MB-1GB", 100 * 1024 * 1024, 1024 * 1024 * 1024),
("1GB+", 1024 * 1024 * 1024, u64::MAX),
];
for (range_name, min_size, max_size) in size_ranges {
if latest_object_size >= min_size && latest_object_size < max_size {
bucket_usage.object_size_histogram.insert(range_name.to_string(), 1);
break;
}
}
// Set version histogram
let version_ranges = [
("1", 1, 1),
("2-5", 2, 5),
("6-10", 6, 10),
("11-50", 11, 50),
("51-100", 51, 100),
("100+", 101, usize::MAX),
];
for (range_name, min_versions, max_versions) in version_ranges {
if versions_count as usize >= min_versions && versions_count as usize <= max_versions {
bucket_usage.object_versions_histogram.insert(range_name.to_string(), 1);
break;
}
}
self.buckets_usage.insert(bucket_name, bucket_usage);
// Update buckets count when adding new bucket
self.buckets_count = self.buckets_usage.len() as u64;
}
// Update global statistics
self.objects_total_size += total_size;
self.objects_total_count += 1;
self.versions_total_count += versions_count;
self.delete_markers_total_count += delete_markers_count;
}
/// Extract bucket name from object path
fn extract_bucket_from_path(&self, object_path: &str) -> Result<String> {
// `split` always yields at least one element, so check for an empty
// bucket component rather than an empty vector
let bucket = object_path.split('/').next().unwrap_or_default();
if bucket.is_empty() {
return Err(Error::Scanner("Invalid object path: empty".to_string()));
}
Ok(bucket.to_string())
}
/// Update capacity information
pub fn update_capacity(&mut self, total: u64, used: u64, free: u64) {
self.total_capacity = total;
self.total_used_capacity = used;
self.total_free_capacity = free;
self.last_update = Some(SystemTime::now());
}
/// Add bucket usage info
pub fn add_bucket_usage(&mut self, bucket: String, usage: BucketUsageInfo) {
self.buckets_usage.insert(bucket.clone(), usage);
self.buckets_count = self.buckets_usage.len() as u64;
self.last_update = Some(SystemTime::now());
}
/// Get bucket usage info
pub fn get_bucket_usage(&self, bucket: &str) -> Option<&BucketUsageInfo> {
self.buckets_usage.get(bucket)
}
/// Calculate total statistics from all buckets
pub fn calculate_totals(&mut self) {
self.objects_total_count = 0;
self.versions_total_count = 0;
self.delete_markers_total_count = 0;
self.objects_total_size = 0;
for usage in self.buckets_usage.values() {
self.objects_total_count += usage.objects_count;
self.versions_total_count += usage.versions_count;
self.delete_markers_total_count += usage.delete_markers_count;
self.objects_total_size += usage.size;
}
}
/// Merge another DataUsageInfo into this one
pub fn merge(&mut self, other: &DataUsageInfo) {
// Merge bucket usage
for (bucket, usage) in &other.buckets_usage {
if let Some(existing) = self.buckets_usage.get_mut(bucket) {
existing.merge(usage);
} else {
self.buckets_usage.insert(bucket.clone(), usage.clone());
}
}
// Recalculate totals
self.calculate_totals();
// Ensure buckets_count stays consistent with buckets_usage
self.buckets_count = self.buckets_usage.len() as u64;
// Update last update time
if let Some(other_update) = other.last_update {
if self.last_update.is_none() || other_update > self.last_update.unwrap() {
self.last_update = Some(other_update);
}
}
}
}
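// A minimal usage sketch (illustrative only, not part of the stored format):
// record capacity, register a bucket, then merge a second snapshot. Every
// name below comes from the types and methods defined in this file.
#[allow(dead_code)]
fn data_usage_info_sketch() {
    let mut info = DataUsageInfo::new();
    info.update_capacity(10_000_000, 4_000_000, 6_000_000);
    let bucket = BucketUsageInfo {
        size: 2048,
        objects_count: 2,
        versions_count: 2,
        ..Default::default()
    };
    info.add_bucket_usage("my-bucket".to_string(), bucket);
    // merge() recalculates totals and keeps buckets_count in sync
    info.merge(&DataUsageInfo::new());
    assert_eq!(info.buckets_count, 1);
    assert_eq!(info.objects_total_count, 2);
}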
impl BucketUsageInfo {
/// Create a new BucketUsageInfo
pub fn new() -> Self {
Self::default()
}
/// Add size summary to this bucket usage
pub fn add_size_summary(&mut self, summary: &SizeSummary) {
self.size += summary.total_size as u64;
self.versions_count += summary.versions as u64;
self.delete_markers_count += summary.delete_markers as u64;
self.replica_size += summary.replica_size as u64;
self.replica_count += summary.replica_count as u64;
}
/// Merge another BucketUsageInfo into this one
pub fn merge(&mut self, other: &BucketUsageInfo) {
self.size += other.size;
self.objects_count += other.objects_count;
self.versions_count += other.versions_count;
self.delete_markers_count += other.delete_markers_count;
self.replica_size += other.replica_size;
self.replica_count += other.replica_count;
// Merge histograms
for (key, value) in &other.object_size_histogram {
*self.object_size_histogram.entry(key.clone()).or_insert(0) += value;
}
for (key, value) in &other.object_versions_histogram {
*self.object_versions_histogram.entry(key.clone()).or_insert(0) += value;
}
// Merge replication info
for (target, info) in &other.replication_info {
let entry = self.replication_info.entry(target.clone()).or_default();
entry.replicated_size += info.replicated_size;
entry.replica_size += info.replica_size;
entry.replication_pending_size += info.replication_pending_size;
entry.replication_failed_size += info.replication_failed_size;
entry.replication_pending_count += info.replication_pending_count;
entry.replication_failed_count += info.replication_failed_count;
entry.replicated_count += info.replicated_count;
}
// Merge backward compatibility fields
self.replication_pending_size_v1 += other.replication_pending_size_v1;
self.replication_failed_size_v1 += other.replication_failed_size_v1;
self.replicated_size_v1 += other.replicated_size_v1;
self.replication_pending_count_v1 += other.replication_pending_count_v1;
self.replication_failed_count_v1 += other.replication_failed_count_v1;
}
}
impl SizeSummary {
/// Create a new SizeSummary
pub fn new() -> Self {
Self::default()
}
/// Add another SizeSummary to this one
pub fn add(&mut self, other: &SizeSummary) {
self.total_size += other.total_size;
self.versions += other.versions;
self.delete_markers += other.delete_markers;
self.replicated_size += other.replicated_size;
self.replicated_count += other.replicated_count;
self.pending_size += other.pending_size;
self.failed_size += other.failed_size;
self.replica_size += other.replica_size;
self.replica_count += other.replica_count;
self.pending_count += other.pending_count;
self.failed_count += other.failed_count;
// Merge replication target stats
for (target, stats) in &other.repl_target_stats {
let entry = self.repl_target_stats.entry(target.clone()).or_default();
entry.replicated_size += stats.replicated_size;
entry.replicated_count += stats.replicated_count;
entry.pending_size += stats.pending_size;
entry.failed_size += stats.failed_size;
entry.pending_count += stats.pending_count;
entry.failed_count += stats.failed_count;
}
}
}
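// Sketch: fold scanner-produced SizeSummary values into a bucket's usage
// stats. This mirrors how per-object summaries are expected to roll up;
// purely illustrative, using only the methods defined above.
#[allow(dead_code)]
fn roll_up_summaries(bucket_usage: &mut BucketUsageInfo, summaries: &[SizeSummary]) {
    // Accumulate all per-object summaries into one total
    let mut total = SizeSummary::new();
    for summary in summaries {
        total.add(summary);
    }
    // Apply the aggregate to the bucket-level counters
    bucket_usage.add_size_summary(&total);
}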
/// Store data usage info to backend storage
pub async fn store_data_usage_in_backend(data_usage_info: DataUsageInfo, store: Arc<ECStore>) -> Result<()> {
let data =
serde_json::to_vec(&data_usage_info).map_err(|e| Error::Config(format!("Failed to serialize data usage info: {e}")))?;
// Save to backend using the same mechanism as original code
rustfs_ecstore::config::com::save_config(store, &DATA_USAGE_OBJ_NAME_PATH, data)
.await
.map_err(Error::Storage)?;
Ok(())
}
/// Load data usage info from backend storage
pub async fn load_data_usage_from_backend(store: Arc<ECStore>) -> Result<DataUsageInfo> {
let buf = match read_config(store, &DATA_USAGE_OBJ_NAME_PATH).await {
Ok(data) => data,
Err(e) => {
if e == rustfs_ecstore::error::Error::ConfigNotFound {
// No stats have been persisted yet; start from an empty default
return Ok(DataUsageInfo::default());
}
error!("Failed to read data usage info from backend: {}", e);
return Err(Error::Storage(e));
}
};
let mut data_usage_info: DataUsageInfo =
serde_json::from_slice(&buf).map_err(|e| Error::Config(format!("Failed to deserialize data usage info: {e}")))?;
warn!("Loaded data usage info from backend {:?}", &data_usage_info);
// Handle backward compatibility like original code
if data_usage_info.buckets_usage.is_empty() {
data_usage_info.buckets_usage = data_usage_info
.bucket_sizes
.iter()
.map(|(bucket, &size)| {
(
bucket.clone(),
BucketUsageInfo {
size,
..Default::default()
},
)
})
.collect();
}
if data_usage_info.bucket_sizes.is_empty() {
data_usage_info.bucket_sizes = data_usage_info
.buckets_usage
.iter()
.map(|(bucket, bui)| (bucket.clone(), bui.size))
.collect();
}
for (bucket, bui) in &data_usage_info.buckets_usage {
if bui.replicated_size_v1 > 0
|| bui.replication_failed_count_v1 > 0
|| bui.replication_failed_size_v1 > 0
|| bui.replication_pending_count_v1 > 0
{
if let Ok((cfg, _)) = get_replication_config(bucket).await {
if !cfg.role.is_empty() {
data_usage_info.replication_info.insert(
cfg.role.clone(),
BucketTargetUsageInfo {
replication_failed_size: bui.replication_failed_size_v1,
replication_failed_count: bui.replication_failed_count_v1,
replicated_size: bui.replicated_size_v1,
replication_pending_count: bui.replication_pending_count_v1,
replication_pending_size: bui.replication_pending_size_v1,
..Default::default()
},
);
}
}
}
}
Ok(data_usage_info)
}
/// Example function showing how to use AHM data usage functionality
/// This demonstrates the integration pattern for DataUsageInfoHandler
pub async fn example_data_usage_integration() -> Result<()> {
// Get the global storage instance
let Some(store) = rustfs_ecstore::new_object_layer_fn() else {
return Err(Error::Config("Storage not initialized".to_string()));
};
// Load data usage from backend (this replaces the original load_data_usage_from_backend)
let data_usage = load_data_usage_from_backend(store).await?;
info!(
"Loaded data usage info: {} buckets, {} total objects",
data_usage.buckets_count, data_usage.objects_total_count
);
// Example: Store updated data usage back to backend
// This would typically be called by the scanner after collecting new statistics
// store_data_usage_in_backend(data_usage, store).await?;
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_data_usage_info_creation() {
let mut info = DataUsageInfo::new();
info.update_capacity(1000, 500, 500);
assert_eq!(info.total_capacity, 1000);
assert_eq!(info.total_used_capacity, 500);
assert_eq!(info.total_free_capacity, 500);
assert!(info.last_update.is_some());
}
#[test]
fn test_bucket_usage_info_merge() {
let mut usage1 = BucketUsageInfo::new();
usage1.size = 100;
usage1.objects_count = 10;
usage1.versions_count = 5;
let mut usage2 = BucketUsageInfo::new();
usage2.size = 200;
usage2.objects_count = 20;
usage2.versions_count = 10;
usage1.merge(&usage2);
assert_eq!(usage1.size, 300);
assert_eq!(usage1.objects_count, 30);
assert_eq!(usage1.versions_count, 15);
}
#[test]
fn test_size_summary_add() {
let mut summary1 = SizeSummary::new();
summary1.total_size = 100;
summary1.versions = 5;
let mut summary2 = SizeSummary::new();
summary2.total_size = 200;
summary2.versions = 10;
summary1.add(&summary2);
assert_eq!(summary1.total_size, 300);
assert_eq!(summary1.versions, 15);
}
}

View File

@@ -0,0 +1,277 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
/// Size interval for object size histogram
#[derive(Debug, Clone)]
pub struct SizeInterval {
pub start: u64,
pub end: u64,
pub name: &'static str,
}
/// Version interval for object versions histogram
#[derive(Debug, Clone)]
pub struct VersionInterval {
pub start: u64,
pub end: u64,
pub name: &'static str,
}
/// Object size histogram intervals
pub const OBJECTS_HISTOGRAM_INTERVALS: &[SizeInterval] = &[
SizeInterval {
start: 0,
end: 1024 - 1,
name: "LESS_THAN_1_KiB",
},
SizeInterval {
start: 1024,
end: 1024 * 1024 - 1,
name: "1_KiB_TO_1_MiB",
},
SizeInterval {
start: 1024 * 1024,
end: 10 * 1024 * 1024 - 1,
name: "1_MiB_TO_10_MiB",
},
SizeInterval {
start: 10 * 1024 * 1024,
end: 64 * 1024 * 1024 - 1,
name: "10_MiB_TO_64_MiB",
},
SizeInterval {
start: 64 * 1024 * 1024,
end: 128 * 1024 * 1024 - 1,
name: "64_MiB_TO_128_MiB",
},
SizeInterval {
start: 128 * 1024 * 1024,
end: 512 * 1024 * 1024 - 1,
name: "128_MiB_TO_512_MiB",
},
SizeInterval {
start: 512 * 1024 * 1024,
end: u64::MAX,
name: "MORE_THAN_512_MiB",
},
];
/// Object version count histogram intervals
pub const OBJECTS_VERSION_COUNT_INTERVALS: &[VersionInterval] = &[
VersionInterval {
start: 1,
end: 1,
name: "1_VERSION",
},
VersionInterval {
start: 2,
end: 10,
name: "2_TO_10_VERSIONS",
},
VersionInterval {
start: 11,
end: 100,
name: "11_TO_100_VERSIONS",
},
VersionInterval {
start: 101,
end: 1000,
name: "101_TO_1000_VERSIONS",
},
VersionInterval {
start: 1001,
end: u64::MAX,
name: "MORE_THAN_1000_VERSIONS",
},
];
/// Size histogram for object size distribution
#[derive(Debug, Clone, Default)]
pub struct SizeHistogram {
counts: Vec<u64>,
}
/// Versions histogram for object version count distribution
#[derive(Debug, Clone, Default)]
pub struct VersionsHistogram {
counts: Vec<u64>,
}
impl SizeHistogram {
/// Create a new size histogram
pub fn new() -> Self {
Self {
counts: vec![0; OBJECTS_HISTOGRAM_INTERVALS.len()],
}
}
/// Add a size to the histogram
pub fn add(&mut self, size: u64) {
for (idx, interval) in OBJECTS_HISTOGRAM_INTERVALS.iter().enumerate() {
if size >= interval.start && size <= interval.end {
self.counts[idx] += 1;
break;
}
}
}
/// Get the histogram as a map
pub fn to_map(&self) -> HashMap<String, u64> {
let mut result = HashMap::new();
for (idx, count) in self.counts.iter().enumerate() {
let interval = &OBJECTS_HISTOGRAM_INTERVALS[idx];
result.insert(interval.name.to_string(), *count);
}
result
}
/// Merge another histogram into this one
pub fn merge(&mut self, other: &SizeHistogram) {
for (idx, count) in other.counts.iter().enumerate() {
self.counts[idx] += count;
}
}
/// Get total count
pub fn total_count(&self) -> u64 {
self.counts.iter().sum()
}
/// Reset the histogram
pub fn reset(&mut self) {
for count in &mut self.counts {
*count = 0;
}
}
}
impl VersionsHistogram {
/// Create a new versions histogram
pub fn new() -> Self {
Self {
counts: vec![0; OBJECTS_VERSION_COUNT_INTERVALS.len()],
}
}
/// Add a version count to the histogram
pub fn add(&mut self, versions: u64) {
for (idx, interval) in OBJECTS_VERSION_COUNT_INTERVALS.iter().enumerate() {
if versions >= interval.start && versions <= interval.end {
self.counts[idx] += 1;
break;
}
}
}
/// Get the histogram as a map
pub fn to_map(&self) -> HashMap<String, u64> {
let mut result = HashMap::new();
for (idx, count) in self.counts.iter().enumerate() {
let interval = &OBJECTS_VERSION_COUNT_INTERVALS[idx];
result.insert(interval.name.to_string(), *count);
}
result
}
/// Merge another histogram into this one
pub fn merge(&mut self, other: &VersionsHistogram) {
for (idx, count) in other.counts.iter().enumerate() {
self.counts[idx] += count;
}
}
/// Get total count
pub fn total_count(&self) -> u64 {
self.counts.iter().sum()
}
/// Reset the histogram
pub fn reset(&mut self) {
for count in &mut self.counts {
*count = 0;
}
}
}
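// Sketch: the typical flow for both histograms — feed raw observations, then
// export named buckets for reporting. Illustrative only; the interval names
// come from the constants defined above.
#[allow(dead_code)]
fn histogram_sketch() {
    let mut sizes = SizeHistogram::new();
    sizes.add(512); // falls in LESS_THAN_1_KiB
    sizes.add(2 * 1024 * 1024); // falls in 1_MiB_TO_10_MiB
    let mut versions = VersionsHistogram::new();
    versions.add(3); // falls in 2_TO_10_VERSIONS
    // Export as maps, e.g. for BucketUsageInfo-style serialization
    let size_map = sizes.to_map();
    let version_map = versions.to_map();
    assert_eq!(size_map.get("LESS_THAN_1_KiB"), Some(&1));
    assert_eq!(version_map.get("2_TO_10_VERSIONS"), Some(&1));
}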
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_size_histogram() {
let mut histogram = SizeHistogram::new();
// Add some sizes
histogram.add(512); // LESS_THAN_1_KiB
histogram.add(1024); // 1_KiB_TO_1_MiB
histogram.add(1024 * 1024); // 1_MiB_TO_10_MiB
histogram.add(5 * 1024 * 1024); // 1_MiB_TO_10_MiB
let map = histogram.to_map();
assert_eq!(map.get("LESS_THAN_1_KiB"), Some(&1));
assert_eq!(map.get("1_KiB_TO_1_MiB"), Some(&1));
assert_eq!(map.get("1_MiB_TO_10_MiB"), Some(&2));
assert_eq!(map.get("10_MiB_TO_64_MiB"), Some(&0));
}
#[test]
fn test_versions_histogram() {
let mut histogram = VersionsHistogram::new();
// Add some version counts
histogram.add(1); // 1_VERSION
histogram.add(5); // 2_TO_10_VERSIONS
histogram.add(50); // 11_TO_100_VERSIONS
histogram.add(500); // 101_TO_1000_VERSIONS
let map = histogram.to_map();
assert_eq!(map.get("1_VERSION"), Some(&1));
assert_eq!(map.get("2_TO_10_VERSIONS"), Some(&1));
assert_eq!(map.get("11_TO_100_VERSIONS"), Some(&1));
assert_eq!(map.get("101_TO_1000_VERSIONS"), Some(&1));
}
#[test]
fn test_histogram_merge() {
let mut histogram1 = SizeHistogram::new();
histogram1.add(1024);
histogram1.add(1024 * 1024);
let mut histogram2 = SizeHistogram::new();
histogram2.add(1024);
histogram2.add(5 * 1024 * 1024);
histogram1.merge(&histogram2);
let map = histogram1.to_map();
assert_eq!(map.get("1_KiB_TO_1_MiB"), Some(&2)); // 1 from histogram1 + 1 from histogram2
assert_eq!(map.get("1_MiB_TO_10_MiB"), Some(&2)); // 1 from histogram1 + 1 from histogram2
}
#[test]
fn test_histogram_reset() {
let mut histogram = SizeHistogram::new();
histogram.add(1024);
histogram.add(1024 * 1024);
assert_eq!(histogram.total_count(), 2);
histogram.reset();
assert_eq!(histogram.total_count(), 0);
}
}

View File

@@ -0,0 +1,284 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::{
collections::HashMap,
sync::atomic::{AtomicU64, Ordering},
time::{Duration, SystemTime},
};
use serde::{Deserialize, Serialize};
use tracing::info;
/// Scanner metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct ScannerMetrics {
/// Total objects scanned since server start
pub objects_scanned: u64,
/// Total object versions scanned since server start
pub versions_scanned: u64,
/// Total directories scanned since server start
pub directories_scanned: u64,
/// Total bucket scans started since server start
pub bucket_scans_started: u64,
/// Total bucket scans finished since server start
pub bucket_scans_finished: u64,
/// Total objects with health issues found
pub objects_with_issues: u64,
/// Total heal tasks queued
pub heal_tasks_queued: u64,
/// Total heal tasks completed
pub heal_tasks_completed: u64,
/// Total heal tasks failed
pub heal_tasks_failed: u64,
/// Last scan activity time
pub last_activity: Option<SystemTime>,
/// Current scan cycle
pub current_cycle: u64,
/// Total scan cycles completed
pub total_cycles: u64,
/// Current scan duration
pub current_scan_duration: Option<Duration>,
/// Average scan duration
pub avg_scan_duration: Duration,
/// Objects scanned per second
pub objects_per_second: f64,
/// Buckets scanned per second
pub buckets_per_second: f64,
/// Storage metrics by bucket
pub bucket_metrics: HashMap<String, BucketMetrics>,
/// Disk metrics
pub disk_metrics: HashMap<String, DiskMetrics>,
}
/// Bucket-specific metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct BucketMetrics {
/// Bucket name
pub bucket: String,
/// Total objects in bucket
pub total_objects: u64,
/// Total size of objects in bucket (bytes)
pub total_size: u64,
/// Objects with health issues
pub objects_with_issues: u64,
/// Last scan time
pub last_scan_time: Option<SystemTime>,
/// Scan duration
pub scan_duration: Option<Duration>,
/// Heal tasks queued for this bucket
pub heal_tasks_queued: u64,
/// Heal tasks completed for this bucket
pub heal_tasks_completed: u64,
/// Heal tasks failed for this bucket
pub heal_tasks_failed: u64,
}
/// Disk-specific metrics
#[derive(Debug, Clone, Default, Serialize, Deserialize)]
pub struct DiskMetrics {
/// Disk path
pub disk_path: String,
/// Total disk space (bytes)
pub total_space: u64,
/// Used disk space (bytes)
pub used_space: u64,
/// Free disk space (bytes)
pub free_space: u64,
/// Objects scanned on this disk
pub objects_scanned: u64,
/// Objects with issues on this disk
pub objects_with_issues: u64,
/// Last scan time
pub last_scan_time: Option<SystemTime>,
/// Whether disk is online
pub is_online: bool,
/// Whether disk is being scanned
pub is_scanning: bool,
}
/// Thread-safe metrics collector
pub struct MetricsCollector {
/// Atomic counters for real-time metrics
objects_scanned: AtomicU64,
versions_scanned: AtomicU64,
directories_scanned: AtomicU64,
bucket_scans_started: AtomicU64,
bucket_scans_finished: AtomicU64,
objects_with_issues: AtomicU64,
heal_tasks_queued: AtomicU64,
heal_tasks_completed: AtomicU64,
heal_tasks_failed: AtomicU64,
current_cycle: AtomicU64,
total_cycles: AtomicU64,
}
impl MetricsCollector {
/// Create a new metrics collector
pub fn new() -> Self {
Self {
objects_scanned: AtomicU64::new(0),
versions_scanned: AtomicU64::new(0),
directories_scanned: AtomicU64::new(0),
bucket_scans_started: AtomicU64::new(0),
bucket_scans_finished: AtomicU64::new(0),
objects_with_issues: AtomicU64::new(0),
heal_tasks_queued: AtomicU64::new(0),
heal_tasks_completed: AtomicU64::new(0),
heal_tasks_failed: AtomicU64::new(0),
current_cycle: AtomicU64::new(0),
total_cycles: AtomicU64::new(0),
}
}
/// Increment objects scanned count
pub fn increment_objects_scanned(&self, count: u64) {
self.objects_scanned.fetch_add(count, Ordering::Relaxed);
}
/// Increment versions scanned count
pub fn increment_versions_scanned(&self, count: u64) {
self.versions_scanned.fetch_add(count, Ordering::Relaxed);
}
/// Increment directories scanned count
pub fn increment_directories_scanned(&self, count: u64) {
self.directories_scanned.fetch_add(count, Ordering::Relaxed);
}
/// Increment bucket scans started count
pub fn increment_bucket_scans_started(&self, count: u64) {
self.bucket_scans_started.fetch_add(count, Ordering::Relaxed);
}
/// Increment bucket scans finished count
pub fn increment_bucket_scans_finished(&self, count: u64) {
self.bucket_scans_finished.fetch_add(count, Ordering::Relaxed);
}
/// Increment objects with issues count
pub fn increment_objects_with_issues(&self, count: u64) {
self.objects_with_issues.fetch_add(count, Ordering::Relaxed);
}
/// Increment heal tasks queued count
pub fn increment_heal_tasks_queued(&self, count: u64) {
self.heal_tasks_queued.fetch_add(count, Ordering::Relaxed);
}
/// Increment heal tasks completed count
pub fn increment_heal_tasks_completed(&self, count: u64) {
self.heal_tasks_completed.fetch_add(count, Ordering::Relaxed);
}
/// Increment heal tasks failed count
pub fn increment_heal_tasks_failed(&self, count: u64) {
self.heal_tasks_failed.fetch_add(count, Ordering::Relaxed);
}
/// Set current cycle
pub fn set_current_cycle(&self, cycle: u64) {
self.current_cycle.store(cycle, Ordering::Relaxed);
}
/// Increment total cycles
pub fn increment_total_cycles(&self) {
self.total_cycles.fetch_add(1, Ordering::Relaxed);
}
/// Get current metrics snapshot
pub fn get_metrics(&self) -> ScannerMetrics {
ScannerMetrics {
objects_scanned: self.objects_scanned.load(Ordering::Relaxed),
versions_scanned: self.versions_scanned.load(Ordering::Relaxed),
directories_scanned: self.directories_scanned.load(Ordering::Relaxed),
bucket_scans_started: self.bucket_scans_started.load(Ordering::Relaxed),
bucket_scans_finished: self.bucket_scans_finished.load(Ordering::Relaxed),
objects_with_issues: self.objects_with_issues.load(Ordering::Relaxed),
heal_tasks_queued: self.heal_tasks_queued.load(Ordering::Relaxed),
heal_tasks_completed: self.heal_tasks_completed.load(Ordering::Relaxed),
heal_tasks_failed: self.heal_tasks_failed.load(Ordering::Relaxed),
last_activity: Some(SystemTime::now()),
current_cycle: self.current_cycle.load(Ordering::Relaxed),
total_cycles: self.total_cycles.load(Ordering::Relaxed),
current_scan_duration: None, // Will be set by scanner
avg_scan_duration: Duration::ZERO, // Will be calculated
objects_per_second: 0.0, // Will be calculated
buckets_per_second: 0.0, // Will be calculated
bucket_metrics: HashMap::new(), // Will be populated by scanner
disk_metrics: HashMap::new(), // Will be populated by scanner
}
}
/// Reset all metrics
pub fn reset(&self) {
self.objects_scanned.store(0, Ordering::Relaxed);
self.versions_scanned.store(0, Ordering::Relaxed);
self.directories_scanned.store(0, Ordering::Relaxed);
self.bucket_scans_started.store(0, Ordering::Relaxed);
self.bucket_scans_finished.store(0, Ordering::Relaxed);
self.objects_with_issues.store(0, Ordering::Relaxed);
self.heal_tasks_queued.store(0, Ordering::Relaxed);
self.heal_tasks_completed.store(0, Ordering::Relaxed);
self.heal_tasks_failed.store(0, Ordering::Relaxed);
self.current_cycle.store(0, Ordering::Relaxed);
self.total_cycles.store(0, Ordering::Relaxed);
info!("Scanner metrics reset");
}
}
impl Default for MetricsCollector {
fn default() -> Self {
Self::new()
}
}
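// Sketch: the collector is meant to be shared across scanner tasks behind an
// Arc; the atomic counters make concurrent increments safe without locks.
// Illustrative only — the tokio task wiring is an assumption, not scanner code.
#[allow(dead_code)]
async fn shared_collector_sketch() {
    use std::sync::Arc;
    let collector = Arc::new(MetricsCollector::new());
    let worker = {
        let collector = Arc::clone(&collector);
        tokio::spawn(async move {
            // Increments from any task are visible in later snapshots
            collector.increment_objects_scanned(100);
            collector.increment_bucket_scans_finished(1);
        })
    };
    let _ = worker.await;
    let snapshot = collector.get_metrics();
    assert_eq!(snapshot.objects_scanned, 100);
}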
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_metrics_collector_creation() {
let collector = MetricsCollector::new();
let metrics = collector.get_metrics();
assert_eq!(metrics.objects_scanned, 0);
assert_eq!(metrics.versions_scanned, 0);
}
#[test]
fn test_metrics_increment() {
let collector = MetricsCollector::new();
collector.increment_objects_scanned(10);
collector.increment_versions_scanned(5);
collector.increment_objects_with_issues(2);
let metrics = collector.get_metrics();
assert_eq!(metrics.objects_scanned, 10);
assert_eq!(metrics.versions_scanned, 5);
assert_eq!(metrics.objects_with_issues, 2);
}
#[test]
fn test_metrics_reset() {
let collector = MetricsCollector::new();
collector.increment_objects_scanned(10);
collector.reset();
let metrics = collector.get_metrics();
assert_eq!(metrics.objects_scanned, 0);
}
}

View File

@@ -11,3 +11,15 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod data_scanner;
pub mod data_usage;
pub mod histogram;
pub mod metrics;
// Re-export main types for convenience
pub use data_scanner::Scanner;
pub use data_usage::{
BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo, load_data_usage_from_backend, store_data_usage_in_backend,
};
pub use metrics::ScannerMetrics;
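// Downstream consumers are expected to use these re-exports directly, e.g.
// (the crate name `rustfs_ahm` is an assumption for illustration):
//
//     use rustfs_ahm::{Scanner, ScannerMetrics, load_data_usage_from_backend};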

View File

@@ -19,6 +19,10 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Application authentication and authorization for RustFS, providing secure access control and user management."
keywords = ["authentication", "authorization", "security", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "authentication"]
[dependencies]
base64-simd = { workspace = true }

View File

@@ -3,7 +3,7 @@
# RustFS AppAuth - Application Authentication
<p align="center">
<strong>Secure application authentication and authorization for RustFS object storage</strong>
<strong>Application-level authentication and authorization module for RustFS distributed object storage</strong>
</p>
<p align="center">
@@ -17,461 +17,21 @@
## 📖 Overview
**RustFS AppAuth** provides secure application authentication and authorization mechanisms for the [RustFS](https://rustfs.com) distributed object storage system. It implements modern cryptographic standards including RSA-based authentication, JWT tokens, and secure session management for application-level access control.
> **Note:** This is a security-critical submodule of RustFS that provides essential application authentication capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS AppAuth** provides application-level authentication and authorization capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
### 🔐 Authentication Methods
- **RSA Authentication**: Public-key cryptography for secure authentication
- **JWT Tokens**: JSON Web Token support for stateless authentication
- **API Keys**: Simple API key-based authentication
- **Session Management**: Secure session handling and lifecycle management
### 🛡️ Security Features
- **Cryptographic Signing**: RSA digital signatures for request validation
- **Token Encryption**: Encrypted token storage and transmission
- **Key Rotation**: Automatic key rotation and management
- **Audit Logging**: Comprehensive authentication event logging
### 🚀 Performance Features
- **Base64 Optimization**: High-performance base64 encoding/decoding
- **Token Caching**: Efficient token validation caching
- **Parallel Verification**: Concurrent authentication processing
- **Hardware Acceleration**: Leverage CPU crypto extensions
### 🔧 Integration Features
- **S3 Compatibility**: AWS S3-compatible authentication
- **Multi-Tenant**: Support for multiple application tenants
- **Permission Mapping**: Fine-grained permission assignment
- **External Integration**: LDAP, OAuth, and custom authentication providers
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
rustfs-appauth = "0.1.0"
```
## 🔧 Usage
### Basic Authentication Setup
```rust
use rustfs_appauth::{AppAuthenticator, AuthConfig, AuthMethod};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Configure authentication
let config = AuthConfig {
auth_method: AuthMethod::RSA,
key_size: 2048,
token_expiry: Duration::from_hours(24),
enable_caching: true,
audit_logging: true,
};
// Initialize authenticator
let authenticator = AppAuthenticator::new(config).await?;
// Generate application credentials
let app_credentials = authenticator.generate_app_credentials("my-app").await?;
println!("App ID: {}", app_credentials.app_id);
println!("Public Key: {}", app_credentials.public_key);
Ok(())
}
```
### RSA-Based Authentication
```rust
use rustfs_appauth::{RSAAuthenticator, AuthRequest, AuthResponse};
async fn rsa_authentication_example() -> Result<(), Box<dyn std::error::Error>> {
// Create RSA authenticator
let rsa_auth = RSAAuthenticator::new(2048).await?;
// Generate key pair for application
let (private_key, public_key) = rsa_auth.generate_keypair().await?;
// Register application
let app_id = rsa_auth.register_application("my-storage-app", &public_key).await?;
println!("Application registered with ID: {}", app_id);
// Create authentication request
let auth_request = AuthRequest {
app_id: app_id.clone(),
timestamp: chrono::Utc::now(),
request_data: b"GET /bucket/object".to_vec(),
};
// Sign request with private key
let signed_request = rsa_auth.sign_request(&auth_request, &private_key).await?;
// Verify authentication
let auth_response = rsa_auth.authenticate(&signed_request).await?;
match auth_response {
AuthResponse::Success { session_token, permissions } => {
println!("Authentication successful!");
println!("Session token: {}", session_token);
println!("Permissions: {:?}", permissions);
}
AuthResponse::Failed { reason } => {
println!("Authentication failed: {}", reason);
}
}
Ok(())
}
```
### JWT Token Management
```rust
use rustfs_appauth::{JWTManager, TokenClaims, TokenRequest};
async fn jwt_management_example() -> Result<(), Box<dyn std::error::Error>> {
// Create JWT manager
let jwt_manager = JWTManager::new("your-secret-key").await?;
// Create token claims
let claims = TokenClaims {
app_id: "my-app".to_string(),
user_id: Some("user123".to_string()),
permissions: vec![
"read:bucket".to_string(),
"write:bucket".to_string(),
],
expires_at: chrono::Utc::now() + chrono::Duration::hours(24),
issued_at: chrono::Utc::now(),
};
// Generate JWT token
let token = jwt_manager.generate_token(&claims).await?;
println!("Generated token: {}", token);
// Validate token
let validation_result = jwt_manager.validate_token(&token).await?;
match validation_result {
Ok(validated_claims) => {
println!("Token valid for app: {}", validated_claims.app_id);
println!("Permissions: {:?}", validated_claims.permissions);
}
Err(e) => {
println!("Token validation failed: {}", e);
}
}
// Refresh token
let refreshed_token = jwt_manager.refresh_token(&token).await?;
println!("Refreshed token: {}", refreshed_token);
Ok(())
}
```
### API Key Authentication
```rust
use rustfs_appauth::{APIKeyManager, APIKeyConfig, KeyPermissions};
async fn api_key_authentication() -> Result<(), Box<dyn std::error::Error>> {
let api_key_manager = APIKeyManager::new().await?;
// Create API key configuration
let key_config = APIKeyConfig {
app_name: "storage-client".to_string(),
permissions: KeyPermissions {
read_buckets: vec!["public-*".to_string()],
write_buckets: vec!["uploads".to_string()],
admin_access: false,
},
expires_at: Some(chrono::Utc::now() + chrono::Duration::days(90)),
rate_limit: Some(1000), // requests per hour
};
// Generate API key
let api_key = api_key_manager.generate_key(&key_config).await?;
println!("Generated API key: {}", api_key.key);
println!("Key ID: {}", api_key.key_id);
// Authenticate with API key
let auth_result = api_key_manager.authenticate(&api_key.key).await?;
if auth_result.is_valid {
println!("API key authentication successful");
println!("Rate limit remaining: {}", auth_result.rate_limit_remaining);
}
// List API keys for application
let keys = api_key_manager.list_keys("storage-client").await?;
for key in keys {
println!("Key: {} - Status: {} - Expires: {:?}",
key.key_id, key.status, key.expires_at);
}
// Revoke API key
api_key_manager.revoke_key(&api_key.key_id).await?;
println!("API key revoked successfully");
Ok(())
}
```
### Session Management
```rust
use rustfs_appauth::{SessionManager, SessionConfig, SessionInfo};
async fn session_management_example() -> Result<(), Box<dyn std::error::Error>> {
// Configure session management
let session_config = SessionConfig {
session_timeout: Duration::from_hours(8),
max_sessions_per_app: 10,
require_refresh: true,
secure_cookies: true,
};
let session_manager = SessionManager::new(session_config).await?;
// Create new session
let session_info = SessionInfo {
app_id: "web-app".to_string(),
user_id: Some("user456".to_string()),
ip_address: "192.168.1.100".to_string(),
user_agent: "RustFS-Client/1.0".to_string(),
};
let session = session_manager.create_session(&session_info).await?;
println!("Session created: {}", session.session_id);
// Validate session
let validation = session_manager.validate_session(&session.session_id).await?;
if validation.is_valid {
println!("Session is valid, expires at: {}", validation.expires_at);
}
// Refresh session
session_manager.refresh_session(&session.session_id).await?;
println!("Session refreshed");
// Get active sessions
let active_sessions = session_manager.get_active_sessions("web-app").await?;
println!("Active sessions: {}", active_sessions.len());
// Terminate session
session_manager.terminate_session(&session.session_id).await?;
println!("Session terminated");
Ok(())
}
```
### Multi-Tenant Authentication
```rust
use rustfs_appauth::{MultiTenantAuth, TenantConfig, TenantPermissions};
async fn multi_tenant_auth_example() -> Result<(), Box<dyn std::error::Error>> {
let multi_tenant_auth = MultiTenantAuth::new().await?;
// Create tenant configurations
let tenant1_config = TenantConfig {
tenant_id: "company-a".to_string(),
name: "Company A".to_string(),
permissions: TenantPermissions {
max_buckets: 100,
max_storage_gb: 1000,
allowed_regions: vec!["us-east-1".to_string(), "us-west-2".to_string()],
},
auth_methods: vec![AuthMethod::RSA, AuthMethod::JWT],
};
let tenant2_config = TenantConfig {
tenant_id: "company-b".to_string(),
name: "Company B".to_string(),
permissions: TenantPermissions {
max_buckets: 50,
max_storage_gb: 500,
allowed_regions: vec!["eu-west-1".to_string()],
},
auth_methods: vec![AuthMethod::APIKey],
};
// Register tenants
multi_tenant_auth.register_tenant(&tenant1_config).await?;
multi_tenant_auth.register_tenant(&tenant2_config).await?;
// Authenticate application for specific tenant
let auth_request = TenantAuthRequest {
tenant_id: "company-a".to_string(),
app_id: "app-1".to_string(),
credentials: AuthCredentials::RSA {
signature: "signed-data".to_string(),
public_key: "public-key-data".to_string(),
},
};
let auth_result = multi_tenant_auth.authenticate(&auth_request).await?;
if auth_result.is_authenticated {
println!("Multi-tenant authentication successful");
println!("Tenant: {}", auth_result.tenant_id);
println!("Permissions: {:?}", auth_result.permissions);
}
Ok(())
}
```
### Authentication Middleware
```rust
use rustfs_appauth::{AuthMiddleware, AuthContext, MiddlewareConfig};
use axum::{Router, middleware, Extension};
async fn setup_auth_middleware() -> Result<Router, Box<dyn std::error::Error>> {
// Configure authentication middleware
let middleware_config = MiddlewareConfig {
skip_paths: vec!["/health".to_string(), "/metrics".to_string()],
require_auth: true,
audit_requests: true,
};
let auth_middleware = AuthMiddleware::new(middleware_config).await?;
// Create router with authentication middleware
let app = Router::new()
.route("/api/buckets", axum::routing::get(list_buckets))
.route("/api/objects", axum::routing::post(upload_object))
.layer(middleware::from_fn(auth_middleware.authenticate))
.layer(Extension(auth_middleware));
Ok(app)
}
async fn list_buckets(
Extension(auth_context): Extension<AuthContext>,
) -> Result<String, Box<dyn std::error::Error>> {
// Use authentication context
println!("Authenticated app: {}", auth_context.app_id);
println!("Permissions: {:?}", auth_context.permissions);
// Your bucket listing logic here
Ok("Bucket list".to_string())
}
```
## 🏗️ Architecture
### AppAuth Architecture
```
AppAuth Architecture:
┌─────────────────────────────────────────────────────────────┐
│ Authentication API │
├─────────────────────────────────────────────────────────────┤
│ RSA Auth │ JWT Tokens │ API Keys │ Sessions │
├─────────────────────────────────────────────────────────────┤
│ Cryptographic Operations │
├─────────────────────────────────────────────────────────────┤
│ Signing/ │ Token │ Key │ Session │
│ Verification │ Management │ Management │ Storage │
├─────────────────────────────────────────────────────────────┤
│ Security Infrastructure │
└─────────────────────────────────────────────────────────────┘
```
### Authentication Methods
| Method | Security Level | Use Case | Performance |
|--------|----------------|----------|-------------|
| RSA | High | Enterprise applications | Medium |
| JWT | Medium-High | Web applications | High |
| API Key | Medium | Service-to-service | Very High |
| Session | Medium | Interactive applications | High |
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
cargo test
# Test RSA authentication
cargo test rsa_auth
# Test JWT tokens
cargo test jwt_tokens
# Test API key management
cargo test api_keys
# Test session management
cargo test sessions
# Integration tests
cargo test --test integration
```
## 📋 Requirements
- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Dependencies**: RSA cryptographic libraries
- **Security**: Secure key storage recommended
## 🌍 Related Projects
This module is part of the RustFS ecosystem:
- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS IAM](../iam) - Identity and access management
- [RustFS Signer](../signer) - Request signing
- [RustFS Crypto](../crypto) - Cryptographic operations
- JWT-based authentication with secure token management
- RBAC (Role-Based Access Control) for fine-grained permissions
- Multi-tenant application isolation and management
- OAuth 2.0 and OpenID Connect integration
- API key management and rotation
- Session management with configurable expiration
## 📚 Documentation
For comprehensive documentation, visit:
- [RustFS Documentation](https://docs.rustfs.com)
- [AppAuth API Reference](https://docs.rustfs.com/appauth/)
- [Security Guide](https://docs.rustfs.com/security/)
## 🔗 Links
- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.
---
<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>
<p align="center">
Made with 🔐 by the RustFS Team
</p>
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -23,14 +23,14 @@ use std::io::{Error, Result};
#[derive(Serialize, Deserialize, Debug, Default, Clone)]
pub struct Token {
pub name: String, // 应用 ID
pub expired: u64, // 到期时间 (UNIX 时间戳)
pub name: String, // Application ID
pub expired: u64, // Expiry time (UNIX timestamp)
}
// 公钥生成 Token
// [token] Token 对象
// [key] 公钥字符串
// 返回 base64 处理的加密字符串
/// Generates a token string by encrypting with the public key
/// [token] Token object
/// [key] Public key string
/// Returns the base64-encoded encrypted string
pub fn gencode(token: &Token, key: &str) -> Result<String> {
let data = serde_json::to_vec(token)?;
let public_key = RsaPublicKey::from_public_key_pem(key).map_err(Error::other)?;
@@ -38,10 +38,10 @@ pub fn gencode(token: &Token, key: &str) -> Result<String> {
Ok(base64_simd::URL_SAFE_NO_PAD.encode_to_string(&encrypted_data))
}
// 私钥解析 Token
// [token] base64 处理的加密字符串
// [key] 私钥字符串
// 返回 Token 对象
/// Parses a token string by decrypting with the private key
/// [token] Base64-encoded encrypted string
/// [key] Private key string
/// Returns the Token object
pub fn parse(token: &str, key: &str) -> Result<Token> {
let encrypted_data = base64_simd::URL_SAFE_NO_PAD
.decode_to_vec(token.as_bytes())

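// Usage sketch for the pair above (assuming the module path
// rustfs_appauth::token and PEM-encoded RSA keys; both are assumptions):
//
//     let token = Token { name: "my-app".into(), expired: 1_735_689_600 };
//     let encoded = gencode(&token, public_key_pem)?;
//     let decoded = parse(&encoded, private_key_pem)?;
//     assert_eq!(decoded.name, token.name);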
View File

@@ -19,11 +19,14 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Common utilities and data structures for RustFS, providing shared functionality across the project."
keywords = ["common", "utilities", "data-structures", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "data-structures"]
[lints]
workspace = true
[dependencies]
lazy_static.workspace = true
tokio.workspace = true
tonic = { workspace = true }

View File

@@ -3,7 +3,7 @@
# RustFS Common - Shared Components
<p align="center">
<strong>Common types, utilities, and shared components for RustFS distributed object storage</strong>
<strong>Shared components and common utilities module for RustFS distributed object storage</strong>
</p>
<p align="center">
@@ -17,279 +17,21 @@
## 📖 Overview
**RustFS Common** provides shared components, types, and utilities used across all RustFS modules. This foundational library ensures consistency, reduces code duplication, and provides essential building blocks for the [RustFS](https://rustfs.com) distributed object storage system.
> **Note:** This is a foundational submodule of RustFS that provides essential shared components for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS Common** provides shared components and common utilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
### 🔧 Core Types
- **Common Data Structures**: Shared types and enums
- **Error Handling**: Unified error types and utilities
- **Result Types**: Consistent result handling patterns
- **Constants**: System-wide constants and defaults
### 🛠️ Utilities
- **Async Helpers**: Common async patterns and utilities
- **Serialization**: Shared serialization utilities
- **Logging**: Common logging and tracing setup
- **Metrics**: Shared metrics and observability
### 🌐 Network Components
- **gRPC Common**: Shared gRPC types and utilities
- **Protocol Helpers**: Common protocol implementations
- **Connection Management**: Shared connection utilities
- **Request/Response Types**: Common API types
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
rustfs-common = "0.1.0"
```
## 🔧 Usage
### Basic Common Types
```rust
use rustfs_common::{Result, Error, ObjectInfo, BucketInfo};
fn main() -> Result<()> {
// Use common result type
let result = some_operation()?;
// Use common object info
let object = ObjectInfo {
name: "example.txt".to_string(),
size: 1024,
etag: "d41d8cd98f00b204e9800998ecf8427e".to_string(),
last_modified: chrono::Utc::now(),
content_type: "text/plain".to_string(),
};
println!("Object: {} ({} bytes)", object.name, object.size);
Ok(())
}
```
### Error Handling
```rust
use rustfs_common::{Error, ErrorKind, Result};
fn example_operation() -> Result<String> {
// Different error types
match some_condition {
true => Ok("Success".to_string()),
false => Err(Error::new(
ErrorKind::InvalidInput,
"Invalid operation parameters"
)),
}
}
fn handle_errors() {
match example_operation() {
Ok(value) => println!("Success: {}", value),
Err(e) => {
match e.kind() {
ErrorKind::InvalidInput => println!("Input error: {}", e),
ErrorKind::NotFound => println!("Not found: {}", e),
ErrorKind::PermissionDenied => println!("Access denied: {}", e),
_ => println!("Other error: {}", e),
}
}
}
}
```
### Async Utilities
```rust
use rustfs_common::async_utils::{timeout_with_default, retry_with_backoff, spawn_task};
use std::time::Duration;
async fn async_operations() -> Result<()> {
// Timeout with default value
let result = timeout_with_default(
Duration::from_secs(5),
expensive_operation(),
"default_value".to_string()
).await;
// Retry with exponential backoff
let result = retry_with_backoff(
3, // max attempts
Duration::from_millis(100), // initial delay
|| async { fallible_operation().await }
).await?;
// Spawn background task
spawn_task("background-worker", async {
background_work().await;
});
Ok(())
}
```
### Metrics and Observability
```rust
use rustfs_common::metrics::{Counter, Histogram, Gauge, MetricsRegistry};
fn setup_metrics() -> Result<()> {
let registry = MetricsRegistry::new();
// Create metrics
let requests_total = Counter::new("requests_total", "Total number of requests")?;
let request_duration = Histogram::new(
"request_duration_seconds",
"Request duration in seconds"
)?;
let active_connections = Gauge::new(
"active_connections",
"Number of active connections"
)?;
// Register metrics
registry.register(Box::new(requests_total))?;
registry.register(Box::new(request_duration))?;
registry.register(Box::new(active_connections))?;
Ok(())
}
```
### gRPC Common Types
```rust
use rustfs_common::grpc::{GrpcResult, GrpcError, TonicStatus};
use tonic::{Request, Response, Status};
async fn grpc_service_example(
request: Request<MyRequest>
) -> GrpcResult<MyResponse> {
let req = request.into_inner();
// Validate request
if req.name.is_empty() {
return Err(GrpcError::invalid_argument("Name cannot be empty"));
}
// Process request
let response = MyResponse {
result: format!("Processed: {}", req.name),
status: "success".to_string(),
};
Ok(Response::new(response))
}
// Error conversion
impl From<Error> for Status {
fn from(err: Error) -> Self {
match err.kind() {
ErrorKind::NotFound => Status::not_found(err.to_string()),
ErrorKind::PermissionDenied => Status::permission_denied(err.to_string()),
ErrorKind::InvalidInput => Status::invalid_argument(err.to_string()),
_ => Status::internal(err.to_string()),
}
}
}
```
## 🏗️ Architecture
### Common Module Structure
```
Common Architecture:
┌─────────────────────────────────────────────────────────────┐
│ Public API Layer │
├─────────────────────────────────────────────────────────────┤
│ Core Types │ Error Types │ Result Types │
├─────────────────────────────────────────────────────────────┤
│ Async Utils │ Metrics │ gRPC Common │
├─────────────────────────────────────────────────────────────┤
│ Constants │ Serialization │ Logging │
├─────────────────────────────────────────────────────────────┤
│ Foundation Types │
└─────────────────────────────────────────────────────────────┘
```
### Core Components
| Component | Purpose | Usage |
|-----------|---------|-------|
| Types | Common data structures | Shared across all modules |
| Errors | Unified error handling | Consistent error reporting |
| Async Utils | Async patterns | Common async operations |
| Metrics | Observability | Performance monitoring |
| gRPC | Protocol support | Service communication |
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
cargo test
# Test specific components
cargo test types
cargo test errors
cargo test async_utils
```
## 📋 Requirements
- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Dependencies**: Minimal, focused on essential functionality
## 🌍 Related Projects
This module is part of the RustFS ecosystem:
- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS Utils](../utils) - Utility functions
- [RustFS Config](../config) - Configuration management
- Shared data structures and type definitions
- Common error handling and result types
- Utility functions used across modules
- Configuration structures and validation
- Logging and tracing infrastructure
- Cross-platform compatibility helpers
## 📚 Documentation
For comprehensive documentation, visit:
- [RustFS Documentation](https://docs.rustfs.com)
- [Common API Reference](https://docs.rustfs.com/common/)
## 🔗 Links
- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.
---
<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>
<p align="center">
Made with 🔧 by the RustFS Team
</p>
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -12,19 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
#![allow(non_upper_case_globals)] // FIXME
use std::collections::HashMap;
use std::sync::LazyLock;
use lazy_static::lazy_static;
use tokio::sync::RwLock;
use tonic::transport::Channel;
lazy_static! {
pub static ref GLOBAL_Local_Node_Name: RwLock<String> = RwLock::new("".to_string());
pub static ref GLOBAL_Rustfs_Host: RwLock<String> = RwLock::new("".to_string());
pub static ref GLOBAL_Rustfs_Port: RwLock<String> = RwLock::new("9000".to_string());
pub static ref GLOBAL_Rustfs_Addr: RwLock<String> = RwLock::new("".to_string());
pub static ref GLOBAL_Conn_Map: RwLock<HashMap<String, Channel>> = RwLock::new(HashMap::new());
}
pub static GLOBAL_Local_Node_Name: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Rustfs_Host: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Rustfs_Port: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("9000".to_string()));
pub static GLOBAL_Rustfs_Addr: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Conn_Map: LazyLock<RwLock<HashMap<String, Channel>>> = LazyLock::new(|| RwLock::new(HashMap::new()));
pub async fn set_global_addr(addr: &str) {
*GLOBAL_Rustfs_Addr.write().await = addr.to_string();

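// Usage sketch (illustrative): the LazyLock globals are read the same way the
// setter writes them, through the async RwLock, e.g.:
//
//     let addr = GLOBAL_Rustfs_Addr.read().await.clone();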
View File

@@ -19,6 +19,10 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Configuration management for RustFS, providing a centralized way to manage application settings and features."
keywords = ["configuration", "settings", "management", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "config"]
[dependencies]
const-str = { workspace = true, optional = true }

View File

@@ -3,7 +3,7 @@
# RustFS Config - Configuration Management
<p align="center">
<strong>Centralized configuration management for RustFS distributed object storage</strong>
<strong>Configuration management and validation module for RustFS distributed object storage</strong>
</p>
<p align="center">
@@ -17,388 +17,21 @@
## 📖 Overview
**RustFS Config** is the configuration management module for the [RustFS](https://rustfs.com) distributed object storage system. It provides centralized configuration handling, environment-based configuration loading, validation, and runtime configuration updates for all RustFS components.
> **Note:** This is a foundational submodule of RustFS that provides essential configuration management capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS Config** provides configuration management and validation capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
### ⚙️ Configuration Management
- **Multi-Format Support**: JSON, YAML, TOML configuration formats
- **Environment Variables**: Automatic environment variable override
- **Default Values**: Comprehensive default configuration
- **Validation**: Configuration validation and error reporting
### 🔧 Advanced Features
- **Hot Reload**: Runtime configuration updates without restart
- **Profile Support**: Environment-specific configuration profiles
- **Secret Management**: Secure handling of sensitive configuration
- **Configuration Merging**: Hierarchical configuration composition
### 🛠️ Developer Features
- **Type Safety**: Strongly typed configuration structures
- **Documentation**: Auto-generated configuration documentation
- **CLI Integration**: Command-line configuration override
- **Testing Support**: Configuration mocking for tests
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
rustfs-config = "0.1.0"
# With specific features
rustfs-config = { version = "0.1.0", features = ["constants", "notify"] }
```
### Feature Flags
Available features:
- `constants` - Configuration constants and compile-time values
- `notify` - Configuration change notification support
- `observability` - Observability and metrics configuration
- `default` - Core configuration functionality
## 🔧 Usage
### Basic Configuration Loading
```rust
use rustfs_config::{Config, ConfigBuilder, ConfigFormat};
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Load configuration from file
let config = Config::from_file("config.yaml")?;
// Load with environment overrides
let config = ConfigBuilder::new()
.add_file("config.yaml")
.add_env_prefix("RUSTFS")
.build()?;
// Access configuration values
println!("Server address: {}", config.server.address);
println!("Storage path: {}", config.storage.path);
Ok(())
}
```
### Environment-Based Configuration
```rust
use rustfs_config::{Config, Environment};
async fn load_environment_config() -> Result<(), Box<dyn std::error::Error>> {
// Load configuration based on environment
let env = Environment::detect()?;
let config = Config::for_environment(env).await?;
match env {
Environment::Development => {
println!("Using development configuration");
println!("Debug mode: {}", config.debug.enabled);
}
Environment::Production => {
println!("Using production configuration");
println!("Log level: {}", config.logging.level);
}
Environment::Testing => {
println!("Using test configuration");
println!("Test database: {}", config.database.test_url);
}
}
Ok(())
}
```
### Configuration Structure
```rust
use rustfs_config::{Config, SecurityConfig, LoggingConfig, MonitoringConfig, ErasureCodingConfig};
use serde::{Deserialize, Serialize};
#[derive(Debug, Deserialize, Serialize)]
pub struct ApplicationConfig {
pub server: ServerConfig,
pub storage: StorageConfig,
pub security: SecurityConfig,
pub logging: LoggingConfig,
pub monitoring: MonitoringConfig,
}
#[derive(Debug, Deserialize, Serialize)]
pub struct ServerConfig {
pub address: String,
pub port: u16,
pub workers: usize,
pub timeout: std::time::Duration,
}
#[derive(Debug, Deserialize, Serialize)]
pub struct StorageConfig {
pub path: String,
pub max_size: u64,
pub compression: bool,
pub erasure_coding: ErasureCodingConfig,
}
fn load_typed_config() -> Result<ApplicationConfig, Box<dyn std::error::Error>> {
let config: ApplicationConfig = Config::builder()
.add_file("config.yaml")
.add_env_prefix("RUSTFS")
.set_default("server.port", 9000)?
.set_default("server.workers", 4)?
.build_typed()?;
Ok(config)
}
```
### Configuration Validation
```rust
use rustfs_config::{Config, ValidationError, Validator};
#[derive(Debug)]
pub struct ConfigValidator;
impl Validator<ApplicationConfig> for ConfigValidator {
fn validate(&self, config: &ApplicationConfig) -> Result<(), ValidationError> {
// Validate server configuration
if config.server.port < 1024 {
return Err(ValidationError::new("server.port", "Port must be >= 1024"));
}
if config.server.workers == 0 {
return Err(ValidationError::new("server.workers", "Workers must be > 0"));
}
// Validate storage configuration
if !std::path::Path::new(&config.storage.path).exists() {
return Err(ValidationError::new("storage.path", "Storage path does not exist"));
}
// Validate erasure coding parameters
if config.storage.erasure_coding.data_drives + config.storage.erasure_coding.parity_drives > 16 {
return Err(ValidationError::new("storage.erasure_coding", "Total drives cannot exceed 16"));
}
Ok(())
}
}
fn validate_configuration() -> Result<(), Box<dyn std::error::Error>> {
let config: ApplicationConfig = Config::load_with_validation(
"config.yaml",
ConfigValidator,
)?;
println!("Configuration is valid!");
Ok(())
}
```
### Hot Configuration Reload
```rust
use rustfs_config::{ConfigWatcher, ConfigEvent};
use tokio::sync::mpsc;
async fn watch_configuration_changes() -> Result<(), Box<dyn std::error::Error>> {
let (tx, mut rx) = mpsc::channel::<ConfigEvent>(100);
// Start configuration watcher
let watcher = ConfigWatcher::new("config.yaml", tx)?;
watcher.start().await?;
// Handle configuration changes
while let Some(event) = rx.recv().await {
match event {
ConfigEvent::Changed(new_config) => {
println!("Configuration changed, reloading...");
// Apply new configuration
apply_configuration(new_config).await?;
}
ConfigEvent::Error(err) => {
eprintln!("Configuration error: {}", err);
}
}
}
Ok(())
}
async fn apply_configuration(config: ApplicationConfig) -> Result<(), Box<dyn std::error::Error>> {
// Update server configuration
// Update storage configuration
// Update security settings
// etc.
Ok(())
}
```
### Configuration Profiles
```rust
use rustfs_config::{Config, Profile, ProfileManager};
fn load_profile_based_config() -> Result<(), Box<dyn std::error::Error>> {
let profile_manager = ProfileManager::new("configs/")?;
// Load specific profile
let config = profile_manager.load_profile("production")?;
// Load with fallback
let config = profile_manager
.load_profile("staging")
.or_else(|_| profile_manager.load_profile("default"))?;
// Merge multiple profiles
let config = profile_manager
.merge_profiles(&["base", "production", "regional"])?;
Ok(())
}
```
## 🏗️ Architecture
### Configuration Architecture
```
Config Architecture:
┌─────────────────────────────────────────────────────────────┐
│ Configuration API │
├─────────────────────────────────────────────────────────────┤
│ File Loader │ Env Loader │ CLI Parser │
├─────────────────────────────────────────────────────────────┤
│ Configuration Merger │
├─────────────────────────────────────────────────────────────┤
│ Validation │ Watching │ Hot Reload │
├─────────────────────────────────────────────────────────────┤
│ Type System Integration │
└─────────────────────────────────────────────────────────────┘
```
### Configuration Sources
| Source | Priority | Format | Example |
|--------|----------|---------|---------|
| Command Line | 1 (Highest) | Key-Value | `--server.port=8080` |
| Environment Variables | 2 | Key-Value | `RUSTFS_SERVER_PORT=8080` |
| Configuration File | 3 | JSON/YAML/TOML | `config.yaml` |
| Default Values | 4 (Lowest) | Code | Compile-time defaults |
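To make the precedence concrete, here is a minimal sketch built on the `ConfigBuilder` API shown earlier; `add_cli_args` is a hypothetical helper for illustration, and the sketch assumes sources added later override those added earlier.
```rust
use rustfs_config::{Config, ConfigBuilder};

fn load_with_precedence() -> Result<Config, Box<dyn std::error::Error>> {
    let config = ConfigBuilder::new()
        .set_default("server.port", 9000)?   // 4 (lowest): compile-time default
        .add_file("config.yaml")             // 3: configuration file
        .add_env_prefix("RUSTFS")            // 2: RUSTFS_SERVER_PORT=8080
        .add_cli_args(std::env::args())      // 1 (highest): --server.port=8080 (hypothetical helper)
        .build()?;
    Ok(config)
}
```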
## 📋 Configuration Reference
### Server Configuration
```yaml
server:
address: "0.0.0.0"
port: 9000
workers: 4
timeout: "30s"
tls:
enabled: true
cert_file: "/etc/ssl/server.crt"
key_file: "/etc/ssl/server.key"
```
### Storage Configuration
```yaml
storage:
path: "/var/lib/rustfs"
max_size: "1TB"
compression: true
erasure_coding:
data_drives: 8
parity_drives: 4
stripe_size: "1MB"
```
### Security Configuration
```yaml
security:
auth:
enabled: true
method: "jwt"
secret_key: "${JWT_SECRET}"
encryption:
algorithm: "AES-256-GCM"
key_rotation_interval: "24h"
```
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
cargo test
# Test configuration loading
cargo test config_loading
# Test validation
cargo test validation
# Test hot reload
cargo test hot_reload
```
## 📋 Requirements
- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Dependencies**: Minimal external dependencies
## 🌍 Related Projects
This module is part of the RustFS ecosystem:
- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS Utils](../utils) - Utility functions
- [RustFS Common](../common) - Common types and utilities
- Multi-format configuration support (TOML, YAML, JSON, ENV)
- Environment variable integration and override
- Configuration validation and type safety
- Hot-reload capabilities for dynamic updates
- Default value management and fallbacks
- Secure credential handling and encryption
## 📚 Documentation
For comprehensive documentation, visit:
- [RustFS Documentation](https://docs.rustfs.com)
- [Config API Reference](https://docs.rustfs.com/config/)
## 🔗 Links
- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.
---
<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>
<p align="center">
Made with ⚙️ by the RustFS Team
</p>
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Cryptography and security features for RustFS, providing encryption, hashing, and secure authentication mechanisms."
keywords = ["cryptography", "encryption", "hashing", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "cryptography"]
documentation = "https://docs.rs/rustfs-crypto/latest/rustfs_crypto/"
[lints]
workspace = true

View File

@@ -1,9 +1,9 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS Crypto Module
# RustFS Crypto - Cryptographic Operations
<p align="center">
<strong>High-performance cryptographic module for RustFS distributed object storage</strong>
<strong>High-performance cryptographic operations module for RustFS distributed object storage</strong>
</p>
<p align="center">
@@ -17,313 +17,21 @@
## 📖 Overview
The **RustFS Crypto Module** is a core cryptographic component of the [RustFS](https://rustfs.com) distributed object storage system. This module provides secure, high-performance encryption and decryption capabilities, JWT token management, and cross-platform cryptographic operations designed specifically for enterprise-grade storage systems.
> **Note:** This is a submodule of RustFS and is designed to work seamlessly within the RustFS ecosystem. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS Crypto** provides high-performance cryptographic operations for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
### 🔐 Encryption & Decryption
- **Multiple Algorithms**: Support for AES-GCM and ChaCha20Poly1305 authenticated encryption
- **Key Derivation**: Argon2id and PBKDF2 for secure key generation
- **Memory Safety**: Built with Rust's memory safety guarantees
- **Cross-Platform**: Optimized for x86_64, aarch64, s390x, and other architectures
### 🎫 JWT Management
- **Token Generation**: Secure JWT token creation with HS512 algorithm
- **Token Validation**: Robust JWT token verification and decoding
- **Claims Management**: Flexible claims handling with JSON support
### 🛡️ Security Features
- **FIPS Compliance**: Optional FIPS 140-2 compatible mode
- **Hardware Acceleration**: Automatic detection and utilization of CPU crypto extensions
- **Secure Random**: Cryptographically secure random number generation
- **Side-Channel Protection**: Resistant to timing attacks
### 🚀 Performance
- **Zero-Copy Operations**: Efficient memory usage with `Bytes` support
- **Async/Await**: Full async support for non-blocking operations
- **Hardware Optimization**: CPU-specific optimizations for better performance
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
rustfs-crypto = "0.1.0"
```
### Feature Flags
```toml
[dependencies]
rustfs-crypto = { version = "0.1.0", features = ["crypto", "fips"] }
```
Available features:
- `crypto` (default): Enable all cryptographic functions
- `fips`: Enable FIPS 140-2 compliance mode
- `default`: Includes both `crypto` and `fips`
## 🔧 Usage
### Basic Encryption/Decryption
```rust
use rustfs_crypto::{encrypt_data, decrypt_data};
fn main() -> Result<(), Box<dyn std::error::Error>> {
let password = b"my_secure_password";
let data = b"sensitive information";
// Encrypt data
let encrypted = encrypt_data(password, data)?;
println!("Encrypted {} bytes", encrypted.len());
// Decrypt data
let decrypted = decrypt_data(password, &encrypted)?;
assert_eq!(data, decrypted.as_slice());
println!("Successfully decrypted data");
Ok(())
}
```
### JWT Token Management
```rust
use rustfs_crypto::{jwt_encode, jwt_decode};
use serde_json::json;
fn main() -> Result<(), Box<dyn std::error::Error>> {
let secret = b"jwt_secret_key";
let claims = json!({
"sub": "user123",
"exp": 1234567890,
"iat": 1234567890
});
// Create JWT token
let token = jwt_encode(secret, &claims)?;
println!("Generated token: {}", token);
// Verify and decode token
let decoded = jwt_decode(&token, secret)?;
println!("Decoded claims: {:?}", decoded.claims);
Ok(())
}
```
### Advanced Usage with Custom Configuration
```rust
use rustfs_crypto::{encrypt_data, decrypt_data, Error};
#[cfg(feature = "crypto")]
fn secure_storage_example() -> Result<(), Error> {
// Large data encryption
let large_data = vec![0u8; 1024 * 1024]; // 1MB
let password = b"complex_password_123!@#";
// Encrypt with automatic algorithm selection
let encrypted = encrypt_data(password, &large_data)?;
// Decrypt and verify
let decrypted = decrypt_data(password, &encrypted)?;
assert_eq!(large_data.len(), decrypted.len());
println!("Successfully processed {} bytes", large_data.len());
Ok(())
}
```
## 🏗️ Architecture
### Supported Encryption Algorithms
| Algorithm | Key Derivation | Use Case | FIPS Compliant |
|-----------|---------------|----------|----------------|
| AES-GCM | Argon2id | General purpose, hardware accelerated | ✅ |
| ChaCha20Poly1305 | Argon2id | Software-only environments | ❌ |
| AES-GCM | PBKDF2 | FIPS compliance required | ✅ |
### Cross-Platform Support
The module automatically detects and optimizes for:
- **x86/x86_64**: AES-NI and PCLMULQDQ instructions
- **aarch64**: ARM Crypto Extensions
- **s390x**: IBM Z Crypto Extensions
- **Other architectures**: Fallback to software implementations
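As an illustration of how this kind of runtime detection works (a sketch of the technique, not this module's actual dispatch code), the standard library exposes `is_x86_feature_detected!` on x86/x86_64:
```rust
// Gate a hardware AES fast path behind runtime CPU feature detection;
// non-x86 targets take the software fallback here for simplicity.
#[cfg(any(target_arch = "x86", target_arch = "x86_64"))]
fn has_hw_aes() -> bool {
    std::arch::is_x86_feature_detected!("aes")
}

#[cfg(not(any(target_arch = "x86", target_arch = "x86_64")))]
fn has_hw_aes() -> bool {
    false
}

fn main() {
    if has_hw_aes() {
        println!("using hardware-accelerated AES");
    } else {
        println!("using software AES fallback");
    }
}
```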
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
cargo test
# Run tests with all features
cargo test --all-features
# Run benchmarks
cargo bench
# Test cross-platform compatibility
cargo test --target x86_64-unknown-linux-gnu
cargo test --target aarch64-unknown-linux-gnu
```
## 📊 Performance
The crypto module is designed for high-performance scenarios:
- **Encryption Speed**: Up to 2GB/s on modern hardware
- **Memory Usage**: Minimal heap allocation with zero-copy operations
- **CPU Utilization**: Automatic hardware acceleration detection
- **Scalability**: Thread-safe operations for concurrent access
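The zero-copy claim rests on `bytes::Bytes`, whose `slice` hands out reference-counted views of one allocation instead of copying; a minimal illustration independent of this crate's API (requires the `bytes` crate):
```rust
use bytes::Bytes;

fn main() {
    let payload = Bytes::from(vec![0u8; 1024 * 1024]);
    // `slice` creates views over the same allocation; no bytes are copied.
    let header = payload.slice(0..16);
    let body = payload.slice(16..);
    assert_eq!(header.len() + body.len(), payload.len());
}
```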
## 🤝 Integration with RustFS
This module is specifically designed to integrate with other RustFS components:
- **Storage Layer**: Provides encryption for object storage
- **Authentication**: JWT tokens for API authentication
- **Configuration**: Secure configuration data encryption
- **Metadata**: Encrypted metadata storage
## 📋 Requirements
- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Architectures**: x86_64, aarch64, s390x, and more
## 🔒 Security Considerations
- All cryptographic operations use industry-standard algorithms
- Key derivation follows best practices (Argon2id, PBKDF2)
- Memory is securely cleared after use
- Timing attack resistance is built-in
- Hardware security modules (HSM) support planned
## 🐛 Known Issues
- Hardware acceleration detection may not work on all virtualized environments
- FIPS mode requires additional system-level configuration
- Some older CPU architectures may have reduced performance
## 🌍 Related Projects
This module is part of the RustFS ecosystem:
- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS ECStore](../ecstore) - Erasure coding storage engine
- [RustFS IAM](../iam) - Identity and access management
- [RustFS Policy](../policy) - Policy engine
- AES-GCM encryption with hardware acceleration
- RSA and ECDSA digital signature support
- Secure hash functions (SHA-256, BLAKE3)
- Key derivation and management utilities
- Stream ciphers for large data encryption
- Hardware security module integration
## 📚 Documentation
For comprehensive documentation, visit:
- [RustFS Documentation](https://docs.rustfs.com)
- [API Reference](https://docs.rustfs.com/crypto/)
- [Security Guide](https://docs.rustfs.com/security/)
## 🔗 Links
- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details on:
- Code style and formatting requirements
- Testing procedures and coverage
- Security considerations for cryptographic code
- Pull request process and review guidelines
### Development Setup
```bash
# Clone the repository
git clone https://github.com/rustfs/rustfs.git
cd rustfs
# Navigate to crypto module
cd crates/crypto
# Install dependencies
cargo build
# Run tests
cargo test
# Format code
cargo fmt
# Run linter
cargo clippy
```
## 💬 Getting Help
- **Documentation**: [docs.rustfs.com](https://docs.rustfs.com)
- **Issues**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Discussions**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)
- **Security**: Report security issues to <security@rustfs.com>
## 📞 Contact
- **Bugs**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Business**: <hello@rustfs.com>
- **Jobs**: <jobs@rustfs.com>
- **General Discussion**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)
## 👥 Contributors
This module is maintained by the RustFS team and community contributors. Special thanks to all who have contributed to making RustFS cryptography secure and efficient.
<a href="https://github.com/rustfs/rustfs/graphs/contributors">
<img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
</a>
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.
```
Copyright 2024 RustFS Team
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
---
<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>
<p align="center">
Made with ❤️ by the RustFS Team
</p>
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -19,6 +19,12 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Erasure coding storage backend for RustFS, providing efficient data storage and retrieval with redundancy."
keywords = ["erasure-coding", "storage", "rustfs", "Minio", "solomon"]
categories = ["web-programming", "development-tools", "filesystem"]
documentation = "https://docs.rs/rustfs-ecstore/latest/rustfs_ecstore/"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[lints]
@@ -103,8 +109,8 @@ winapi = { workspace = true }
[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }
criterion = { version = "0.5", features = ["html_reports"] }
temp-env = "0.3.6"
criterion = { workspace = true, features = ["html_reports"] }
temp-env = { workspace = true }
[build-dependencies]
shadow-rs = { workspace = true, features = ["build", "metadata"] }

View File

@@ -1,6 +1,6 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS ECStore - Erasure Coding Storage Engine
# RustFS ECStore - Erasure Coding Storage
<p align="center">
<strong>High-performance erasure coding storage engine for RustFS distributed object storage</strong>
@@ -17,425 +17,24 @@
## 📖 Overview
**RustFS ECStore** is the core storage engine of the [RustFS](https://rustfs.com) distributed object storage system. It provides enterprise-grade erasure coding capabilities, data integrity protection, and high-performance object storage operations. This module serves as the foundation for RustFS's distributed storage architecture.
> **Note:** This is a core submodule of RustFS and provides the primary storage capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS ECStore** provides erasure coding storage capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
### 🔧 Erasure Coding Storage
- **Reed-Solomon Erasure Coding**: Advanced error correction with configurable redundancy
- **Data Durability**: Protection against disk failures and bit rot
- **Automatic Repair**: Self-healing capabilities for corrupted or missing data
- **Configurable Parity**: Flexible parity configurations (4+2, 8+4, 16+4, etc.)
### 💾 Storage Management
- **Multi-Disk Support**: Intelligent disk management and load balancing
- **Storage Classes**: Support for different storage tiers and policies
- **Bucket Management**: Advanced bucket operations and lifecycle management
- **Object Versioning**: Complete versioning support with metadata tracking
### 🚀 Performance & Scalability
- **High Throughput**: Optimized for large-scale data operations
- **Parallel Processing**: Concurrent read/write operations across multiple disks
- **Memory Efficient**: Smart caching and memory management
- **SIMD Optimization**: Hardware-accelerated erasure coding operations
### 🛡️ Data Integrity
- **Bitrot Detection**: Real-time data corruption detection
- **Checksum Verification**: Multiple checksum algorithms (MD5, SHA256, XXHash)
- **Healing System**: Automatic background healing and repair
- **Data Scrubbing**: Proactive data integrity scanning
### 🔄 Advanced Features
- **Compression**: Built-in compression support for space optimization
- **Replication**: Cross-region replication capabilities
- **Notification System**: Real-time event notifications
- **Metrics & Monitoring**: Comprehensive performance metrics
## 🏗️ Architecture
### Storage Layout
```
ECStore Architecture:
┌─────────────────────────────────────────────────────────────┐
│ Storage API Layer │
├─────────────────────────────────────────────────────────────┤
│ Bucket Management │ Object Operations │ Metadata Mgmt │
├─────────────────────────────────────────────────────────────┤
│ Erasure Coding Engine │
├─────────────────────────────────────────────────────────────┤
│ Disk Management │ Healing System │ Cache │
├─────────────────────────────────────────────────────────────┤
│ Physical Storage Devices │
└─────────────────────────────────────────────────────────────┘
```
### Erasure Coding Schemes
| Configuration | Data Drives | Parity Drives | Fault Tolerance | Storage Efficiency |
|---------------|-------------|---------------|-----------------|-------------------|
| 4+2 | 4 | 2 | 2 disk failures | 66.7% |
| 8+4 | 8 | 4 | 4 disk failures | 66.7% |
| 16+4 | 16 | 4 | 4 disk failures | 80% |
| Custom | N | K | K disk failures | N/(N+K) |
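The last two columns follow directly from the N+K scheme: a set tolerates K disk failures and stores N useful bytes for every N+K written. A quick self-contained check:
```rust
// Storage efficiency of an N+K erasure set is N / (N + K).
fn efficiency(data: u32, parity: u32) -> f64 {
    data as f64 / (data + parity) as f64
}

fn main() {
    for (n, k) in [(4u32, 2u32), (8, 4), (16, 4)] {
        println!("{n}+{k}: tolerates {k} failures, efficiency {:.1}%", efficiency(n, k) * 100.0);
    }
}
```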
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
rustfs-ecstore = "0.1.0"
```
## 🔧 Usage
### Basic Storage Operations
```rust
use rustfs_ecstore::{StorageAPI, new_object_layer_fn};
use std::sync::Arc;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize storage layer
let storage = new_object_layer_fn("/path/to/storage").await?;
// Create a bucket
storage.make_bucket("my-bucket", None).await?;
// Put an object
let data = b"Hello, RustFS!";
storage.put_object("my-bucket", "hello.txt", data.to_vec()).await?;
// Get an object
let retrieved = storage.get_object("my-bucket", "hello.txt", None).await?;
println!("Retrieved: {}", String::from_utf8_lossy(&retrieved.data));
Ok(())
}
```
### Advanced Configuration
```rust
use rustfs_ecstore::{StorageAPI, new_object_layer_fn, config::{Config, ErasureSet}};
async fn setup_storage_with_config() -> Result<(), Box<dyn std::error::Error>> {
let config = Config {
erasure_sets: vec![
// 8+4 configuration for high durability
ErasureSet::new(8, 4, vec![
"/disk1", "/disk2", "/disk3", "/disk4",
"/disk5", "/disk6", "/disk7", "/disk8",
"/disk9", "/disk10", "/disk11", "/disk12"
])
],
healing_enabled: true,
compression_enabled: true,
..Default::default()
};
let storage = new_object_layer_fn("/path/to/storage")
.with_config(config)
.await?;
Ok(())
}
```
### Bucket Management
```rust
use rustfs_ecstore::{StorageAPI, bucket::BucketInfo};
use std::sync::Arc;
async fn bucket_operations(storage: Arc<dyn StorageAPI>) -> Result<(), Box<dyn std::error::Error>> {
// Create bucket with specific configuration
let bucket_info = BucketInfo {
name: "enterprise-bucket".to_string(),
versioning_enabled: true,
lifecycle_config: Some(lifecycle_config()),
..Default::default()
};
storage.make_bucket_with_config(bucket_info).await?;
// List buckets
let buckets = storage.list_buckets().await?;
for bucket in buckets {
println!("Bucket: {}, Created: {}", bucket.name, bucket.created);
}
// Set bucket policy
storage.set_bucket_policy("enterprise-bucket", policy_json).await?;
Ok(())
}
```
### Healing and Maintenance
```rust
use rustfs_ecstore::{heal::HealingManager, StorageAPI};
use std::sync::Arc;
async fn healing_operations(storage: Arc<dyn StorageAPI>) -> Result<(), Box<dyn std::error::Error>> {
// Check storage health
let health = storage.storage_info().await?;
println!("Storage Health: {:?}", health);
// Trigger healing for specific bucket
let healing_result = storage.heal_bucket("my-bucket").await?;
println!("Healing completed: {:?}", healing_result);
// Background healing status
let healing_status = storage.healing_status().await?;
println!("Background healing: {:?}", healing_status);
Ok(())
}
```
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
cargo test
# Run tests with specific features
cargo test --features "compression,healing"
# Run benchmarks
cargo bench
# Run erasure coding benchmarks
cargo bench --bench erasure_benchmark
# Run comparison benchmarks
cargo bench --bench comparison_benchmark
```
## 📊 Performance Benchmarks
ECStore is designed for high-performance storage operations:
### Throughput Performance
- **Sequential Write**: Up to 10GB/s on NVMe storage
- **Sequential Read**: Up to 12GB/s with parallel reads
- **Random I/O**: 100K+ IOPS for small objects
- **Erasure Coding**: 5GB/s encoding/decoding throughput
### Scalability Metrics
- **Storage Capacity**: Exabyte-scale deployments
- **Concurrent Operations**: 10,000+ concurrent requests
- **Disk Scaling**: Support for 1000+ disks per node
- **Fault Tolerance**: Up to 50% disk failure resilience
## 🔧 Configuration
### Storage Configuration
```toml
[storage]
# Erasure coding configuration
erasure_set_size = 12 # Total disks per set
data_drives = 8 # Data drives per set
parity_drives = 4 # Parity drives per set
# Performance tuning
read_quorum = 6 # Minimum disks for read
write_quorum = 7 # Minimum disks for write
parallel_reads = true # Enable parallel reads
compression = true # Enable compression
# Healing configuration
healing_enabled = true
healing_interval = "24h"
bitrot_check_interval = "168h" # Weekly bitrot check
```
### Advanced Features
```rust
use rustfs_ecstore::config::{ChecksumAlgorithm, StorageConfig};
let config = StorageConfig {
// Enable advanced features
bitrot_protection: true,
automatic_healing: true,
compression_level: 6,
checksum_algorithm: ChecksumAlgorithm::XXHash64,
// Performance tuning
read_buffer_size: 1024 * 1024, // 1MB read buffer
write_buffer_size: 4 * 1024 * 1024, // 4MB write buffer
concurrent_operations: 1000,
// Storage optimization
small_object_threshold: 128 * 1024, // 128KB
large_object_threshold: 64 * 1024 * 1024, // 64MB
..Default::default()
};
```
## 🤝 Integration with RustFS
ECStore integrates seamlessly with other RustFS components:
- **API Server**: Provides S3-compatible storage operations
- **IAM Module**: Handles authentication and authorization
- **Policy Engine**: Implements bucket policies and access controls
- **Notification System**: Publishes storage events
- **Monitoring**: Provides detailed metrics and health status
## 📋 Requirements
- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Storage**: Local disks, network storage, cloud storage
- **Memory**: Minimum 4GB RAM (8GB+ recommended)
- **Network**: High-speed network for distributed deployments
## 🚀 Performance Tuning
### Optimization Tips
1. **Disk Configuration**:
- Use dedicated disks for each erasure set
- Prefer NVMe over SATA for better performance
- Ensure consistent disk sizes within erasure sets
2. **Memory Settings**:
- Allocate sufficient memory for caching
- Tune read/write buffer sizes based on workload
- Enable memory-mapped files for large objects
3. **Network Optimization**:
- Use high-speed network connections
- Configure proper MTU sizes
- Enable network compression for WAN scenarios
4. **CPU Optimization**:
- Utilize SIMD instructions for erasure coding
- Balance CPU cores across erasure sets
- Enable hardware-accelerated checksums
## 🐛 Troubleshooting
### Common Issues
1. **Disk Failures**:
- Check disk health using `storage_info()`
- Trigger healing with `heal_bucket()`
- Replace failed disks and re-add to cluster
2. **Performance Issues**:
- Monitor disk I/O utilization
- Check network bandwidth usage
- Verify erasure coding configuration
3. **Data Integrity**:
- Run bitrot detection scans
- Verify checksums for critical data
- Check healing system status
## 🌍 Related Projects
This module is part of the RustFS ecosystem:
- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS Crypto](../crypto) - Cryptographic operations
- [RustFS IAM](../iam) - Identity and access management
- [RustFS Policy](../policy) - Policy engine
- [RustFS FileMeta](../filemeta) - File metadata management
- Reed-Solomon erasure coding implementation
- Configurable redundancy levels (N+K schemes)
- Automatic data healing and reconstruction
- Multi-drive support with intelligent placement
- Parallel encoding/decoding for performance
- Efficient disk space utilization
## 📚 Documentation
For comprehensive documentation, visit:
- [RustFS Documentation](https://docs.rustfs.com)
- [Storage API Reference](https://docs.rustfs.com/ecstore/)
- [Erasure Coding Guide](https://docs.rustfs.com/erasure-coding/)
- [Performance Tuning](https://docs.rustfs.com/performance/)
## 🔗 Links
- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details on:
- Storage engine architecture and design patterns
- Erasure coding implementation guidelines
- Performance optimization techniques
- Testing procedures for storage operations
- Documentation standards for storage APIs
### Development Setup
```bash
# Clone the repository
git clone https://github.com/rustfs/rustfs.git
cd rustfs
# Navigate to ECStore module
cd crates/ecstore
# Install dependencies
cargo build
# Run tests
cargo test
# Run benchmarks
cargo bench
# Format code
cargo fmt
# Run linter
cargo clippy
```
## 💬 Getting Help
- **Documentation**: [docs.rustfs.com](https://docs.rustfs.com)
- **Issues**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Discussions**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)
- **Storage Support**: <storage-support@rustfs.com>
## 📞 Contact
- **Bugs**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Business**: <hello@rustfs.com>
- **Jobs**: <jobs@rustfs.com>
- **General Discussion**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)
## 👥 Contributors
This module is maintained by the RustFS storage team and community contributors. Special thanks to all who have contributed to making RustFS storage reliable and efficient.
<a href="https://github.com/rustfs/rustfs/graphs/contributors">
<img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
</a>
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.
```
Copyright 2024 RustFS Team

View File

@@ -1,103 +1,19 @@
# ECStore - Erasure Coding Storage
ECStore provides erasure coding functionality for the RustFS project, using high-performance Reed-Solomon SIMD
implementation for optimal performance.
ECStore provides erasure coding functionality for the RustFS project, using high-performance Reed-Solomon SIMD implementation for optimal performance.
## Reed-Solomon Implementation
## Features
### SIMD Backend (Only)
- **Reed-Solomon Implementation**: High-performance SIMD-optimized erasure coding
- **Cross-Platform Compatibility**: Support for x86_64, aarch64, and other architectures
- **Performance Optimized**: SIMD instructions for maximum throughput
- **Thread Safety**: Safe concurrent access with caching optimizations
- **Scalable**: Excellent performance for high-throughput scenarios
- **Performance**: Uses SIMD optimization for high-performance encoding/decoding
- **Compatibility**: Works with any shard size through SIMD implementation
- **Reliability**: High-performance SIMD implementation for large data processing
- **Use case**: Optimized for maximum performance in large data processing scenarios
## Documentation
### Usage Example
For complete documentation, examples, and usage information, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
```rust
use rustfs_ecstore::erasure_coding::Erasure;
// Create erasure coding instance
// 4 data shards, 2 parity shards, 1KB block size
let erasure = Erasure::new(4, 2, 1024);
// Encode data
let data = b"hello world from rustfs erasure coding";
let shards = erasure.encode_data(data)?;
// Simulate loss of one shard
let mut shards_opt: Vec<Option<Vec<u8>>> = shards
.iter()
.map(|b| Some(b.to_vec()))
.collect();
shards_opt[2] = None; // Lose shard 2
// Reconstruct missing data
erasure.decode_data(&mut shards_opt)?;
// Recover original data
let mut recovered = Vec::new();
for shard in shards_opt.iter().take(4) { // Only data shards
recovered.extend_from_slice(shard.as_ref().unwrap());
}
recovered.truncate(data.len());
assert_eq!(&recovered, data);
```
## Performance Considerations
### SIMD Implementation Benefits
- **High Throughput**: Optimized for large block sizes (>= 1KB recommended)
- **CPU Optimization**: Leverages modern CPU SIMD instructions
- **Scalability**: Excellent performance for high-throughput scenarios
### Implementation Details
#### `reed-solomon-simd`
- **Instance Caching**: Encoder/decoder instances are cached and reused for optimal performance
- **Thread Safety**: Thread-safe with RwLock-based caching
- **SIMD Optimization**: Leverages CPU SIMD instructions for maximum performance
- **Reset Capability**: Cached instances are reset for different parameters, avoiding unnecessary allocations
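A minimal sketch of that caching pattern (stand-in types; the real crate keys cached `reed-solomon-simd` instances by their coding parameters):
```rust
use std::collections::HashMap;
use std::sync::RwLock;

// Stand-in for a cached encoder instance.
#[derive(Clone)]
struct Encoder {
    data: usize,
    parity: usize,
}

struct EncoderCache {
    inner: RwLock<HashMap<(usize, usize), Encoder>>,
}

impl EncoderCache {
    fn new() -> Self {
        Self { inner: RwLock::new(HashMap::new()) }
    }

    fn get_or_create(&self, data: usize, parity: usize) -> Encoder {
        // Fast path: shared read lock, reuse a cached instance.
        if let Some(enc) = self.inner.read().unwrap().get(&(data, parity)) {
            return enc.clone();
        }
        // Slow path: construct once, cache under a write lock.
        let enc = Encoder { data, parity };
        self.inner.write().unwrap().insert((data, parity), enc.clone());
        enc
    }
}

fn main() {
    let cache = EncoderCache::new();
    let enc = cache.get_or_create(4, 2);
    println!("encoder for {}+{} ready", enc.data, enc.parity);
}
```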
### Performance Tips
1. **Batch Operations**: When possible, batch multiple small operations into larger blocks
2. **Block Size Optimization**: Use block sizes that are multiples of 64 bytes for optimal SIMD performance (see the sketch after this list)
3. **Memory Allocation**: Pre-allocate buffers when processing multiple blocks
4. **Cache Warming**: Initial operations may be slower due to cache setup, subsequent operations benefit from caching
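The 64-byte alignment from tip 2 amounts to a one-line helper (a generic utility, not part of the ECStore API):
```rust
// Round a length up to the next multiple of 64 bytes for SIMD-friendly blocks.
fn align64(len: usize) -> usize {
    (len + 63) & !63
}

fn main() {
    assert_eq!(align64(1000), 1024);
    assert_eq!(align64(1024), 1024);
    println!("1500 -> {}", align64(1500)); // 1536
}
```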
## Cross-Platform Compatibility
The SIMD implementation supports:
- x86_64 with advanced SIMD instructions (AVX2, SSE)
- aarch64 (ARM64) with NEON SIMD optimizations
- Other architectures with fallback implementations
The implementation automatically selects the best available SIMD instructions for the target platform, providing optimal
performance across different architectures.
## Testing and Benchmarking
Run performance benchmarks:
```bash
# Run erasure coding benchmarks
cargo bench --bench erasure_benchmark
# Run comparison benchmarks
cargo bench --bench comparison_benchmark
# Generate benchmark reports
./run_benchmarks.sh
```
## Error Handling
All operations return `Result` types with comprehensive error information:
- Encoding errors: Invalid parameters, insufficient memory
- Decoding errors: Too many missing shards, corrupted data
- Configuration errors: Invalid shard counts, unsupported parameters
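A sketch of exercising these error classes around the usage example above, assuming the crate's error type implements `std::error::Error`:
```rust
use rustfs_ecstore::erasure_coding::Erasure;

fn roundtrip(data: &[u8]) -> Result<(), Box<dyn std::error::Error>> {
    let erasure = Erasure::new(4, 2, 1024);
    let shards = erasure.encode_data(data)?; // encoding errors: invalid parameters, insufficient memory

    let mut shards_opt: Vec<Option<Vec<u8>>> = shards.iter().map(|b| Some(b.to_vec())).collect();
    shards_opt[0] = None;
    shards_opt[1] = None;
    shards_opt[2] = None; // three losses exceed the two-parity budget

    if let Err(err) = erasure.decode_data(&mut shards_opt) {
        // decoding errors: too many missing shards, corrupted data
        eprintln!("reconstruction failed as expected: {err}");
    }
    Ok(())
}
```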
## License
This project is licensed under the Apache License, Version 2.0.

View File

@@ -43,15 +43,16 @@ pub async fn create_bitrot_reader(
) -> disk::error::Result<Option<BitrotReader<Box<dyn AsyncRead + Send + Sync + Unpin>>>> {
// Calculate the total length to read, including the checksum overhead
let length = length.div_ceil(shard_size) * checksum_algo.size() + length;
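// Shift the offset by the same per-shard checksum overhead so it skips the checksums of the preceding shards.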
let offset = offset.div_ceil(shard_size) * checksum_algo.size() + offset;
if let Some(data) = inline_data {
// Use inline data
let rd = Cursor::new(data.to_vec());
let mut rd = Cursor::new(data.to_vec());
rd.set_position(offset as u64);
let reader = BitrotReader::new(Box::new(rd) as Box<dyn AsyncRead + Send + Sync + Unpin>, shard_size, checksum_algo);
Ok(Some(reader))
} else if let Some(disk) = disk {
// Read from disk
match disk.read_file_stream(bucket, path, offset, length).await {
match disk.read_file_stream(bucket, path, offset, length - offset).await {
Ok(rd) => {
let reader = BitrotReader::new(rd, shard_size, checksum_algo);
Ok(Some(reader))

View File

@@ -18,7 +18,7 @@ use futures::future::join_all;
use rustfs_filemeta::{MetaCacheEntries, MetaCacheEntry, MetacacheReader, is_io_eof};
use std::{future::Future, pin::Pin, sync::Arc};
use tokio::{spawn, sync::broadcast::Receiver as B_Receiver};
use tracing::error;
use tracing::{error, warn};
pub type AgreedFn = Box<dyn Fn(MetaCacheEntry) -> Pin<Box<dyn Future<Output = ()> + Send>> + Send + 'static>;
pub type PartialFn =
@@ -118,10 +118,14 @@ pub async fn list_path_raw(mut rx: B_Receiver<bool>, opts: ListPathRawOptions) -
if let Some(disk) = d.clone() {
disk
} else {
warn!("list_path_raw: fallback disk is none");
break;
}
}
None => break,
None => {
warn!("list_path_raw: fallback disk is none2");
break;
}
};
match disk
.as_ref()

View File

@@ -288,6 +288,12 @@ impl From<rmp_serde::encode::Error> for DiskError {
}
}
impl From<rmp_serde::decode::Error> for DiskError {
fn from(e: rmp_serde::decode::Error) -> Self {
DiskError::other(e)
}
}
impl From<rmp::encode::ValueWriteError> for DiskError {
fn from(e: rmp::encode::ValueWriteError) -> Self {
DiskError::other(e)

View File

@@ -57,8 +57,8 @@ use bytes::Bytes;
use path_absolutize::Absolutize;
use rustfs_common::defer;
use rustfs_filemeta::{
Cache, FileInfo, FileInfoOpts, FileMeta, MetaCacheEntry, MetacacheWriter, Opts, RawFileInfo, UpdateFn, get_file_info,
read_xl_meta_no_data,
Cache, FileInfo, FileInfoOpts, FileMeta, MetaCacheEntry, MetacacheWriter, ObjectPartInfo, Opts, RawFileInfo, UpdateFn,
get_file_info, read_xl_meta_no_data,
};
use rustfs_utils::HashAlgorithm;
use rustfs_utils::os::get_info;
@@ -145,8 +145,8 @@ impl Debug for LocalDisk {
impl LocalDisk {
pub async fn new(ep: &Endpoint, cleanup: bool) -> Result<Self> {
debug!("Creating local disk");
let root = match fs::canonicalize(ep.get_file_path()).await {
Ok(path) => path,
let root = match PathBuf::from(ep.get_file_path()).absolutize() {
Ok(path) => path.into_owned(),
Err(e) => {
if e.kind() == ErrorKind::NotFound {
return Err(DiskError::VolumeNotFound);
@@ -1312,6 +1312,67 @@ impl DiskAPI for LocalDisk {
Ok(resp)
}
#[tracing::instrument(skip(self))]
async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
let volume_dir = self.get_bucket_path(bucket)?;
let mut ret = vec![ObjectPartInfo::default(); paths.len()];
for (i, path_str) in paths.iter().enumerate() {
let path = Path::new(path_str);
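// Derive the part number from a "part.N.meta" file name; defaults to 0 if the name does not match.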
let file_name = path.file_name().and_then(|v| v.to_str()).unwrap_or_default();
let num = file_name
.strip_prefix("part.")
.and_then(|v| v.strip_suffix(".meta"))
.and_then(|v| v.parse::<usize>().ok())
.unwrap_or_default();
if let Err(err) = access(
volume_dir
.clone()
.join(path.parent().unwrap_or(Path::new("")).join(format!("part.{num}"))),
)
.await
{
ret[i] = ObjectPartInfo {
number: num,
error: Some(err.to_string()),
..Default::default()
};
continue;
}
let data = match self
.read_all_data(bucket, volume_dir.clone(), volume_dir.clone().join(path))
.await
{
Ok(data) => data,
Err(err) => {
ret[i] = ObjectPartInfo {
number: num,
error: Some(err.to_string()),
..Default::default()
};
continue;
}
};
match ObjectPartInfo::unmarshal(&data) {
Ok(meta) => {
ret[i] = meta;
}
Err(err) => {
ret[i] = ObjectPartInfo {
number: num,
error: Some(err.to_string()),
..Default::default()
};
}
};
}
Ok(ret)
}
#[tracing::instrument(skip(self))]
async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp> {
let volume_dir = self.get_bucket_path(volume)?;
@@ -1550,11 +1611,6 @@ impl DiskAPI for LocalDisk {
#[tracing::instrument(level = "debug", skip(self))]
async fn read_file_stream(&self, volume: &str, path: &str, offset: usize, length: usize) -> Result<FileReader> {
// warn!(
// "disk read_file_stream: volume: {}, path: {}, offset: {}, length: {}",
// volume, path, offset, length
// );
let volume_dir = self.get_bucket_path(volume)?;
if !skip_access_checks(volume) {
access(&volume_dir)

View File

@@ -41,7 +41,7 @@ use endpoint::Endpoint;
use error::DiskError;
use error::{Error, Result};
use local::LocalDisk;
use rustfs_filemeta::{FileInfo, RawFileInfo};
use rustfs_filemeta::{FileInfo, ObjectPartInfo, RawFileInfo};
use rustfs_madmin::info_commands::DiskMetrics;
use serde::{Deserialize, Serialize};
use std::{fmt::Debug, path::PathBuf, sync::Arc};
@@ -331,6 +331,14 @@ impl DiskAPI for Disk {
}
}
#[tracing::instrument(skip(self))]
async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
match self {
Disk::Local(local_disk) => local_disk.read_parts(bucket, paths).await,
Disk::Remote(remote_disk) => remote_disk.read_parts(bucket, paths).await,
}
}
#[tracing::instrument(skip(self))]
async fn rename_part(&self, src_volume: &str, src_path: &str, dst_volume: &str, dst_path: &str, meta: Bytes) -> Result<()> {
match self {
@@ -513,7 +521,7 @@ pub trait DiskAPI: Debug + Send + Sync + 'static {
// CheckParts
async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp>;
// StatInfoFile
// ReadParts
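// Reads "part.N.meta" for each path; per-part failures are recorded inline in the returned ObjectPartInfo rather than failing the whole call.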
async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>>;
async fn read_multiple(&self, req: ReadMultipleReq) -> Result<Vec<ReadMultipleResp>>;
// CleanAbandonedData
async fn write_all(&self, volume: &str, path: &str, data: Bytes) -> Result<()>;

View File

@@ -30,6 +30,7 @@ use std::{
time::SystemTime,
};
use tokio::sync::{OnceCell, RwLock};
use tokio_util::sync::CancellationToken;
use uuid::Uuid;
pub const DISK_ASSUME_UNKNOWN_SIZE: u64 = 1 << 30;
@@ -62,7 +63,12 @@ static ref globalDeploymentIDPtr: OnceLock<Uuid> = OnceLock::new();
pub static ref GLOBAL_BOOT_TIME: OnceCell<SystemTime> = OnceCell::new();
pub static ref GLOBAL_LocalNodeName: String = "127.0.0.1:9000".to_string();
pub static ref GLOBAL_LocalNodeNameHex: String = rustfs_utils::crypto::hex(GLOBAL_LocalNodeName.as_bytes());
pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();}
pub static ref GLOBAL_NodeNamesHex: HashMap<String, ()> = HashMap::new();
pub static ref GLOBAL_REGION: OnceLock<String> = OnceLock::new();
}
// Global cancellation token for background services (data scanner and auto heal)
static GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();
static GLOBAL_ACTIVE_CRED: OnceLock<Credentials> = OnceLock::new();
@@ -182,3 +188,35 @@ pub async fn update_erasure_type(setup_type: SetupType) {
// }
type TypeLocalDiskSetDrives = Vec<Vec<Vec<Option<DiskStore>>>>;
pub fn set_global_region(region: String) {
GLOBAL_REGION.set(region).unwrap();
}
pub fn get_global_region() -> Option<String> {
GLOBAL_REGION.get().cloned()
}
/// Initialize the global background services cancellation token
pub fn init_background_services_cancel_token(cancel_token: CancellationToken) -> Result<(), CancellationToken> {
GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.set(cancel_token)
}
/// Get the global background services cancellation token
pub fn get_background_services_cancel_token() -> Option<&'static CancellationToken> {
GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.get()
}
/// Create and initialize the global background services cancellation token
pub fn create_background_services_cancel_token() -> CancellationToken {
let cancel_token = CancellationToken::new();
init_background_services_cancel_token(cancel_token.clone()).expect("Background services cancel token already initialized");
cancel_token
}
/// Shutdown all background services gracefully
pub fn shutdown_background_services() {
if let Some(cancel_token) = GLOBAL_BACKGROUND_SERVICES_CANCEL_TOKEN.get() {
cancel_token.cancel();
}
}
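// Illustrative wiring, not code from this file: create the token once at
// startup, let services read it via the global accessor, cancel on shutdown.
//
//   let token = create_background_services_cancel_token();
//   init_data_scanner().await; // reads the global token internally
//   init_auto_heal().await;
//   // ... on SIGTERM/SIGINT:
//   shutdown_background_services();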

View File

@@ -24,6 +24,7 @@ use tokio::{
},
time::interval,
};
use tokio_util::sync::CancellationToken;
use tracing::{error, info};
use uuid::Uuid;
@@ -32,7 +33,7 @@ use super::{
heal_ops::{HealSequence, new_bg_heal_sequence},
};
use crate::error::{Error, Result};
use crate::global::GLOBAL_MRFState;
use crate::global::{GLOBAL_MRFState, get_background_services_cancel_token};
use crate::heal::error::ERR_RETRY_HEALING;
use crate::heal::heal_commands::{HEAL_ITEM_BUCKET, HealScanMode};
use crate::heal::heal_ops::{BG_HEALING_UUID, HealSource};
@@ -54,6 +55,13 @@ use crate::{
pub static DEFAULT_MONITOR_NEW_DISK_INTERVAL: Duration = Duration::from_secs(10);
pub async fn init_auto_heal() {
info!("Initializing auto heal background task");
let Some(cancel_token) = get_background_services_cancel_token() else {
error!("Background services cancel token not initialized");
return;
};
init_background_healing().await;
let v = env::var("_RUSTFS_AUTO_DRIVE_HEALING").unwrap_or("on".to_string());
if v == "on" {
@@ -61,12 +69,16 @@ pub async fn init_auto_heal() {
GLOBAL_BackgroundHealState
.push_heal_local_disks(&get_local_disks_to_heal().await)
.await;
spawn(async {
monitor_local_disks_and_heal().await;
let cancel_clone = cancel_token.clone();
spawn(async move {
monitor_local_disks_and_heal(cancel_clone).await;
});
}
spawn(async {
GLOBAL_MRFState.heal_routine().await;
let cancel_clone = cancel_token.clone();
spawn(async move {
GLOBAL_MRFState.heal_routine_with_cancel(cancel_clone).await;
});
}
@@ -108,50 +120,66 @@ pub async fn get_local_disks_to_heal() -> Vec<Endpoint> {
disks_to_heal
}
async fn monitor_local_disks_and_heal() {
async fn monitor_local_disks_and_heal(cancel_token: CancellationToken) {
info!("Auto heal monitor started");
let mut interval = interval(DEFAULT_MONITOR_NEW_DISK_INTERVAL);
loop {
interval.tick().await;
let heal_disks = GLOBAL_BackgroundHealState.get_heal_local_disk_endpoints().await;
if heal_disks.is_empty() {
info!("heal local disks is empty");
interval.reset();
continue;
}
tokio::select! {
_ = cancel_token.cancelled() => {
info!("Auto heal monitor received shutdown signal, exiting gracefully");
break;
}
_ = interval.tick() => {
let heal_disks = GLOBAL_BackgroundHealState.get_heal_local_disk_endpoints().await;
if heal_disks.is_empty() {
info!("heal local disks is empty");
interval.reset();
continue;
}
info!("heal local disks: {:?}", heal_disks);
info!("heal local disks: {:?}", heal_disks);
let store = new_object_layer_fn().expect("errServerNotInitialized");
if let (_result, Some(err)) = store.heal_format(false).await.expect("heal format failed") {
error!("heal local disk format error: {}", err);
if err == Error::NoHealRequired {
} else {
info!("heal format err: {}", err.to_string());
let store = new_object_layer_fn().expect("errServerNotInitialized");
if let (_result, Some(err)) = store.heal_format(false).await.expect("heal format failed") {
error!("heal local disk format error: {}", err);
if err == Error::NoHealRequired {
} else {
info!("heal format err: {}", err.to_string());
interval.reset();
continue;
}
}
let mut futures = Vec::new();
for disk in heal_disks.into_ref().iter() {
let disk_clone = disk.clone();
let cancel_clone = cancel_token.clone();
futures.push(async move {
let disk_for_cancel = disk_clone.clone();
tokio::select! {
_ = cancel_clone.cancelled() => {
info!("Disk healing task cancelled for disk: {}", disk_for_cancel);
}
_ = async {
GLOBAL_BackgroundHealState
.set_disk_healing_status(disk_clone.clone(), true)
.await;
if heal_fresh_disk(&disk_clone).await.is_err() {
info!("heal_fresh_disk is err");
GLOBAL_BackgroundHealState
.set_disk_healing_status(disk_clone.clone(), false)
.await;
}
GLOBAL_BackgroundHealState.pop_heal_local_disks(&[disk_clone]).await;
} => {}
}
});
}
let _ = join_all(futures).await;
interval.reset();
continue;
}
}
let mut futures = Vec::new();
for disk in heal_disks.into_ref().iter() {
let disk_clone = disk.clone();
futures.push(async move {
GLOBAL_BackgroundHealState
.set_disk_healing_status(disk_clone.clone(), true)
.await;
if heal_fresh_disk(&disk_clone).await.is_err() {
info!("heal_fresh_disk is err");
GLOBAL_BackgroundHealState
.set_disk_healing_status(disk_clone.clone(), false)
.await;
return;
}
GLOBAL_BackgroundHealState.pop_heal_local_disks(&[disk_clone]).await;
});
}
let _ = join_all(futures).await;
interval.reset();
}
}

View File

@@ -20,14 +20,13 @@ use std::{
path::{Path, PathBuf},
pin::Pin,
sync::{
Arc, OnceLock,
Arc,
atomic::{AtomicBool, AtomicU32, AtomicU64, Ordering},
},
time::{Duration, SystemTime},
};
use time::{self, OffsetDateTime};
use tokio_util::sync::CancellationToken;
use super::{
data_scanner_metric::{ScannerMetric, ScannerMetrics, globalScannerMetrics},
@@ -51,7 +50,7 @@ use crate::{
metadata_sys,
},
event_notification::{EventArgs, send_event},
global::GLOBAL_LocalNodeName,
global::{GLOBAL_LocalNodeName, get_background_services_cancel_token},
store_api::{ObjectOptions, ObjectToDelete, StorageAPI},
};
use crate::{
@@ -128,8 +127,6 @@ lazy_static! {
pub static ref globalHealConfig: Arc<RwLock<Config>> = Arc::new(RwLock::new(Config::default()));
}
static GLOBAL_SCANNER_CANCEL_TOKEN: OnceLock<CancellationToken> = OnceLock::new();
struct DynamicSleeper {
factor: f64,
max_sleep: Duration,
@@ -198,21 +195,18 @@ fn new_dynamic_sleeper(factor: f64, max_wait: Duration, is_scanner: bool) -> Dyn
/// - Minimum sleep duration to avoid excessive CPU usage
/// - Proper error handling and logging
///
/// # Returns
/// A CancellationToken that can be used to gracefully shutdown the scanner
///
/// # Architecture
/// 1. Initialize with random seed for sleep intervals
/// 2. Run scanner cycles in a loop
/// 3. Use randomized sleep between cycles to avoid thundering herd
/// 4. Ensure minimum sleep duration to prevent CPU thrashing
pub async fn init_data_scanner() -> CancellationToken {
pub async fn init_data_scanner() {
info!("Initializing data scanner background task");
let cancel_token = CancellationToken::new();
GLOBAL_SCANNER_CANCEL_TOKEN
.set(cancel_token.clone())
.expect("Scanner already initialized");
let Some(cancel_token) = get_background_services_cancel_token() else {
error!("Background services cancel token not initialized");
return;
};
let cancel_clone = cancel_token.clone();
tokio::spawn(async move {
@@ -256,8 +250,6 @@ pub async fn init_data_scanner() -> CancellationToken {
info!("Data scanner background task stopped gracefully");
});
cancel_token
}
/// Run a single data scanner cycle
@@ -282,7 +274,7 @@ async fn run_data_scanner_cycle() {
};
// Check for cancellation before starting expensive operations
if let Some(token) = GLOBAL_SCANNER_CANCEL_TOKEN.get() {
if let Some(token) = get_background_services_cancel_token() {
if token.is_cancelled() {
debug!("Scanner cancelled before starting cycle");
return;
@@ -397,9 +389,8 @@ async fn execute_namespace_scan(
cycle: u64,
scan_mode: HealScanMode,
) -> Result<()> {
let cancel_token = GLOBAL_SCANNER_CANCEL_TOKEN
.get()
.ok_or_else(|| Error::other("Scanner not initialized"))?;
let cancel_token =
get_background_services_cancel_token().ok_or_else(|| Error::other("Background services not initialized"))?;
tokio::select! {
result = store.ns_scanner(tx, cycle as usize, scan_mode) => {

View File

@@ -25,7 +25,8 @@ use std::time::Duration;
use tokio::sync::RwLock;
use tokio::sync::mpsc::{Receiver, Sender};
use tokio::time::sleep;
use tracing::error;
use tokio_util::sync::CancellationToken;
use tracing::{error, info};
use uuid::Uuid;
pub const MRF_OPS_QUEUE_SIZE: u64 = 100000;
@@ -87,56 +88,96 @@ impl MRFState {
let _ = self.tx.send(op).await;
}
pub async fn heal_routine(&self) {
/// Enhanced heal routine with cancellation support
///
/// This method implements the same healing logic as the original heal_routine,
/// but adds proper cancellation support via CancellationToken.
/// The core logic remains identical to maintain compatibility.
pub async fn heal_routine_with_cancel(&self, cancel_token: CancellationToken) {
info!("MRF heal routine started with cancellation support");
loop {
// rx used only there,
if let Some(op) = self.rx.write().await.recv().await {
if op.bucket == RUSTFS_META_BUCKET {
for pattern in &*PATTERNS {
if pattern.is_match(&op.object) {
return;
tokio::select! {
_ = cancel_token.cancelled() => {
info!("MRF heal routine received shutdown signal, exiting gracefully");
break;
}
op_result = async {
let mut rx_guard = self.rx.write().await;
rx_guard.recv().await
} => {
if let Some(op) = op_result {
// Special path filtering (original logic)
if op.bucket == RUSTFS_META_BUCKET {
for pattern in &*PATTERNS {
if pattern.is_match(&op.object) {
continue; // Skip this operation, continue with next
}
}
}
}
}
let now = Utc::now();
if now.sub(op.queued).num_seconds() < 1 {
sleep(Duration::from_secs(1)).await;
}
// Network reconnection delay (original logic)
let now = Utc::now();
if now.sub(op.queued).num_seconds() < 1 {
tokio::select! {
_ = cancel_token.cancelled() => {
info!("MRF heal routine cancelled during reconnection delay");
break;
}
_ = sleep(Duration::from_secs(1)) => {}
}
}
let scan_mode = if op.bitrot_scan { HEAL_DEEP_SCAN } else { HEAL_NORMAL_SCAN };
if op.object.is_empty() {
if let Err(err) = heal_bucket(&op.bucket).await {
error!("heal bucket failed, bucket: {}, err: {:?}", op.bucket, err);
}
} else if op.versions.is_empty() {
if let Err(err) =
heal_object(&op.bucket, &op.object, &op.version_id.clone().unwrap_or_default(), scan_mode).await
{
error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
}
} else {
let vers = op.versions.len() / 16;
if vers > 0 {
for i in 0..vers {
let start = i * 16;
let end = start + 16;
// Core healing logic (original logic preserved)
let scan_mode = if op.bitrot_scan { HEAL_DEEP_SCAN } else { HEAL_NORMAL_SCAN };
if op.object.is_empty() {
// Heal bucket (original logic)
if let Err(err) = heal_bucket(&op.bucket).await {
error!("heal bucket failed, bucket: {}, err: {:?}", op.bucket, err);
}
} else if op.versions.is_empty() {
// Heal single object (original logic)
if let Err(err) = heal_object(
&op.bucket,
&op.object,
&Uuid::from_slice(&op.versions[start..end]).expect("").to_string(),
scan_mode,
)
.await
{
&op.version_id.clone().unwrap_or_default(),
scan_mode
).await {
error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
}
} else {
// Heal multiple versions (original logic)
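// op.versions packs version IDs as a flat byte buffer: every 16 bytes is one UUID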
let vers = op.versions.len() / 16;
if vers > 0 {
for i in 0..vers {
// Check for cancellation before each version
if cancel_token.is_cancelled() {
info!("MRF heal routine cancelled during version processing");
return;
}
let start = i * 16;
let end = start + 16;
if let Err(err) = heal_object(
&op.bucket,
&op.object,
&Uuid::from_slice(&op.versions[start..end]).expect("versions buffer must hold 16-byte UUID slices").to_string(),
scan_mode,
).await {
error!("heal object failed, bucket: {}, object: {}, err: {:?}", op.bucket, op.object, err);
}
}
}
}
} else {
info!("MRF heal routine channel closed, exiting");
break;
}
}
} else {
return;
}
}
info!("MRF heal routine stopped gracefully");
}
}
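The select-based cancellable receive used throughout the routine above, isolated as a sketch (names are illustrative):

```rust
use tokio::sync::mpsc::Receiver;
use tokio_util::sync::CancellationToken;

/// Receive the next queued op unless shutdown wins the race.
/// Returns None on cancellation or when the channel has closed.
async fn recv_or_cancel<T>(rx: &mut Receiver<T>, cancel: &CancellationToken) -> Option<T> {
    tokio::select! {
        _ = cancel.cancelled() => None,
        op = rx.recv() => op, // None here means the sender side closed
    }
}
```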

View File

@@ -22,8 +22,8 @@ use rustfs_protos::{
proto_gen::node_service::{
CheckPartsRequest, DeletePathsRequest, DeleteRequest, DeleteVersionRequest, DeleteVersionsRequest, DeleteVolumeRequest,
DiskInfoRequest, ListDirRequest, ListVolumesRequest, MakeVolumeRequest, MakeVolumesRequest, NsScannerRequest,
ReadAllRequest, ReadMultipleRequest, ReadVersionRequest, ReadXlRequest, RenameDataRequest, RenameFileRequest,
StatVolumeRequest, UpdateMetadataRequest, VerifyFileRequest, WriteAllRequest, WriteMetadataRequest,
ReadAllRequest, ReadMultipleRequest, ReadPartsRequest, ReadVersionRequest, ReadXlRequest, RenameDataRequest,
RenameFileRequest, StatVolumeRequest, UpdateMetadataRequest, VerifyFileRequest, WriteAllRequest, WriteMetadataRequest,
},
};
@@ -44,7 +44,7 @@ use crate::{
heal_commands::{HealScanMode, HealingTracker},
},
};
use rustfs_filemeta::{FileInfo, RawFileInfo};
use rustfs_filemeta::{FileInfo, ObjectPartInfo, RawFileInfo};
use rustfs_protos::proto_gen::node_service::RenamePartRequest;
use rustfs_rio::{HttpReader, HttpWriter};
use tokio::{
@@ -790,6 +790,27 @@ impl DiskAPI for RemoteDisk {
Ok(check_parts_resp)
}
#[tracing::instrument(skip(self))]
async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
let mut client = node_service_time_out_client(&self.addr)
.await
.map_err(|err| Error::other(format!("can not get client, err: {err}")))?;
let request = Request::new(ReadPartsRequest {
disk: self.endpoint.to_string(),
bucket: bucket.to_string(),
paths: paths.to_vec(),
});
let response = client.read_parts(request).await?.into_inner();
if !response.success {
return Err(response.error.unwrap_or_default().into());
}
let read_parts_resp = rmp_serde::from_slice::<Vec<ObjectPartInfo>>(&response.object_part_infos)?;
Ok(read_parts_resp)
}
#[tracing::instrument(skip(self))]
async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp> {
info!("check_parts");

View File

@@ -404,7 +404,42 @@ impl Node for NodeService {
}))
}
}
async fn read_parts(&self, request: Request<ReadPartsRequest>) -> Result<Response<ReadPartsResponse>, Status> {
let request = request.into_inner();
if let Some(disk) = self.find_disk(&request.disk).await {
match disk.read_parts(&request.bucket, &request.paths).await {
Ok(data) => {
let data = match rmp_serde::to_vec(&data) {
Ok(data) => data,
Err(err) => {
return Ok(tonic::Response::new(ReadPartsResponse {
success: false,
object_part_infos: Bytes::new(),
error: Some(DiskError::other(format!("encode data failed: {err}")).into()),
}));
}
};
Ok(tonic::Response::new(ReadPartsResponse {
success: true,
object_part_infos: Bytes::copy_from_slice(&data),
error: None,
}))
}
Err(err) => Ok(tonic::Response::new(ReadPartsResponse {
success: false,
object_part_infos: Bytes::new(),
error: Some(err.into()),
})),
}
} else {
Ok(tonic::Response::new(ReadPartsResponse {
success: false,
object_part_infos: Bytes::new(),
error: Some(DiskError::other("can not find disk".to_string()).into()),
}))
}
}
async fn check_parts(&self, request: Request<CheckPartsRequest>) -> Result<Response<CheckPartsResponse>, Status> {
let request = request.into_inner();
if let Some(disk) = self.find_disk(&request.disk).await {

View File

@@ -24,13 +24,13 @@ use crate::disk::{
};
use crate::erasure_coding;
use crate::erasure_coding::bitrot_verify;
use crate::error::ObjectApiError;
use crate::error::{Error, Result};
use crate::error::{ObjectApiError, is_err_object_not_found};
use crate::global::GLOBAL_MRFState;
use crate::global::{GLOBAL_LocalNodeName, GLOBAL_TierConfigMgr};
use crate::heal::data_usage_cache::DataUsageCache;
use crate::heal::heal_ops::{HealEntryFn, HealSequence};
use crate::store_api::ObjectToDelete;
use crate::store_api::{ListPartsInfo, ObjectToDelete};
use crate::{
bucket::lifecycle::bucket_lifecycle_ops::{gen_transition_objname, get_transitioned_object_reader, put_restore_opts},
cache_value::metacache_set::{ListPathRawOptions, list_path_raw},
@@ -119,6 +119,7 @@ use tracing::{debug, info, warn};
use uuid::Uuid;
pub const DEFAULT_READ_BUFFER_SIZE: usize = 1024 * 1024;
pub const MAX_PARTS_COUNT: usize = 10000;
#[derive(Debug, Clone)]
pub struct SetDisks {
@@ -316,6 +317,9 @@ impl SetDisks {
.filter(|v| v.as_ref().is_some_and(|d| d.is_local()))
.collect()
}
fn default_read_quorum(&self) -> usize {
self.set_drive_count - self.default_parity_count
}
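// e.g. (hypothetical numbers) set_drive_count = 12, default_parity_count = 4
// => default read quorum = 12 - 4 = 8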
fn default_write_quorum(&self) -> usize {
let mut data_count = self.set_drive_count - self.default_parity_count;
if data_count == self.default_parity_count {
@@ -550,6 +554,183 @@ impl SetDisks {
}
}
async fn read_parts(
disks: &[Option<DiskStore>],
bucket: &str,
part_meta_paths: &[String],
part_numbers: &[usize],
read_quorum: usize,
) -> disk::error::Result<Vec<ObjectPartInfo>> {
let mut futures = Vec::with_capacity(disks.len());
for (i, disk) in disks.iter().enumerate() {
futures.push(async move {
if let Some(disk) = disk {
disk.read_parts(bucket, part_meta_paths).await
} else {
Err(DiskError::DiskNotFound)
}
});
}
let mut errs = Vec::with_capacity(disks.len());
let mut object_parts = Vec::with_capacity(disks.len());
let results = join_all(futures).await;
for result in results {
match result {
Ok(res) => {
errs.push(None);
object_parts.push(res);
}
Err(e) => {
errs.push(Some(e));
object_parts.push(vec![]);
}
}
}
if let Some(err) = reduce_read_quorum_errs(&errs, OBJECT_OP_IGNORED_ERRS, read_quorum) {
return Err(err);
}
let mut ret = vec![ObjectPartInfo::default(); part_meta_paths.len()];
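// Per-part quorum vote across disks: a successful read votes with the part's
// etag; a failed or short read votes with the meta path itself as a "missing" marker.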
for (part_idx, part_info) in part_meta_paths.iter().enumerate() {
let mut part_meta_quorum = HashMap::new();
let mut part_infos = Vec::new();
for (j, parts) in object_parts.iter().enumerate() {
if parts.len() != part_meta_paths.len() {
*part_meta_quorum.entry(part_info.clone()).or_insert(0) += 1;
continue;
}
if !parts[part_idx].etag.is_empty() {
*part_meta_quorum.entry(parts[part_idx].etag.clone()).or_insert(0) += 1;
part_infos.push(parts[part_idx].clone());
continue;
}
*part_meta_quorum.entry(part_info.clone()).or_insert(0) += 1;
}
let mut max_quorum = 0;
let mut max_etag = None;
let mut max_part_meta = None;
for (etag, quorum) in part_meta_quorum.iter() {
if quorum > &max_quorum {
max_quorum = *quorum;
max_etag = Some(etag);
max_part_meta = Some(etag);
}
}
let mut found = None;
for info in part_infos.iter() {
if let Some(etag) = max_etag
&& info.etag == *etag
{
found = Some(info.clone());
break;
}
if let Some(part_meta) = max_part_meta
&& info.etag.is_empty()
&& part_meta.ends_with(format!("part.{0}.meta", info.number).as_str())
{
found = Some(info.clone());
break;
}
}
if let (Some(found), Some(max_etag)) = (found, max_etag)
&& !found.etag.is_empty()
&& part_meta_quorum.get(max_etag).unwrap_or(&0) >= &read_quorum
{
ret[part_idx] = found;
} else {
ret[part_idx] = ObjectPartInfo {
number: part_numbers[part_idx],
error: Some(format!("part.{} not found", part_numbers[part_idx])),
..Default::default()
};
}
}
Ok(ret)
}
async fn list_parts(disks: &[Option<DiskStore>], part_path: &str, read_quorum: usize) -> disk::error::Result<Vec<usize>> {
let mut futures = Vec::with_capacity(disks.len());
for (i, disk) in disks.iter().enumerate() {
futures.push(async move {
if let Some(disk) = disk {
disk.list_dir(RUSTFS_META_MULTIPART_BUCKET, RUSTFS_META_MULTIPART_BUCKET, part_path, -1)
.await
} else {
Err(DiskError::DiskNotFound)
}
});
}
let mut errs = Vec::with_capacity(disks.len());
let mut object_parts = Vec::with_capacity(disks.len());
let results = join_all(futures).await;
for result in results {
match result {
Ok(res) => {
errs.push(None);
object_parts.push(res);
}
Err(e) => {
errs.push(Some(e));
object_parts.push(vec![]);
}
}
}
if let Some(err) = reduce_read_quorum_errs(&errs, OBJECT_OP_IGNORED_ERRS, read_quorum) {
return Err(err);
}
let mut part_quorum_map: HashMap<usize, usize> = HashMap::new();
for drive_parts in object_parts {
let mut parts_with_meta_count: HashMap<usize, usize> = HashMap::new();
// part files can be either part.N or part.N.meta
for part_path in drive_parts {
if let Some(num_str) = part_path.strip_prefix("part.") {
if let Some(meta_idx) = num_str.find(".meta") {
if let Ok(part_num) = num_str[..meta_idx].parse::<usize>() {
*parts_with_meta_count.entry(part_num).or_insert(0) += 1;
}
} else if let Ok(part_num) = num_str.parse::<usize>() {
*parts_with_meta_count.entry(part_num).or_insert(0) += 1;
}
}
}
// Count a part only when both part.N and part.N.meta are present (cnt >= 2)
for (&part_num, &cnt) in &parts_with_meta_count {
if cnt >= 2 {
*part_quorum_map.entry(part_num).or_insert(0) += 1;
}
}
}
let mut part_numbers = Vec::with_capacity(part_quorum_map.len());
for (part_num, count) in part_quorum_map {
if count >= read_quorum {
part_numbers.push(part_num);
}
}
part_numbers.sort();
Ok(part_numbers)
}
#[tracing::instrument(skip(disks, meta))]
async fn rename_part(
disks: &[Option<DiskStore>],
@@ -1942,6 +2123,8 @@ impl SetDisks {
let till_offset = erasure.shard_file_offset(part_offset, part_length, part_size);
let read_offset = (part_offset / erasure.block_size) * erasure.shard_size();
let mut readers = Vec::with_capacity(disks.len());
let mut errors = Vec::with_capacity(disks.len());
for (idx, disk_op) in disks.iter().enumerate() {
@@ -1950,7 +2133,7 @@ impl SetDisks {
disk_op.as_ref(),
bucket,
&format!("{}/{}/part.{}", object, files[idx].data_dir.unwrap_or_default(), part_number),
part_offset,
read_offset,
till_offset,
erasure.shard_size(),
HashAlgorithm::HighwayHash256,
@@ -4099,6 +4282,8 @@ impl ObjectIO for SetDisks {
}
}
drop(writers); // drop writers to close all files, this is to prevent FileAccessDenied errors when renaming data
let (online_disks, _, op_old_dir) = Self::rename_data(
&shuffle_disks,
RUSTFS_META_TMP_BUCKET,
@@ -4882,7 +5067,7 @@ impl StorageAPI for SetDisks {
) -> Result<PartInfo> {
let upload_id_path = Self::get_upload_id_dir(bucket, object, upload_id);
let (mut fi, _) = self.check_upload_id_exists(bucket, object, upload_id, true).await?;
let (fi, _) = self.check_upload_id_exists(bucket, object, upload_id, true).await?;
let write_quorum = fi.write_quorum(self.default_write_quorum());
@@ -5035,9 +5220,11 @@ impl StorageAPI for SetDisks {
// debug!("put_object_part part_info {:?}", part_info);
fi.parts = vec![part_info];
// fi.parts = vec![part_info.clone()];
let fi_buff = fi.marshal_msg()?;
let part_info_buff = part_info.marshal_msg()?;
drop(writers); // drop writers to close all files
let part_path = format!("{}/{}/{}", upload_id_path, fi.data_dir.unwrap_or_default(), part_suffix);
let _ = Self::rename_part(
@@ -5046,7 +5233,7 @@ impl StorageAPI for SetDisks {
&tmp_part_path,
RUSTFS_META_MULTIPART_BUCKET,
&part_path,
fi_buff.into(),
part_info_buff.into(),
write_quorum,
)
.await?;
@@ -5064,6 +5251,123 @@ impl StorageAPI for SetDisks {
Ok(ret)
}
#[tracing::instrument(skip(self))]
async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
mut max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo> {
let (fi, _) = self.check_upload_id_exists(bucket, object, upload_id, false).await?;
let upload_id_path = Self::get_upload_id_dir(bucket, object, upload_id);
if max_parts > MAX_PARTS_COUNT {
max_parts = MAX_PARTS_COUNT;
}
let part_number_marker = part_number_marker.unwrap_or_default();
let mut ret = ListPartsInfo {
bucket: bucket.to_owned(),
object: object.to_owned(),
upload_id: upload_id.to_owned(),
max_parts,
part_number_marker,
user_defined: fi.metadata.clone(),
..Default::default()
};
if max_parts == 0 {
return Ok(ret);
}
let online_disks = self.get_disks_internal().await;
let read_quorum = fi.read_quorum(self.default_read_quorum());
let part_path = format!(
"{}{}",
path_join_buf(&[
&upload_id_path,
fi.data_dir.map(|v| v.to_string()).unwrap_or_default().as_str(),
]),
SLASH_SEPARATOR
);
let mut part_numbers = match Self::list_parts(&online_disks, &part_path, read_quorum).await {
Ok(parts) => parts,
Err(err) => {
if err == DiskError::FileNotFound {
return Ok(ret);
}
return Err(to_object_err(err.into(), vec![bucket, object]));
}
};
if part_numbers.is_empty() {
return Ok(ret);
}
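// Resume after part_number_marker: a non-zero marker that is not in the listing yields an empty page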
let start_op = part_numbers.iter().find(|&&v| v != 0 && v == part_number_marker);
if part_number_marker > 0 && start_op.is_none() {
return Ok(ret);
}
if let Some(start) = start_op {
if start + 1 > part_numbers.len() {
return Ok(ret);
}
part_numbers = part_numbers[start + 1..].to_vec();
}
let mut parts = Vec::with_capacity(part_numbers.len());
let part_meta_paths = part_numbers
.iter()
.map(|v| format!("{part_path}part.{v}.meta"))
.collect::<Vec<String>>();
let object_parts =
Self::read_parts(&online_disks, RUSTFS_META_MULTIPART_BUCKET, &part_meta_paths, &part_numbers, read_quorum)
.await
.map_err(|e| to_object_err(e.into(), vec![bucket, object, upload_id]))?;
let mut count = max_parts;
for (i, part) in object_parts.iter().enumerate() {
if let Some(err) = &part.error {
warn!("list_object_parts part error: {:?}", &err);
}
parts.push(PartInfo {
etag: Some(part.etag.clone()),
part_num: part.number,
last_mod: part.mod_time,
size: part.size,
actual_size: part.actual_size,
});
count -= 1;
if count == 0 {
break;
}
}
ret.parts = parts;
if object_parts.len() > ret.parts.len() {
ret.is_truncated = true;
ret.next_part_number_marker = ret.parts.last().map(|v| v.part_num).unwrap_or_default();
}
Ok(ret)
}
#[tracing::instrument(skip(self))]
async fn list_multipart_uploads(
&self,
@@ -5139,8 +5443,8 @@ impl StorageAPI for SetDisks {
let splits: Vec<&str> = upload_id.split("x").collect();
if splits.len() == 2 {
if let Ok(unix) = splits[1].parse::<i64>() {
OffsetDateTime::from_unix_timestamp(unix)?
if let Ok(unix) = splits[1].parse::<i128>() {
OffsetDateTime::from_unix_timestamp_nanos(unix)?
} else {
now
}
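A sketch of the changed timestamp parse in isolation; the 'x'-separated upload-id layout comes from the surrounding code, while the function name is made up:

```rust
use time::OffsetDateTime;

// Upload ids embed a unix timestamp in nanoseconds after the 'x' separator.
fn upload_id_timestamp(upload_id: &str) -> Option<OffsetDateTime> {
    upload_id
        .split('x')
        .nth(1)
        .and_then(|s| s.parse::<i128>().ok())
        .and_then(|nanos| OffsetDateTime::from_unix_timestamp_nanos(nanos).ok())
}
```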
@@ -5359,49 +5663,31 @@ impl StorageAPI for SetDisks {
let part_path = format!("{}/{}/", upload_id_path, fi.data_dir.unwrap_or(Uuid::nil()));
let files: Vec<String> = uploaded_parts.iter().map(|v| format!("part.{}.meta", v.part_num)).collect();
let part_meta_paths = uploaded_parts
.iter()
.map(|v| format!("{part_path}part.{0}.meta", v.part_num))
.collect::<Vec<String>>();
// readMultipleFiles
let part_numbers = uploaded_parts.iter().map(|v| v.part_num).collect::<Vec<usize>>();
let req = ReadMultipleReq {
bucket: RUSTFS_META_MULTIPART_BUCKET.to_string(),
prefix: part_path,
files,
max_size: 1 << 20,
metadata_only: true,
abort404: true,
max_results: 0,
};
let object_parts =
Self::read_parts(&disks, RUSTFS_META_MULTIPART_BUCKET, &part_meta_paths, &part_numbers, write_quorum).await?;
let part_files_resp = Self::read_multiple_files(&disks, req, write_quorum).await;
if part_files_resp.len() != uploaded_parts.len() {
if object_parts.len() != uploaded_parts.len() {
return Err(Error::other("read part count does not match uploaded part count"));
}
for (i, res) in part_files_resp.iter().enumerate() {
let part_id = uploaded_parts[i].part_num;
if !res.error.is_empty() || !res.exists {
error!("complete_multipart_upload part_id err {:?}, exists={}", res, res.exists);
return Err(Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned()));
for (i, part) in object_parts.iter().enumerate() {
if let Some(err) = &part.error {
error!("complete_multipart_upload part error: {:?}", &err);
}
let part_fi = FileInfo::unmarshal(&res.data).map_err(|e| {
if uploaded_parts[i].part_num != part.number {
error!(
"complete_multipart_upload FileInfo::unmarshal err {:?}, part_id={}, bucket={}, object={}",
e, part_id, bucket, object
"complete_multipart_upload part_id err part_id != part_num {} != {}",
uploaded_parts[i].part_num, part.number
);
Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned())
})?;
let part = &part_fi.parts[0];
let part_num = part.number;
// debug!("complete part {} file info {:?}", part_num, &part_fi);
// debug!("complete part {} object info {:?}", part_num, &part);
if part_id != part_num {
error!("complete_multipart_upload part_id err part_id != part_num {} != {}", part_id, part_num);
return Err(Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned()));
return Err(Error::InvalidPart(uploaded_parts[i].part_num, bucket.to_owned(), object.to_owned()));
}
fi.add_object_part(

View File

@@ -17,6 +17,7 @@ use std::{collections::HashMap, sync::Arc};
use crate::disk::error_reduce::count_errs;
use crate::error::{Error, Result};
use crate::store_api::ListPartsInfo;
use crate::{
disk::{
DiskAPI, DiskInfo, DiskOption, DiskStore,
@@ -619,6 +620,20 @@ impl StorageAPI for Sets {
Ok((del_objects, del_errs))
}
async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo> {
self.get_disks_by_key(object)
.list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
.await
}
#[tracing::instrument(skip(self))]
async fn list_multipart_uploads(
&self,

View File

@@ -38,7 +38,7 @@ use crate::new_object_layer_fn;
use crate::notification_sys::get_global_notification_sys;
use crate::pools::PoolMeta;
use crate::rebalance::RebalanceMeta;
use crate::store_api::{ListMultipartsInfo, ListObjectVersionsInfo, MultipartInfo, ObjectIO};
use crate::store_api::{ListMultipartsInfo, ListObjectVersionsInfo, ListPartsInfo, MultipartInfo, ObjectIO};
use crate::store_init::{check_disk_fatal_errs, ec_drives_no_config};
use crate::{
bucket::{lifecycle::bucket_lifecycle_ops::TransitionState, metadata::BucketMetadata},
@@ -1810,6 +1810,47 @@ impl StorageAPI for ECStore {
Ok((del_objects, del_errs))
}
#[tracing::instrument(skip(self))]
async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo> {
check_list_parts_args(bucket, object, upload_id)?;
// TODO: nslock
if self.single_pool() {
return self.pools[0]
.list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
.await;
}
for pool in self.pools.iter() {
if self.is_suspended(pool.pool_idx).await {
continue;
}
match pool
.list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
.await
{
Ok(res) => return Ok(res),
Err(err) => {
if is_err_invalid_upload_id(&err) {
continue;
}
return Err(err);
}
};
}
Err(StorageError::InvalidUploadID(bucket.to_owned(), object.to_owned(), upload_id.to_owned()))
}
#[tracing::instrument(skip(self))]
async fn list_multipart_uploads(
&self,
@@ -2508,14 +2549,14 @@ fn check_object_name_for_length_and_slash(bucket: &str, object: &str) -> Result<
#[cfg(target_os = "windows")]
{
if object.contains('\\')
|| object.contains(':')
if object.contains(':')
|| object.contains('*')
|| object.contains('?')
|| object.contains('"')
|| object.contains('|')
|| object.contains('<')
|| object.contains('>')
// || object.contains('\\')
{
return Err(StorageError::ObjectNameInvalid(bucket.to_owned(), object.to_owned()));
}
@@ -2549,9 +2590,9 @@ fn check_bucket_and_object_names(bucket: &str, object: &str) -> Result<()> {
return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
}
if cfg!(target_os = "windows") && object.contains('\\') {
return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
}
// if cfg!(target_os = "windows") && object.contains('\\') {
// return Err(StorageError::ObjectNameInvalid(bucket.to_string(), object.to_string()));
// }
Ok(())
}

View File

@@ -201,7 +201,7 @@ impl GetObjectReader {
}
}
#[derive(Debug)]
#[derive(Debug, Clone)]
pub struct HTTPRangeSpec {
pub is_suffix_length: bool,
pub start: i64,
@@ -548,6 +548,7 @@ impl ObjectInfo {
mod_time: part.mod_time,
checksums: part.checksums.clone(),
number: part.number,
error: part.error.clone(),
})
.collect();
@@ -844,6 +845,48 @@ pub struct ListMultipartsInfo {
// encoding_type: String, // Not supported yet.
}
/// ListPartsInfo - represents list of all parts.
#[derive(Debug, Clone, Default)]
pub struct ListPartsInfo {
/// Name of the bucket.
pub bucket: String,
/// Name of the object.
pub object: String,
/// Upload ID identifying the multipart upload whose parts are being listed.
pub upload_id: String,
/// The class of storage used to store the object.
pub storage_class: String,
/// Part number after which listing begins.
pub part_number_marker: usize,
/// When a list is truncated, this element specifies the last part in the list,
/// as well as the value to use for the part-number-marker request parameter
/// in a subsequent request.
pub next_part_number_marker: usize,
/// Maximum number of parts that were allowed in the response.
pub max_parts: usize,
/// Indicates whether the returned list of parts is truncated.
pub is_truncated: bool,
/// List of all parts.
pub parts: Vec<PartInfo>,
/// Any metadata set during InitMultipartUpload, including encryption headers.
pub user_defined: HashMap<String, String>,
/// ChecksumAlgorithm if set
pub checksum_algorithm: String,
/// ChecksumType if set
pub checksum_type: String,
}
#[derive(Debug, Default, Clone)]
pub struct ObjectToDelete {
pub object_name: String,
@@ -923,10 +966,7 @@ pub trait StorageAPI: ObjectIO {
) -> Result<ListObjectVersionsInfo>;
// Walk TODO:
// GetObjectNInfo ObjectIO
async fn get_object_info(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<ObjectInfo>;
// PutObject ObjectIO
// CopyObject
async fn copy_object(
&self,
src_bucket: &str,
@@ -949,7 +989,6 @@ pub trait StorageAPI: ObjectIO {
// TransitionObject TODO:
// RestoreTransitionedObject TODO:
// ListMultipartUploads
async fn list_multipart_uploads(
&self,
bucket: &str,
@@ -960,7 +999,6 @@ pub trait StorageAPI: ObjectIO {
max_uploads: usize,
) -> Result<ListMultipartsInfo>;
async fn new_multipart_upload(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<MultipartUploadResult>;
// CopyObjectPart
async fn copy_object_part(
&self,
src_bucket: &str,
@@ -984,7 +1022,6 @@ pub trait StorageAPI: ObjectIO {
data: &mut PutObjReader,
opts: &ObjectOptions,
) -> Result<PartInfo>;
// GetMultipartInfo
async fn get_multipart_info(
&self,
bucket: &str,
@@ -992,7 +1029,15 @@ pub trait StorageAPI: ObjectIO {
upload_id: &str,
opts: &ObjectOptions,
) -> Result<MultipartInfo>;
// ListObjectParts
async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo>;
async fn abort_multipart_upload(&self, bucket: &str, object: &str, upload_id: &str, opts: &ObjectOptions) -> Result<()>;
async fn complete_multipart_upload(
self: Arc<Self>,
@@ -1002,13 +1047,10 @@ pub trait StorageAPI: ObjectIO {
uploaded_parts: Vec<CompletePart>,
opts: &ObjectOptions,
) -> Result<ObjectInfo>;
// GetDisks
async fn get_disks(&self, pool_idx: usize, set_idx: usize) -> Result<Vec<Option<DiskStore>>>;
// SetDriveCounts
fn set_drive_counts(&self) -> Vec<usize>;
// Health TODO:
// PutObjectMetadata
async fn put_object_metadata(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<ObjectInfo>;
// DecomTieredObject
async fn get_object_tags(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<String>;

View File

@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "File metadata management for RustFS, providing efficient storage and retrieval of file metadata in a distributed system."
keywords = ["file-metadata", "storage", "retrieval", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "filesystem"]
documentation = "https://docs.rs/rustfs-filemeta/latest/rustfs_filemeta/"
[dependencies]
crc32fast = { workspace = true }

View File

@@ -3,7 +3,7 @@
# RustFS FileMeta - File Metadata Management
<p align="center">
<strong>High-performance file metadata management for RustFS distributed object storage</strong>
<strong>Advanced file metadata management and indexing module for RustFS distributed object storage</strong>
</p>
<p align="center">
@@ -17,246 +17,21 @@
## 📖 Overview
**RustFS FileMeta** is the metadata management module for the [RustFS](https://rustfs.com) distributed object storage system. It provides efficient storage, retrieval, and management of file metadata, supporting features like versioning, tagging, and extended attributes with high performance and reliability.
> **Note:** This is a core submodule of RustFS that provides essential metadata management capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS FileMeta** provides advanced file metadata management and indexing capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
### 📝 Metadata Management
- **File Information**: Complete file metadata including size, timestamps, and checksums
- **Object Versioning**: Version-aware metadata management
- **Extended Attributes**: Custom metadata and tagging support
- **Inline Metadata**: Optimized storage for small metadata
### 🚀 Performance Features
- **FlatBuffers Serialization**: Zero-copy metadata serialization
- **Efficient Storage**: Optimized metadata storage layout
- **Fast Lookups**: High-performance metadata queries
- **Batch Operations**: Bulk metadata operations
### 🔧 Advanced Capabilities
- **Schema Evolution**: Forward and backward compatible metadata schemas
- **Compression**: Metadata compression for space efficiency
- **Validation**: Metadata integrity verification
- **Migration**: Seamless metadata format migration
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
rustfs-filemeta = "0.1.0"
```
## 🔧 Usage
### Basic Metadata Operations
```rust
use rustfs_filemeta::{FileInfo, XLMeta};
use std::collections::HashMap;
fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create file metadata
let mut file_info = FileInfo::new();
file_info.name = "example.txt".to_string();
file_info.size = 1024;
file_info.mod_time = chrono::Utc::now();
// Add custom metadata
let mut user_defined = HashMap::new();
user_defined.insert("author".to_string(), "john@example.com".to_string());
user_defined.insert("department".to_string(), "engineering".to_string());
file_info.user_defined = user_defined;
// Create XL metadata
let xl_meta = XLMeta::new(file_info);
// Serialize metadata
let serialized = xl_meta.serialize()?;
// Deserialize metadata
let deserialized = XLMeta::deserialize(&serialized)?;
println!("File: {}, Size: {}", deserialized.file_info.name, deserialized.file_info.size);
Ok(())
}
```
### Advanced Metadata Management
```rust
use rustfs_filemeta::{XLMeta, FileInfo, VersionInfo};
async fn advanced_metadata_example() -> Result<(), Box<dyn std::error::Error>> {
// Create versioned metadata
let mut xl_meta = XLMeta::new(FileInfo::default());
// Set version information
xl_meta.set_version_info(VersionInfo {
version_id: "v1.0.0".to_string(),
is_latest: true,
delete_marker: false,
restore_ongoing: false,
});
// Add checksums
xl_meta.add_checksum("md5", "d41d8cd98f00b204e9800998ecf8427e");
xl_meta.add_checksum("sha256", "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855");
// Set object tags
xl_meta.set_tags(vec![
("Environment".to_string(), "Production".to_string()),
("Owner".to_string(), "DataTeam".to_string()),
]);
// Set retention information
xl_meta.set_retention_info(
chrono::Utc::now() + chrono::Duration::days(365),
"GOVERNANCE".to_string(),
);
// Validate metadata
xl_meta.validate()?;
Ok(())
}
```
### Inline Metadata Operations
```rust
use rustfs_filemeta::{InlineMetadata, MetadataSize};
fn inline_metadata_example() -> Result<(), Box<dyn std::error::Error>> {
// Create inline metadata for small files
let mut inline_meta = InlineMetadata::new();
// Set basic properties
inline_meta.set_content_type("text/plain");
inline_meta.set_content_encoding("gzip");
inline_meta.set_cache_control("max-age=3600");
// Add custom headers
inline_meta.add_header("x-custom-field", "custom-value");
inline_meta.add_header("x-app-version", "1.2.3");
// Check if metadata fits inline storage
if inline_meta.size() <= MetadataSize::INLINE_THRESHOLD {
println!("Metadata can be stored inline");
} else {
println!("Metadata requires separate storage");
}
// Serialize for storage
let bytes = inline_meta.to_bytes()?;
// Deserialize from storage
let restored = InlineMetadata::from_bytes(&bytes)?;
Ok(())
}
```
## 🏗️ Architecture
### Metadata Storage Layout
```
FileMeta Architecture:
┌─────────────────────────────────────────────────────────────┐
│ Metadata API Layer │
├─────────────────────────────────────────────────────────────┤
│ XL Metadata │ Inline Metadata │ Version Info │
├─────────────────────────────────────────────────────────────┤
│ FlatBuffers Serialization │
├─────────────────────────────────────────────────────────────┤
│ Compression │ Validation │ Migration │
├─────────────────────────────────────────────────────────────┤
│ Storage Backend Integration │
└─────────────────────────────────────────────────────────────┘
```
### Metadata Types
| Type | Use Case | Storage | Performance |
|------|----------|---------|-------------|
| XLMeta | Large objects with rich metadata | Separate file | High durability |
| InlineMeta | Small objects with minimal metadata | Embedded | Fastest access |
| VersionMeta | Object versioning information | Version-specific | Version-aware |
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
cargo test
# Run serialization benchmarks
cargo bench
# Test metadata validation
cargo test validation
# Test schema migration
cargo test migration
```
## 🚀 Performance
FileMeta is optimized for high-performance metadata operations:
- **Serialization**: Zero-copy FlatBuffers serialization
- **Storage**: Compact binary format reduces I/O
- **Caching**: Intelligent metadata caching
- **Batch Operations**: Efficient bulk metadata processing
## 📋 Requirements
- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Memory**: Minimal memory footprint
- **Storage**: Compatible with RustFS storage backend
## 🌍 Related Projects
This module is part of the RustFS ecosystem:
- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS ECStore](../ecstore) - Erasure coding storage engine
- [RustFS Utils](../utils) - Utility functions
- [RustFS Proto](../protos) - Protocol definitions
- High-performance metadata storage and retrieval
- Advanced indexing with full-text search capabilities
- File attribute management and custom metadata
- Version tracking and history management
- Distributed metadata replication
- Real-time metadata synchronization
## 📚 Documentation
For comprehensive documentation, visit:
- [RustFS Documentation](https://docs.rustfs.com)
- [FileMeta API Reference](https://docs.rustfs.com/filemeta/)
## 🔗 Links
- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.
---
<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>
<p align="center">
Made with ❤️ by the RustFS Team
</p>
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -46,6 +46,20 @@ pub struct ObjectPartInfo {
pub index: Option<Bytes>,
// Checksums holds checksums of the part
pub checksums: Option<HashMap<String, String>>,
pub error: Option<String>,
}
impl ObjectPartInfo {
pub fn marshal_msg(&self) -> Result<Vec<u8>> {
let mut buf = Vec::new();
self.serialize(&mut Serializer::new(&mut buf))?;
Ok(buf)
}
pub fn unmarshal(buf: &[u8]) -> Result<Self> {
let t: ObjectPartInfo = rmp_serde::from_slice(buf)?;
Ok(t)
}
}
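A hypothetical round-trip through the helpers above (assumes the usual `Serialize`/`Deserialize`/`Default`/`PartialEq` derives on `ObjectPartInfo`):

```rust
fn roundtrip_example() -> Result<()> {
    let part = ObjectPartInfo {
        number: 1,
        error: None,
        ..Default::default()
    };
    let buf = part.marshal_msg()?;                  // MessagePack-encode
    let decoded = ObjectPartInfo::unmarshal(&buf)?; // decode back
    assert_eq!(part, decoded);
    Ok(())
}
```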
#[derive(Serialize, Deserialize, Debug, PartialEq, Default, Clone)]
@@ -287,6 +301,7 @@ impl FileInfo {
actual_size,
index,
checksums: None,
error: None,
};
for p in self.parts.iter_mut() {

View File

@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Identity and Access Management (IAM) for RustFS, providing user management, roles, and permissions."
keywords = ["iam", "identity", "access-management", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "authentication"]
documentation = "https://docs.rs/rustfs-iam/latest/rustfs_iam/"
[lints]
workspace = true
@@ -40,7 +45,6 @@ base64-simd = { workspace = true }
jsonwebtoken = { workspace = true }
tracing.workspace = true
rustfs-madmin.workspace = true
lazy_static.workspace = true
rustfs-utils = { workspace = true, features = ["path"] }
[dev-dependencies]

View File

@@ -1,9 +1,9 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
# RustFS IAM - Identity and Access Management
# RustFS IAM - Identity & Access Management
<p align="center">
<strong>Enterprise-grade identity and access management for RustFS distributed object storage</strong>
<strong>Identity and access management system for RustFS distributed object storage</strong>
</p>
<p align="center">
@@ -17,592 +17,21 @@
## 📖 Overview
**RustFS IAM** is the identity and access management module for the [RustFS](https://rustfs.com) distributed object storage system. It provides comprehensive authentication, authorization, and access control capabilities, ensuring secure and compliant access to storage resources.
> **Note:** This is a core submodule of RustFS and provides essential security and access control features for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS IAM** provides identity and access management capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
### 🔐 Authentication & Authorization
- **Multi-Factor Authentication**: Support for various authentication methods
- **Access Key Management**: Secure generation and management of access keys
- **JWT Token Support**: Stateless authentication with JWT tokens
- **Session Management**: Secure session handling and token refresh
### 👥 User Management
- **User Accounts**: Complete user lifecycle management
- **Service Accounts**: Automated service authentication
- **Temporary Accounts**: Time-limited access credentials
- **Group Management**: Organize users into groups for easier management
### 🛡️ Access Control
- **Role-Based Access Control (RBAC)**: Flexible role and permission system
- **Policy-Based Access Control**: Fine-grained access policies
- **Resource-Level Permissions**: Granular control over storage resources
- **API-Level Authorization**: Secure API access control
### 🔑 Credential Management
- **Secure Key Generation**: Cryptographically secure key generation
- **Key Rotation**: Automatic and manual key rotation capabilities
- **Credential Validation**: Real-time credential verification
- **Secret Management**: Secure storage and retrieval of secrets
### 🏢 Enterprise Features
- **LDAP Integration**: Enterprise directory service integration
- **SSO Support**: Single Sign-On capabilities
- **Audit Logging**: Comprehensive access audit trails
- **Compliance Features**: Meet regulatory compliance requirements
## 🏗️ Architecture
### IAM System Architecture
```
IAM Architecture:
┌─────────────────────────────────────────────────────────────┐
│ IAM API Layer │
├─────────────────────────────────────────────────────────────┤
│ Authentication │ Authorization │ User Management │
├─────────────────────────────────────────────────────────────┤
│ Policy Engine Integration │
├─────────────────────────────────────────────────────────────┤
│ Credential Store │ Cache Layer │ Token Manager │
├─────────────────────────────────────────────────────────────┤
│ Storage Backend Integration │
└─────────────────────────────────────────────────────────────┘
```
### Security Model
| Component | Description | Security Level |
|-----------|-------------|----------------|
| Access Keys | API authentication credentials | High |
| JWT Tokens | Stateless authentication tokens | High |
| Session Management | User session handling | Medium |
| Policy Enforcement | Access control policies | Critical |
| Audit Logging | Security event tracking | High |
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
rustfs-iam = "0.1.0"
```
## 🔧 Usage
### Basic IAM Setup
```rust
use rustfs_iam::{init_iam_sys, get};
use rustfs_ecstore::ECStore;
use std::sync::Arc;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Initialize with ECStore backend
let ecstore = Arc::new(ECStore::new("/path/to/storage").await?);
// Initialize IAM system
init_iam_sys(ecstore).await?;
// Get IAM system instance
let iam = get()?;
println!("IAM system initialized successfully");
Ok(())
}
```
### User Management
```rust
use rustfs_iam::{get, manager::UserInfo};
async fn user_management_example() -> Result<(), Box<dyn std::error::Error>> {
let iam = get()?;
// Create a new user
let user_info = UserInfo {
access_key: "AKIAIOSFODNN7EXAMPLE".to_string(),
secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY".to_string(),
status: "enabled".to_string(),
..Default::default()
};
iam.create_user("john-doe", user_info).await?;
// List users
let users = iam.list_users().await?;
for user in users {
println!("User: {}, Status: {}", user.name, user.status);
}
// Update user status
iam.set_user_status("john-doe", "disabled").await?;
// Delete user
iam.delete_user("john-doe").await?;
Ok(())
}
```
### Group Management
```rust
use rustfs_iam::{get, manager::GroupInfo};
async fn group_management_example() -> Result<(), Box<dyn std::error::Error>> {
let iam = get()?;
// Create a group
let group_info = GroupInfo {
name: "developers".to_string(),
members: vec!["john-doe".to_string(), "jane-smith".to_string()],
policies: vec!["read-only-policy".to_string()],
..Default::default()
};
iam.create_group(group_info).await?;
// Add user to group
iam.add_user_to_group("alice", "developers").await?;
// Remove user from group
iam.remove_user_from_group("alice", "developers").await?;
// List groups
let groups = iam.list_groups().await?;
for group in groups {
println!("Group: {}, Members: {}", group.name, group.members.len());
}
Ok(())
}
```
### Policy Management
```rust
use rustfs_iam::{get, manager::PolicyDocument};
async fn policy_management_example() -> Result<(), Box<dyn std::error::Error>> {
let iam = get()?;
// Create a policy
let policy_doc = PolicyDocument {
version: "2012-10-17".to_string(),
statement: vec![
Statement {
effect: "Allow".to_string(),
action: vec!["s3:GetObject".to_string()],
resource: vec!["arn:aws:s3:::my-bucket/*".to_string()],
..Default::default()
}
],
..Default::default()
};
iam.create_policy("read-only-policy", policy_doc).await?;
// Attach policy to user
iam.attach_user_policy("john-doe", "read-only-policy").await?;
// Detach policy from user
iam.detach_user_policy("john-doe", "read-only-policy").await?;
// List policies
let policies = iam.list_policies().await?;
for policy in policies {
println!("Policy: {}", policy.name);
}
Ok(())
}
```
### Service Account Management
```rust
use rustfs_iam::{get, manager::ServiceAccountInfo};
async fn service_account_example() -> Result<(), Box<dyn std::error::Error>> {
let iam = get()?;
// Create service account
let service_account = ServiceAccountInfo {
name: "backup-service".to_string(),
description: "Automated backup service".to_string(),
policies: vec!["backup-policy".to_string()],
..Default::default()
};
iam.create_service_account(service_account).await?;
// Generate credentials for service account
let credentials = iam.generate_service_account_credentials("backup-service").await?;
println!("Service Account Credentials: {:?}", credentials);
// Rotate service account credentials
iam.rotate_service_account_credentials("backup-service").await?;
Ok(())
}
```
### Authentication and Authorization
```rust
use rustfs_iam::{get, auth::Credentials};
async fn auth_example() -> Result<(), Box<dyn std::error::Error>> {
let iam = get()?;
// Authenticate user
let credentials = Credentials {
access_key: "AKIAIOSFODNN7EXAMPLE".to_string(),
secret_key: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY".to_string(),
session_token: None,
};
let auth_result = iam.authenticate(&credentials).await?;
println!("Authentication successful: {}", auth_result.user_name);
// Check authorization
let authorized = iam.is_authorized(
&auth_result.user_name,
"s3:GetObject",
"arn:aws:s3:::my-bucket/file.txt"
).await?;
if authorized {
println!("User is authorized to access the resource");
} else {
println!("User is not authorized to access the resource");
}
Ok(())
}
```
### Temporary Credentials
```rust
use rustfs_iam::{get, manager::TemporaryCredentials};
use std::time::Duration;
async fn temp_credentials_example() -> Result<(), Box<dyn std::error::Error>> {
let iam = get()?;
// Create temporary credentials
let temp_creds = iam.create_temporary_credentials(
"john-doe",
Duration::from_secs(3600), // 1 hour
Some("read-only-policy".to_string())
).await?;
println!("Temporary Access Key: {}", temp_creds.access_key);
println!("Expires at: {}", temp_creds.expiration);
// Validate temporary credentials
let is_valid = iam.validate_temporary_credentials(&temp_creds.access_key).await?;
println!("Temporary credentials valid: {}", is_valid);
Ok(())
}
```
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
cargo test
# Run tests with specific features
cargo test --features "ldap,sso"
# Run integration tests
cargo test --test integration
# Run authentication tests
cargo test auth
# Run authorization tests
cargo test authz
```
## 🔒 Security Best Practices
### Key Management
- Rotate access keys regularly
- Use strong, randomly generated keys
- Store keys securely using environment variables or secret management systems
- Implement key rotation policies
### Access Control
- Follow the principle of least privilege
- Use groups for easier permission management
- Regularly audit user permissions
- Implement resource-based policies
### Monitoring and Auditing
- Enable comprehensive audit logging
- Monitor failed authentication attempts
- Set up alerts for suspicious activities
- Regular security reviews
## 📊 Performance Considerations
### Caching Strategy
- **User Cache**: Cache user information for faster lookups
- **Policy Cache**: Cache policy documents to reduce latency
- **Token Cache**: Cache JWT tokens for stateless authentication
- **Permission Cache**: Cache authorization decisions
### Scalability
- **Distributed Cache**: Use distributed caching for multi-node deployments
- **Database Optimization**: Optimize database queries for user/group lookups
- **Connection Pooling**: Use connection pooling for database connections
- **Async Operations**: Leverage async programming for better throughput
## 🔧 Configuration
### Basic Configuration
```toml
[iam]
# Authentication settings
jwt_secret = "your-jwt-secret-key"
jwt_expiration = "24h"
session_timeout = "30m"
# Password policy
min_password_length = 8
require_special_chars = true
require_numbers = true
require_uppercase = true
# Account lockout
max_login_attempts = 5
lockout_duration = "15m"
# Audit settings
audit_enabled = true
audit_log_path = "/var/log/rustfs/iam-audit.log"
```
### Advanced Configuration
```rust
use rustfs_iam::config::IamConfig;
let config = IamConfig {
// Authentication settings
jwt_secret: "your-secure-jwt-secret".to_string(),
jwt_expiration_hours: 24,
session_timeout_minutes: 30,
// Security settings
password_policy: PasswordPolicy {
min_length: 8,
require_special_chars: true,
require_numbers: true,
require_uppercase: true,
max_age_days: 90,
},
// Rate limiting
rate_limit: RateLimit {
max_requests_per_minute: 100,
burst_size: 10,
},
// Audit settings
audit_enabled: true,
audit_log_level: "info".to_string(),
..Default::default()
};
```
## 🤝 Integration with RustFS
IAM integrates seamlessly with other RustFS components:
- **ECStore**: Provides user and policy storage backend
- **Policy Engine**: Implements fine-grained access control
- **Crypto Module**: Handles secure key generation and JWT operations
- **API Server**: Provides authentication and authorization for S3 API
- **Admin Interface**: Manages users, groups, and policies
## 📋 Requirements
- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Database**: Compatible with RustFS storage backend
- **Memory**: Minimum 2GB RAM for caching
- **Network**: Secure connections for authentication
## 🐛 Troubleshooting
### Common Issues
1. **Authentication Failures**:
- Check access key and secret key validity
- Verify user account status (enabled/disabled)
- Check for account lockout due to failed attempts
2. **Authorization Errors**:
- Verify user has required permissions
- Check policy attachments (user/group policies)
- Validate resource ARN format
3. **Performance Issues**:
- Monitor cache hit rates
- Check database connection pool utilization
- Verify JWT token size and complexity
### Debug Commands
```bash
# Check IAM system status
rustfs-cli iam status
# List all users
rustfs-cli iam list-users
# Validate user credentials
rustfs-cli iam validate-credentials --access-key <key>
# Test policy evaluation
rustfs-cli iam test-policy --user <user> --action <action> --resource <resource>
```
## 🌍 Related Projects
This module is part of the RustFS ecosystem:
- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS ECStore](../ecstore) - Erasure coding storage engine
- [RustFS Policy](../policy) - Policy engine for access control
- [RustFS Crypto](../crypto) - Cryptographic operations
- [RustFS MadAdmin](../madmin) - Administrative interface
- User and group management with RBAC
- Service account and API key authentication
- Policy engine with fine-grained permissions
- LDAP/Active Directory integration
- Multi-factor authentication support
- Session management and token validation
## 📚 Documentation
For comprehensive documentation, visit:
- [RustFS Documentation](https://docs.rustfs.com)
- [IAM API Reference](https://docs.rustfs.com/iam/)
- [Security Guide](https://docs.rustfs.com/security/)
- [Authentication Guide](https://docs.rustfs.com/auth/)
## 🔗 Links
- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details on:
- Security-first development practices
- IAM system architecture guidelines
- Authentication and authorization patterns
- Testing procedures for security features
- Documentation standards for security APIs
### Development Setup
```bash
# Clone the repository
git clone https://github.com/rustfs/rustfs.git
cd rustfs
# Navigate to IAM module
cd crates/iam
# Install dependencies
cargo build
# Run tests
cargo test
# Run security tests
cargo test security
# Format code
cargo fmt
# Run linter
cargo clippy
```
## 💬 Getting Help
- **Documentation**: [docs.rustfs.com](https://docs.rustfs.com)
- **Issues**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Discussions**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)
- **Security**: Report security issues to <security@rustfs.com>
## 📞 Contact
- **Bugs**: [GitHub Issues](https://github.com/rustfs/rustfs/issues)
- **Business**: <hello@rustfs.com>
- **Jobs**: <jobs@rustfs.com>
- **General Discussion**: [GitHub Discussions](https://github.com/rustfs/rustfs/discussions)
## 👥 Contributors
This module is maintained by the RustFS security team and community contributors. Special thanks to all who have contributed to making RustFS secure and compliant.
<a href="https://github.com/rustfs/rustfs/graphs/contributors">
<img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
</a>
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.
```
Copyright 2024 RustFS Team
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
```
---
<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>
<p align="center">
Made with 🔐 by the RustFS Security Team
</p>
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -13,6 +13,7 @@
// limitations under the License.
use crate::error::{Error, Result, is_err_config_not_found};
use crate::sys::get_claims_from_token_with_secret;
use crate::{
cache::{Cache, CacheEntity},
error::{Error as IamError, is_err_no_such_group, is_err_no_such_policy, is_err_no_such_user},
@@ -26,7 +27,7 @@ use rustfs_ecstore::global::get_global_action_cred;
use rustfs_madmin::{AccountStatus, AddOrUpdateUserReq, GroupDesc};
use rustfs_policy::{
arn::ARN,
auth::{self, Credentials, UserIdentity, get_claims_from_token_with_secret, is_secret_key_valid, jwt_sign},
auth::{self, Credentials, UserIdentity, is_secret_key_valid, jwt_sign},
format::Format,
policy::{
EMBEDDED_POLICY_TYPE, INHERITED_POLICY_TYPE, Policy, PolicyDoc, default::DEFAULT_POLICIES, iam_policy_claim_name_sa,

View File

@@ -20,7 +20,6 @@ use crate::{
manager::{extract_jwt_claims, get_default_policyes},
};
use futures::future::join_all;
use lazy_static::lazy_static;
use rustfs_ecstore::{
config::{
RUSTFS_CONFIG_PREFIX,
@@ -34,25 +33,28 @@ use rustfs_ecstore::{
use rustfs_policy::{auth::UserIdentity, policy::PolicyDoc};
use rustfs_utils::path::{SLASH_SEPARATOR, path_join_buf};
use serde::{Serialize, de::DeserializeOwned};
use std::sync::LazyLock;
use std::{collections::HashMap, sync::Arc};
use tokio::sync::broadcast::{self, Receiver as B_Receiver};
use tokio::sync::mpsc::{self, Sender};
use tracing::{info, warn};
use tracing::{debug, info, warn};
lazy_static! {
pub static ref IAM_CONFIG_PREFIX: String = format!("{}/iam", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_USERS_PREFIX: String = format!("{}/iam/users/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_SERVICE_ACCOUNTS_PREFIX: String = format!("{}/iam/service-accounts/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_GROUPS_PREFIX: String = format!("{}/iam/groups/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICIES_PREFIX: String = format!("{}/iam/policies/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_STS_PREFIX: String = format!("{}/iam/sts/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICY_DB_PREFIX: String = format!("{}/iam/policydb/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICY_DB_USERS_PREFIX: String = format!("{}/iam/policydb/users/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICY_DB_STS_USERS_PREFIX: String = format!("{}/iam/policydb/sts-users/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICY_DB_SERVICE_ACCOUNTS_PREFIX: String =
format!("{}/iam/policydb/service-accounts/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICY_DB_GROUPS_PREFIX: String = format!("{}/iam/policydb/groups/", RUSTFS_CONFIG_PREFIX);
}
pub static IAM_CONFIG_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam"));
pub static IAM_CONFIG_USERS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/users/"));
pub static IAM_CONFIG_SERVICE_ACCOUNTS_PREFIX: LazyLock<String> =
LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/service-accounts/"));
pub static IAM_CONFIG_GROUPS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/groups/"));
pub static IAM_CONFIG_POLICIES_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policies/"));
pub static IAM_CONFIG_STS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/sts/"));
pub static IAM_CONFIG_POLICY_DB_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/"));
pub static IAM_CONFIG_POLICY_DB_USERS_PREFIX: LazyLock<String> =
LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/users/"));
pub static IAM_CONFIG_POLICY_DB_STS_USERS_PREFIX: LazyLock<String> =
LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/sts-users/"));
pub static IAM_CONFIG_POLICY_DB_SERVICE_ACCOUNTS_PREFIX: LazyLock<String> =
LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/service-accounts/"));
pub static IAM_CONFIG_POLICY_DB_GROUPS_PREFIX: LazyLock<String> =
LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/groups/"));
const IAM_IDENTITY_FILE: &str = "identity.json";
const IAM_POLICY_FILE: &str = "policy.json";
@@ -370,7 +372,15 @@ impl Store for ObjectStore {
async fn load_iam_config<Item: DeserializeOwned>(&self, path: impl AsRef<str> + Send) -> Result<Item> {
let mut data = read_config(self.object_api.clone(), path.as_ref()).await?;
data = Self::decrypt_data(&data)?;
data = match Self::decrypt_data(&data) {
Ok(v) => v,
Err(err) => {
debug!("decrypt_data failed: {}", err);
// delete the config file when decryption fails
let _ = self.delete_iam_config(path.as_ref()).await;
return Err(Error::ConfigNotFound);
}
};
Ok(serde_json::from_slice(&data)?)
}

View File

@@ -23,6 +23,7 @@ use crate::store::GroupInfo;
use crate::store::MappedPolicy;
use crate::store::Store;
use crate::store::UserType;
use crate::utils::extract_claims;
use rustfs_ecstore::global::get_global_action_cred;
use rustfs_madmin::AddOrUpdateUserReq;
use rustfs_madmin::GroupDesc;
@@ -542,7 +543,7 @@ impl<T: Store> IamSys<T> {
}
};
if policies.is_empty() {
if !is_owner && policies.is_empty() {
return false;
}
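The one-line change above lets the account owner through even when no policies are attached; for everyone else, an empty policy set still denies. A reduction of the guard in isolation (hypothetical signature, not the real `IamSys` method):

```rust
// Owners bypass the empty-policy denial; non-owners with no
// attached policies are rejected before any policy evaluation.
fn passes_policy_gate(is_owner: bool, policies: &[String]) -> bool {
    if !is_owner && policies.is_empty() {
        return false;
    }
    true // remaining policy evaluation would follow here
}

fn main() {
    assert!(passes_policy_gate(true, &[]));              // owner, no policies: allowed
    assert!(!passes_policy_gate(false, &[]));            // non-owner, no policies: denied
    assert!(passes_policy_gate(false, &["rw".into()]));  // non-owner with a policy: evaluated further
}
```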
@@ -732,3 +733,18 @@ pub struct UpdateServiceAccountOpts {
pub expiration: Option<OffsetDateTime>,
pub status: Option<String>,
}
pub fn get_claims_from_token_with_secret(token: &str, secret: &str) -> Result<HashMap<String, Value>> {
let mut ms =
extract_claims::<HashMap<String, Value>>(token, secret).map_err(|e| Error::other(format!("extract claims err {e}")))?;
if let Some(session_policy) = ms.claims.get(SESSION_POLICY_NAME) {
let policy_str = session_policy.as_str().unwrap_or_default();
let policy = base64_decode(policy_str.as_bytes()).map_err(|e| Error::other(format!("base64 decode err {e}")))?;
ms.claims.insert(
SESSION_POLICY_NAME_EXTRACTED.to_string(),
Value::String(String::from_utf8(policy).map_err(|e| Error::other(format!("utf8 decode err {e}")))?),
);
}
Ok(ms.claims)
}
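`get_claims_from_token_with_secret` above verifies the token via `extract_claims`, then, if the claims carry an embedded session policy, base64-decodes it into a sibling `*-extracted` claim. A self-contained sketch of just that expansion step, assuming the `base64` crate and hypothetical key names in place of the crate's `SESSION_POLICY_NAME` constants:

```rust
use std::collections::HashMap;

use base64::{engine::general_purpose::STANDARD, Engine as _};
use serde_json::Value;

// Hypothetical claim keys; the real constants live in the IAM crate.
const SESSION_POLICY_NAME: &str = "sessionPolicy";
const SESSION_POLICY_NAME_EXTRACTED: &str = "sessionPolicy-extracted";

// Expansion only: assumes the JWT was already verified and parsed
// into a claims map (what extract_claims does in the source).
fn expand_session_policy(claims: &mut HashMap<String, Value>) -> Result<(), String> {
    if let Some(policy) = claims.get(SESSION_POLICY_NAME) {
        let encoded = policy.as_str().unwrap_or_default();
        let bytes = STANDARD.decode(encoded).map_err(|e| format!("base64 decode err {e}"))?;
        let text = String::from_utf8(bytes).map_err(|e| format!("utf8 decode err {e}"))?;
        claims.insert(SESSION_POLICY_NAME_EXTRACTED.to_string(), Value::String(text));
    }
    Ok(())
}

fn main() -> Result<(), String> {
    let mut claims = HashMap::from([(
        SESSION_POLICY_NAME.to_string(),
        Value::String(STANDARD.encode(r#"{"Version":"2012-10-17"}"#)),
    )]);
    expand_session_policy(&mut claims)?;
    println!("{:?}", claims.get(SESSION_POLICY_NAME_EXTRACTED));
    Ok(())
}
```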

View File

@@ -19,13 +19,17 @@ edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
description = "Distributed locking mechanism for RustFS, providing synchronization and coordination across distributed systems."
keywords = ["locking", "asynchronous", "distributed", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "asynchronous"]
documentation = "https://docs.rs/rustfs-lock/latest/rustfs_lock/"
[lints]
workspace = true
[dependencies]
async-trait.workspace = true
lazy_static.workspace = true
rustfs-protos.workspace = true
rand.workspace = true
serde.workspace = true

View File

@@ -3,7 +3,7 @@
# RustFS Lock - Distributed Locking
<p align="center">
<strong>Distributed locking and synchronization for RustFS object storage</strong>
<strong>High-performance distributed locking system for RustFS object storage</strong>
</p>
<p align="center">
@@ -17,376 +17,21 @@
## 📖 Overview
**RustFS Lock** provides distributed locking and synchronization primitives for the [RustFS](https://rustfs.com) distributed object storage system. It ensures data consistency and prevents race conditions in multi-node environments through various locking mechanisms and coordination protocols.
> **Note:** This is a core submodule of RustFS that provides essential distributed locking capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS Lock** provides distributed locking capabilities for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
### 🔒 Distributed Locking
- **Exclusive Locks**: Mutual exclusion across cluster nodes
- **Shared Locks**: Reader-writer lock semantics
- **Timeout Support**: Configurable lock timeouts and expiration
- **Deadlock Prevention**: Automatic deadlock detection and resolution
### 🔄 Synchronization Primitives
- **Distributed Mutex**: Cross-node mutual exclusion
- **Distributed Semaphore**: Resource counting across nodes
- **Distributed Barrier**: Coordination point for multiple nodes
- **Distributed Condition Variables**: Wait/notify across nodes
### 🛡️ Consistency Guarantees
- **Linearizable Operations**: Strong consistency guarantees
- **Fault Tolerance**: Automatic recovery from node failures
- **Network Partition Handling**: CAP theorem aware implementations
- **Consensus Integration**: Raft-based consensus for critical locks
### 🚀 Performance Features
- **Lock Coalescing**: Efficient batching of lock operations
- **Adaptive Timeouts**: Dynamic timeout adjustment
- **Lock Hierarchy**: Hierarchical locking for better scalability
- **Optimistic Locking**: Reduced contention through optimistic approaches
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
rustfs-lock = "0.1.0"
```
## 🔧 Usage
### Basic Distributed Lock
```rust
use rustfs_lock::{DistributedLock, LockManager, LockOptions};
use std::time::Duration;
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create lock manager
let lock_manager = LockManager::new("cluster-endpoint").await?;
// Acquire distributed lock
let lock_options = LockOptions {
timeout: Duration::from_secs(30),
auto_renew: true,
..Default::default()
};
let lock = lock_manager.acquire_lock("resource-key", lock_options).await?;
// Critical section
{
println!("Lock acquired, performing critical operations...");
// Your critical code here
tokio::time::sleep(Duration::from_secs(2)).await;
}
// Release lock
lock.release().await?;
println!("Lock released");
Ok(())
}
```
### Distributed Mutex
```rust
use rustfs_lock::{DistributedMutex, LockManager};
use std::sync::Arc;
use std::time::Duration;
async fn distributed_mutex_example() -> Result<(), Box<dyn std::error::Error>> {
let lock_manager = Arc::new(LockManager::new("cluster-endpoint").await?);
// Create distributed mutex
let mutex = DistributedMutex::new(lock_manager.clone(), "shared-resource");
// Spawn multiple tasks
let mut handles = vec![];
for i in 0..5 {
let mutex = mutex.clone();
let handle = tokio::spawn(async move {
let _guard = mutex.lock().await.unwrap();
println!("Task {} acquired mutex", i);
// Simulate work
tokio::time::sleep(Duration::from_secs(1)).await;
println!("Task {} releasing mutex", i);
// Guard is automatically released when dropped
});
handles.push(handle);
}
// Wait for all tasks to complete
for handle in handles {
handle.await?;
}
Ok(())
}
```
### Distributed Semaphore
```rust
use rustfs_lock::{DistributedSemaphore, LockManager};
use std::sync::Arc;
use std::time::Duration;
async fn distributed_semaphore_example() -> Result<(), Box<dyn std::error::Error>> {
let lock_manager = Arc::new(LockManager::new("cluster-endpoint").await?);
// Create distributed semaphore with 3 permits
let semaphore = DistributedSemaphore::new(
lock_manager.clone(),
"resource-pool",
3
);
// Spawn multiple tasks
let mut handles = vec![];
for i in 0..10 {
let semaphore = semaphore.clone();
let handle = tokio::spawn(async move {
let _permit = semaphore.acquire().await.unwrap();
println!("Task {} acquired permit", i);
// Simulate work
tokio::time::sleep(Duration::from_secs(2)).await;
println!("Task {} releasing permit", i);
// Permit is automatically released when dropped
});
handles.push(handle);
}
// Wait for all tasks to complete
for handle in handles {
handle.await?;
}
Ok(())
}
```
### Distributed Barrier
```rust
use rustfs_lock::{DistributedBarrier, LockManager};
use std::sync::Arc;
use std::time::Duration;
async fn distributed_barrier_example() -> Result<(), Box<dyn std::error::Error>> {
let lock_manager = Arc::new(LockManager::new("cluster-endpoint").await?);
// Create distributed barrier for 3 participants
let barrier = DistributedBarrier::new(
lock_manager.clone(),
"sync-point",
3
);
// Spawn multiple tasks
let mut handles = vec![];
for i in 0..3 {
let barrier = barrier.clone();
let handle = tokio::spawn(async move {
println!("Task {} doing work...", i);
// Simulate different work durations
tokio::time::sleep(Duration::from_secs(i + 1)).await;
println!("Task {} waiting at barrier", i);
barrier.wait().await.unwrap();
println!("Task {} passed barrier", i);
});
handles.push(handle);
}
// Wait for all tasks to complete
for handle in handles {
handle.await?;
}
Ok(())
}
```
### Lock with Automatic Renewal
```rust
use rustfs_lock::{DistributedLock, LockManager, LockOptions};
use std::time::Duration;
async fn auto_renewal_example() -> Result<(), Box<dyn std::error::Error>> {
let lock_manager = LockManager::new("cluster-endpoint").await?;
let lock_options = LockOptions {
timeout: Duration::from_secs(10),
auto_renew: true,
renew_interval: Duration::from_secs(3),
max_renewals: 5,
..Default::default()
};
let lock = lock_manager.acquire_lock("long-running-task", lock_options).await?;
// Long-running operation
for i in 0..20 {
println!("Working on step {}", i);
tokio::time::sleep(Duration::from_secs(2)).await;
// Check if lock is still valid
if !lock.is_valid().await? {
println!("Lock lost, aborting operation");
break;
}
}
lock.release().await?;
Ok(())
}
```
### Hierarchical Locking
```rust
use rustfs_lock::{LockManager, LockHierarchy, LockOptions};
use std::time::Duration;
async fn hierarchical_locking_example() -> Result<(), Box<dyn std::error::Error>> {
let lock_manager = LockManager::new("cluster-endpoint").await?;
// Create lock hierarchy
let hierarchy = LockHierarchy::new(vec![
"global-lock".to_string(),
"bucket-lock".to_string(),
"object-lock".to_string(),
]);
// Acquire locks in hierarchy order
let locks = lock_manager.acquire_hierarchical_locks(
hierarchy,
LockOptions::default()
).await?;
// Critical section with hierarchical locks
{
println!("All hierarchical locks acquired");
// Perform operations that require the full lock hierarchy
tokio::time::sleep(Duration::from_secs(1)).await;
}
// Locks are automatically released in reverse order
locks.release_all().await?;
Ok(())
}
```
## 🏗️ Architecture
### Lock Architecture
```
Lock Architecture:
┌─────────────────────────────────────────────────────────────┐
│ Lock API Layer │
├─────────────────────────────────────────────────────────────┤
│ Mutex │ Semaphore │ Barrier │ Condition │
├─────────────────────────────────────────────────────────────┤
│ Lock Manager │
├─────────────────────────────────────────────────────────────┤
│ Consensus │ Heartbeat │ Timeout │ Recovery │
├─────────────────────────────────────────────────────────────┤
│ Distributed Coordination │
└─────────────────────────────────────────────────────────────┘
```
### Lock Types
| Type | Use Case | Guarantees |
|------|----------|------------|
| Exclusive | Critical sections | Mutual exclusion |
| Shared | Reader-writer | Multiple readers |
| Semaphore | Resource pooling | Counting semaphore |
| Barrier | Synchronization | Coordination point |
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
cargo test
# Test distributed locking
cargo test distributed_lock
# Test synchronization primitives
cargo test sync_primitives
# Test fault tolerance
cargo test fault_tolerance
```
## 📋 Requirements
- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Network**: Cluster connectivity required
- **Consensus**: Raft consensus for critical operations
## 🌍 Related Projects
This module is part of the RustFS ecosystem:
- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS Common](../common) - Common types and utilities
- [RustFS Protos](../protos) - Protocol buffer definitions
- Distributed lock management across cluster nodes
- Read-write lock support with concurrent readers
- Lock timeout and automatic lease renewal
- Deadlock detection and prevention
- High-availability with leader election
- Performance-optimized locking algorithms
## 📚 Documentation
For comprehensive documentation, visit:
- [RustFS Documentation](https://docs.rustfs.com)
- [Lock API Reference](https://docs.rustfs.com/lock/)
- [Distributed Systems Guide](https://docs.rustfs.com/distributed/)
## 🔗 Links
- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.
---
<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>
<p align="center">
Made with 🔒 by the RustFS Team
</p>
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

View File

@@ -14,12 +14,12 @@
// limitations under the License.
use async_trait::async_trait;
use lazy_static::lazy_static;
use local_locker::LocalLocker;
use lock_args::LockArgs;
use remote_client::RemoteClient;
use std::io::Result;
use std::sync::Arc;
use std::sync::LazyLock;
use tokio::sync::RwLock;
pub mod drwmutex;
@@ -29,9 +29,7 @@ pub mod lrwmutex;
pub mod namespace_lock;
pub mod remote_client;
lazy_static! {
pub static ref GLOBAL_LOCAL_SERVER: Arc<RwLock<LocalLocker>> = Arc::new(RwLock::new(LocalLocker::new()));
}
pub static GLOBAL_LOCAL_SERVER: LazyLock<Arc<RwLock<LocalLocker>>> = LazyLock::new(|| Arc::new(RwLock::new(LocalLocker::new())));
type LockClient = dyn Locker;
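Same `lazy_static!` → `LazyLock` swap as in the IAM crate; call sites stay unchanged because `LazyLock` derefs to the inner value. A hedged usage sketch of such a global (the `LocalLocker` here is a placeholder type, and tokio's `macros`/`rt` features are assumed):

```rust
use std::sync::{Arc, LazyLock};

use tokio::sync::RwLock;

// Stand-in for the real LocalLocker from local_locker.
#[derive(Default)]
struct LocalLocker { held: Vec<String> }

static GLOBAL_LOCAL_SERVER: LazyLock<Arc<RwLock<LocalLocker>>> =
    LazyLock::new(|| Arc::new(RwLock::new(LocalLocker::default())));

#[tokio::main]
async fn main() {
    // First access initializes the global; .clone() resolves through
    // Deref to Arc::clone, so clones share the same locker.
    let server = GLOBAL_LOCAL_SERVER.clone();
    server.write().await.held.push("bucket/object".to_string());
    println!("held locks: {}", GLOBAL_LOCAL_SERVER.read().await.held.len());
}
```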

View File

@@ -19,6 +19,11 @@ license.workspace = true
repository.workspace = true
rust-version.workspace = true
version.workspace = true
homepage.workspace = true
description = "Management and administration tools for RustFS, providing a web interface and API for system management."
keywords = ["management", "administration", "web-interface", "rustfs", "Minio"]
categories = ["web-programming", "development-tools", "config"]
documentation = "https://docs.rs/rustfs-madmin/latest/rustfs_madmin/"
[lints]
workspace = true

View File

@@ -3,7 +3,7 @@
# RustFS MadAdmin - Administrative Interface
<p align="center">
<strong>Administrative interface and management APIs for RustFS distributed object storage</strong>
<strong>Advanced administrative interface and management tools for RustFS distributed object storage</strong>
</p>
<p align="center">
@@ -17,335 +17,21 @@
## 📖 Overview
**RustFS MadAdmin** provides comprehensive administrative interfaces and management APIs for the [RustFS](https://rustfs.com) distributed object storage system. It enables cluster management, monitoring, configuration, and administrative operations through both programmatic APIs and interactive interfaces.
> **Note:** This is a core submodule of RustFS that provides essential administrative capabilities for the distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
**RustFS MadAdmin** provides advanced administrative interface and management tools for the [RustFS](https://rustfs.com) distributed object storage system. For the complete RustFS experience, please visit the [main RustFS repository](https://github.com/rustfs/rustfs).
## ✨ Features
### 🎛️ Cluster Management
- **Node Management**: Add, remove, and monitor cluster nodes
- **Service Discovery**: Automatic service discovery and registration
- **Load Balancing**: Distribute load across cluster nodes
- **Health Monitoring**: Real-time cluster health monitoring
### 📊 System Monitoring
- **Performance Metrics**: CPU, memory, disk, and network metrics
- **Storage Analytics**: Capacity planning and usage analytics
- **Alert Management**: Configurable alerts and notifications
- **Dashboard Interface**: Web-based monitoring dashboard
### ⚙️ Configuration Management
- **Dynamic Configuration**: Runtime configuration updates
- **Policy Management**: Access control and bucket policies
- **User Management**: User and group administration
- **Backup Configuration**: Backup and restore settings
### 🔧 Administrative Operations
- **Data Migration**: Cross-cluster data migration
- **Healing Operations**: Data integrity repair and healing
- **Rebalancing**: Storage rebalancing operations
- **Maintenance Mode**: Graceful maintenance operations
## 📦 Installation
Add this to your `Cargo.toml`:
```toml
[dependencies]
rustfs-madmin = "0.1.0"
```
## 🔧 Usage
### Basic Admin Client
```rust
use rustfs_madmin::{AdminClient, AdminConfig, ServerInfo};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
// Create admin client
let config = AdminConfig {
endpoint: "https://admin.rustfs.local:9001".to_string(),
access_key: "admin".to_string(),
secret_key: "password".to_string(),
region: "us-east-1".to_string(),
};
let client = AdminClient::new(config).await?;
// Get server information
let server_info = client.server_info().await?;
println!("Server Version: {}", server_info.version);
println!("Uptime: {}", server_info.uptime);
Ok(())
}
```
### Cluster Management
```rust
use rustfs_madmin::{AdminClient, AddServerRequest, RemoveServerRequest};
async fn cluster_management(client: &AdminClient) -> Result<(), Box<dyn std::error::Error>> {
// List cluster nodes
let nodes = client.list_servers().await?;
for node in nodes {
println!("Node: {} - Status: {}", node.endpoint, node.state);
}
// Add new server to cluster
let add_request = AddServerRequest {
endpoint: "https://new-node.rustfs.local:9000".to_string(),
access_key: "node-key".to_string(),
secret_key: "node-secret".to_string(),
};
client.add_server(add_request).await?;
println!("New server added successfully");
// Remove server from cluster
let remove_request = RemoveServerRequest {
endpoint: "https://old-node.rustfs.local:9000".to_string(),
};
client.remove_server(remove_request).await?;
println!("Server removed successfully");
Ok(())
}
```
### System Monitoring
```rust
use rustfs_madmin::{AdminClient, MetricsRequest, AlertConfig};
async fn monitoring_operations(client: &AdminClient) -> Result<(), Box<dyn std::error::Error>> {
// Get system metrics
let metrics = client.get_metrics(MetricsRequest::default()).await?;
println!("CPU Usage: {:.2}%", metrics.cpu_usage);
println!("Memory Usage: {:.2}%", metrics.memory_usage);
println!("Disk Usage: {:.2}%", metrics.disk_usage);
// Get storage information
let storage_info = client.storage_info().await?;
println!("Total Capacity: {} GB", storage_info.total_capacity / 1024 / 1024 / 1024);
println!("Used Capacity: {} GB", storage_info.used_capacity / 1024 / 1024 / 1024);
// Configure alerts
let alert_config = AlertConfig {
name: "high-cpu-usage".to_string(),
condition: "cpu_usage > 80".to_string(),
notification_endpoint: "https://webhook.example.com/alerts".to_string(),
enabled: true,
};
client.set_alert_config(alert_config).await?;
Ok(())
}
```
### User and Policy Management
```rust
use rustfs_madmin::{AdminClient, UserInfo, PolicyDocument};
async fn user_management(client: &AdminClient) -> Result<(), Box<dyn std::error::Error>> {
// Create user
let user_info = UserInfo {
access_key: "user123".to_string(),
secret_key: "user-secret".to_string(),
status: "enabled".to_string(),
policy: Some("readwrite-policy".to_string()),
};
client.add_user("new-user", user_info).await?;
// List users
let users = client.list_users().await?;
for (username, info) in users {
println!("User: {} - Status: {}", username, info.status);
}
// Set user policy
let policy_doc = PolicyDocument {
version: "2012-10-17".to_string(),
statement: vec![/* policy statements */],
};
client.set_user_policy("new-user", policy_doc).await?;
Ok(())
}
```
### Data Operations
```rust
use rustfs_madmin::{AdminClient, HealRequest, RebalanceRequest};
async fn data_operations(client: &AdminClient) -> Result<(), Box<dyn std::error::Error>> {
// Start healing operation
let heal_request = HealRequest {
bucket: Some("important-bucket".to_string()),
prefix: Some("documents/".to_string()),
recursive: true,
dry_run: false,
};
let heal_result = client.heal(heal_request).await?;
println!("Healing started: {}", heal_result.heal_sequence);
// Check healing status
let heal_status = client.heal_status(&heal_result.heal_sequence).await?;
println!("Healing progress: {:.2}%", heal_status.progress);
// Start rebalancing
let rebalance_request = RebalanceRequest {
servers: vec![], // Empty means all servers
};
client.start_rebalance(rebalance_request).await?;
println!("Rebalancing started");
Ok(())
}
```
### Configuration Management
```rust
use rustfs_madmin::{AdminClient, ConfigUpdate, NotificationTarget};
async fn configuration_management(client: &AdminClient) -> Result<(), Box<dyn std::error::Error>> {
// Get current configuration
let config = client.get_config().await?;
println!("Current config version: {}", config.version);
// Update configuration
let config_update = ConfigUpdate {
region: Some("us-west-2".to_string()),
browser: Some(true),
compression: Some(true),
// ... other config fields
};
client.set_config(config_update).await?;
// Configure notification targets
let notification_target = NotificationTarget {
arn: "arn:aws:sns:us-east-1:123456789012:my-topic".to_string(),
target_type: "webhook".to_string(),
endpoint: "https://webhook.example.com/notifications".to_string(),
};
client.set_notification_target("bucket1", notification_target).await?;
Ok(())
}
```
## 🏗️ Architecture
### MadAdmin Architecture
```
MadAdmin Architecture:
┌─────────────────────────────────────────────────────────────┐
│ Admin API Layer │
├─────────────────────────────────────────────────────────────┤
│ Cluster Mgmt │ Monitoring │ User Management │
├─────────────────────────────────────────────────────────────┤
│ Data Ops │ Config Mgmt │ Notification │
├─────────────────────────────────────────────────────────────┤
│ HTTP/gRPC Client Layer │
├─────────────────────────────────────────────────────────────┤
│ Storage System Integration │
└─────────────────────────────────────────────────────────────┘
```
### Administrative Operations
| Category | Operations | Description |
|----------|------------|-------------|
| Cluster | Add/Remove nodes, Health checks | Cluster management |
| Monitoring | Metrics, Alerts, Dashboard | System monitoring |
| Data | Healing, Rebalancing, Migration | Data operations |
| Config | Settings, Policies, Notifications | Configuration |
| Users | Authentication, Authorization | User management |
## 🧪 Testing
Run the test suite:
```bash
# Run all tests
cargo test
# Test admin operations
cargo test admin_ops
# Test cluster management
cargo test cluster
# Test monitoring
cargo test monitoring
```
## 📋 Requirements
- **Rust**: 1.70.0 or later
- **Platforms**: Linux, macOS, Windows
- **Network**: Administrative access to RustFS cluster
- **Permissions**: Administrative credentials required
## 🌍 Related Projects
This module is part of the RustFS ecosystem:
- [RustFS Main](https://github.com/rustfs/rustfs) - Core distributed storage system
- [RustFS IAM](../iam) - Identity and access management
- [RustFS Policy](../policy) - Policy engine
- [RustFS Common](../common) - Common types and utilities
- Comprehensive cluster management and monitoring
- Real-time performance metrics and analytics
- Automated backup and disaster recovery tools
- User and permission management interface
- System health monitoring and alerting
- Configuration management and deployment tools
## 📚 Documentation
For comprehensive documentation, visit:
- [RustFS Documentation](https://docs.rustfs.com)
- [MadAdmin API Reference](https://docs.rustfs.com/madmin/)
- [Administrative Guide](https://docs.rustfs.com/admin/)
## 🔗 Links
- [Documentation](https://docs.rustfs.com) - Complete RustFS manual
- [Changelog](https://github.com/rustfs/rustfs/releases) - Release notes and updates
- [GitHub Discussions](https://github.com/rustfs/rustfs/discussions) - Community support
## 🤝 Contributing
We welcome contributions! Please see our [Contributing Guide](https://github.com/rustfs/rustfs/blob/main/CONTRIBUTING.md) for details.
For comprehensive documentation, examples, and usage guides, please visit the main [RustFS repository](https://github.com/rustfs/rustfs).
## 📄 License
Licensed under the Apache License, Version 2.0. See [LICENSE](https://github.com/rustfs/rustfs/blob/main/LICENSE) for details.
---
<p align="center">
<strong>RustFS</strong> is a trademark of RustFS, Inc.<br>
All other trademarks are the property of their respective owners.
</p>
<p align="center">
Made with 🎛️ by the RustFS Team
</p>
This project is licensed under the Apache License 2.0 - see the [LICENSE](../../LICENSE) file for details.

Some files were not shown because too many files have changed in this diff.