Compare commits


73 Commits

Author SHA1 Message Date
weisd
e67980ff3c Fix/range content length (#251)
* fix: getobject range length
2025-07-17 23:25:21 +08:00
weisd
96760bba5a fix: getobject range length (#250) 2025-07-17 23:14:19 +08:00
overtrue
2501d7d241 fix: remove branch restriction from Docker workflow_run trigger
The Docker workflow was not triggering for tag-based releases because it had a
'branches: [main]' restriction in the workflow_run configuration. When pushing
tags, the triggering workflow runs on the tag, not on the main branch.

Changes:
- Remove 'branches: [main]' from workflow_run trigger
- Simplify tag detection using github.event.workflow_run context instead of API calls
- Use official workflow_run event properties (head_branch, event) for reliable detection
- Support both 'refs/tags/VERSION' and direct 'VERSION' formats
- Add better logging for debugging workflow trigger issues

This fixes the issue where Docker images were not built for tagged releases.
2025-07-17 08:13:34 +08:00
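
A minimal sketch of the corrected trigger described in the commit above, assuming the triggering workflow is named "Build" (the actual name in docker.yml may differ):

```yaml
on:
  workflow_run:
    workflows: ["Build"]   # assumed name of the triggering workflow
    types: [completed]
    # No `branches: [main]` filter: for a tag push, the triggering run
    # reports the tag (not `main`) in `head_branch`, so a branch filter
    # would silently skip tag-based releases.

jobs:
  build-docker:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - name: Detect tag from the triggering run
        run: |
          # For a tag push, head_branch holds the tag name (e.g. "1.0.0-alpha.21")
          if [ "${{ github.event.workflow_run.event }}" = "push" ]; then
            echo "version=${{ github.event.workflow_run.head_branch }}" >> "$GITHUB_OUTPUT"
          fi
```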
overtrue
55b84262b5 fix: use GitHub API for reliable tag detection in Docker workflow
- Replace git commands with GitHub API calls for tag detection
- Add proper commit checkout for workflow_run events
- Use gh CLI and curl fallback for better reliability
- Add debug output to help troubleshoot tag detection issues

This should fix the issue where Docker builds were not triggered for tagged releases
due to missing tag information in the workflow_run environment.
2025-07-17 08:01:33 +08:00
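
A hedged sketch of the API-based lookup with a curl fallback; the endpoint is the standard `GET /repos/{owner}/{repo}/tags`, and the step wiring is an assumption:

```yaml
- name: Find a tag pointing at the triggering commit
  env:
    GH_TOKEN: ${{ github.token }}
  run: |
    SHA="${{ github.event.workflow_run.head_sha }}"
    # Prefer the gh CLI; fall back to curl against the same REST endpoint
    if ! TAGS=$(gh api "repos/${GITHUB_REPOSITORY}/tags" --jq '.[] | "\(.commit.sha) \(.name)"'); then
      TAGS=$(curl -fsSL -H "Authorization: Bearer $GH_TOKEN" \
        "https://api.github.com/repos/${GITHUB_REPOSITORY}/tags" \
        | jq -r '.[] | "\(.commit.sha) \(.name)"')
    fi
    TAG=$(printf '%s\n' "$TAGS" | awk -v sha="$SHA" '$1 == sha {print $2; exit}')
    echo "Detected tag: ${TAG:-none}"
```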
overtrue
ce4252eb1a fix: correct Docker workflow trigger logic for tag-based releases
BREAKING CHANGE: Fixed Docker workflow that was incorrectly skipping builds for tagged releases
- Fix logic to detect tag pushes using git refs instead of branch names
- Properly identify tag pushes vs branch pushes using git show-ref
- Support both v-prefixed and bare version formats
- Ensure Docker images are built for all tagged releases including prereleases
2025-07-17 07:46:54 +08:00
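
A sketch of the `git show-ref` detection named above, assuming the repository has been checked out with tags available:

```yaml
- name: Distinguish tag pushes from branch pushes
  run: |
    REF="${GITHUB_REF_NAME}"
    # A tag exists under refs/tags/, a branch under refs/heads/
    if git show-ref --verify --quiet "refs/tags/${REF}"; then
      echo "is_tag=true"  >> "$GITHUB_OUTPUT"
    else
      echo "is_tag=false" >> "$GITHUB_OUTPUT"
    fi
```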
overtrue
db708917b4 docs: update .docker/README.md to reflect simplified Makefile commands
- Add new Makefile Commands section with simplified docker-dev* commands
- Update Development Workflow to use new dev-env-* commands
- Update directory structure (remove deleted alpine/ directory)
- Reorganize build instructions to prioritize Makefile over direct scripts
- Add Common Development Tasks section with make help commands
2025-07-17 07:30:13 +08:00
overtrue
8ddb45627d refactor: simplify Docker build commands and fix version matching
- Remove obsolete .docker/alpine/Dockerfile.protoc (superseded by Dockerfile.source)
- Simplify Makefile commands by removing backward compatibility aliases
  * Replace docker-buildx-source* with shorter docker-dev* commands
  * Replace start/stop with explicit dev-env-start/dev-env-stop commands
- Fix Docker workflow version matching logic to correctly distinguish:
  * 1.0.0 vs 1.0.0-alpha.11 (prerelease detection)
  * Support both v1.0.0 and 1.0.0 formats (with/without v prefix)
  * Reorder case patterns to match prereleases before releases

BREAKING CHANGE: Removed legacy command aliases
- Use 'make docker-dev-local' instead of 'make docker-buildx-source-local'
- Use 'make dev-env-start' instead of 'make start'
2025-07-17 07:29:00 +08:00
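
The prerelease-before-release ordering matters because a glob like `[0-9]*.[0-9]*.[0-9]*` also matches `1.0.0-alpha.11`; a hedged sketch of that matching logic (the `VERSION` wiring is an assumption):

```yaml
- name: Classify version
  run: |
    V="${VERSION#v}"   # accept both v1.0.0 and 1.0.0
    case "$V" in
      *-alpha*|*-beta*|*-rc*) KIND=prerelease ;;  # must precede the release pattern
      [0-9]*.[0-9]*.[0-9]*)   KIND=release ;;
      *)                      KIND=unknown ;;
    esac
    echo "kind=$KIND" >> "$GITHUB_OUTPUT"
```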
overtrue
550c225b79 wip 2025-07-17 07:07:02 +08:00
overtrue
0d46b550a8 refactor: merge release workflow into build workflow and clean up
- Merge release logic into build.yml to avoid cross-workflow artifact access issues
- Add release jobs (create-release, upload-release-assets, update-latest-version, publish-release) that run only for tag pushes
- Use standard actions/download-artifact@v4 within the same workflow (no cross-workflow limitations)
- Deprecate standalone release.yml workflow with warning job and confirmation requirement
- Remove references to deleted release-notes-template.md file from both workflows
- Update build summary messages to reflect integrated release process

This resolves the 'Prepare release assets' failure by eliminating the need for cross-workflow artifact access.
2025-07-17 07:06:51 +08:00
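
A sketch of the integrated layout, assuming artifacts were uploaded with `actions/upload-artifact@v4` earlier in the same workflow:

```yaml
jobs:
  create-release:
    needs: build
    if: startsWith(github.ref, 'refs/tags/')  # release jobs run only for tag pushes
    runs-on: ubuntu-latest
    steps:
      # Same workflow, so the standard action works with no cross-workflow limits
      - uses: actions/download-artifact@v4
        with:
          path: release-assets
```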
overtrue
0693cca1a4 fix: resolve workflow_run artifact access issue in release pipeline
- Replace actions/download-artifact@v4 with GitHub API calls to access artifacts from triggering workflow
- Add proper permissions (contents: read, actions: read) to prepare-assets job
- Handle both workflow_run and workflow_dispatch trigger scenarios
- Fix the root cause: workflow_run events cannot access artifacts from triggering workflows using standard download-artifact action

Fixes the 'Prepare release assets' step failure by implementing cross-workflow artifact access through GitHub API.
2025-07-17 06:58:09 +08:00
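
A hedged sketch of that API-based access; `actions: read` permission is required, and the list/download calls are the standard Actions artifact endpoints:

```yaml
- name: Download artifacts from the triggering run
  env:
    GH_TOKEN: ${{ github.token }}
  run: |
    RUN_ID="${{ github.event.workflow_run.id }}"
    # List the triggering run's artifacts, then fetch each archive by id
    gh api "repos/${GITHUB_REPOSITORY}/actions/runs/${RUN_ID}/artifacts" \
      --jq '.artifacts[] | "\(.id) \(.name)"' |
    while read -r ID NAME; do
      gh api "repos/${GITHUB_REPOSITORY}/actions/artifacts/${ID}/zip" > "${NAME}.zip"
    done
```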
安正超
0d9f9e381a refactor: use workflow_run trigger for release workflow to eliminate timing issues (#241)
* fix: use correct tag reference in release workflow wait-for-artifacts step

- Change ref from github.ref to needs.release-check.outputs.tag
- Fix issue where wait-on-check-action receives full git reference (refs/tags/1.0.0-alpha.21)
  instead of clean tag name (1.0.0-alpha.21)
- This resolves timeout errors when waiting for build artifacts during release process

Fixes the release workflow failure for tag 1.0.0-alpha.21

* refactor: use workflow_run trigger for release workflow instead of push

- Replace push trigger with workflow_run to eliminate timing issues
- Release workflow now triggers only after Build workflow completes successfully
- Remove wait-for-artifacts step completely (no longer needed)
- Add should_release condition to control release execution
- Support both tag pushes and manual releases via workflow_dispatch
- Align with docker.yml pattern for better reliability

This completely resolves the release workflow timeout issues by ensuring
build artifacts are always available before the release process starts.

Fixes the fundamental timing issue where release.yml and build.yml
were racing against each other when triggered by the same tag push.
2025-07-17 06:48:09 +08:00
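
A compact sketch of the gating described above; the workflow name and output wiring are assumptions:

```yaml
on:
  workflow_run:
    workflows: ["Build"]
    types: [completed]
  workflow_dispatch: {}

jobs:
  release-check:
    # Runs only after Build succeeds, so artifacts are guaranteed to exist
    if: >-
      github.event_name == 'workflow_dispatch' ||
      github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    steps:
      - run: echo "should_release=true" >> "$GITHUB_OUTPUT"
```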
安正超
6c7aa5a7ae fix: use correct tag reference in release workflow wait-for-artifacts step (#240)
- Change ref from github.ref to needs.release-check.outputs.tag
- Fix issue where wait-on-check-action receives full git reference (refs/tags/1.0.0-alpha.21)
  instead of clean tag name (1.0.0-alpha.21)
- This resolves timeout errors when waiting for build artifacts during release process

Fixes the release workflow failure for tag 1.0.0-alpha.21
2025-07-17 06:36:57 +08:00
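
The fix boils down to stripping the `refs/tags/` prefix before handing the value to `wait-on-check-action`; a minimal sketch:

```yaml
- name: Export the clean tag name
  id: release-check
  run: |
    # github.ref is "refs/tags/1.0.0-alpha.21"; downstream steps need the bare tag
    echo "tag=${GITHUB_REF#refs/tags/}" >> "$GITHUB_OUTPUT"
```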
overtrue
a27d935925 wip 2025-07-17 06:31:25 +08:00
安正超
b4f87a4fee feat: disable Docker builds for development versions (#239)
* feat: disable Docker builds for development versions

- Remove dev-latest, main-latest, and dev-* version options from manual triggers
- Skip Docker builds for development versions in workflow_run events
- Only build Docker images for releases (v1.0.0) and prereleases (v1.0.0-alpha1)
- Simplify tags generation logic by removing development branch handling
- Update workflow documentation to reflect release-only Docker strategy

BREAKING CHANGE: Development Docker images are no longer built automatically

* feat: remove dev channel support from Dockerfile

- Remove CHANNEL build argument (no longer needed)
- Simplify download logic to only support release channel
- Remove dev-specific package download paths
- Update BASE_URL to point directly to release directory
- Remove channel label from Docker image metadata
- Streamline version handling (latest vs specific release)

This aligns with the workflow changes that disabled dev Docker builds.
2025-07-17 06:06:40 +08:00
安正超
ee5f94a2e2 fix: use consistent short SHA generation across workflows (#238)
- Replace manual cut -c1-7 with git rev-parse --short in docker.yml
- Ensures consistent short SHA length between build.yml and docker.yml
- Git automatically adjusts length for uniqueness, preventing conflicts
2025-07-17 05:48:30 +08:00
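
A one-step sketch of the consistent form:

```yaml
- name: Compute short SHA
  run: |
    # Unlike a fixed `cut -c1-7`, git picks the minimal unique abbreviation,
    # so build.yml and docker.yml can never disagree on the suffix
    echo "short_sha=$(git rev-parse --short HEAD)" >> "$GITHUB_OUTPUT"
```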
安正超
9c3cf554d3 fix: correct Docker build logic for dev version downloads (#237) 2025-07-17 05:36:15 +08:00
安正超
addbfa5487 fix: resolve Docker workflow manual build parameter issues (#236)
- Remove unsupported 'scopes' parameter from docker/login-action@v3
  * Fixes 'Unexpected input(s) scopes' error during Docker Hub login

- Add version format conversion for Dockerfile compatibility
  * main-latest/dev-latest → RELEASE=latest + CHANNEL=dev
  * latest → RELEASE=latest + CHANNEL=release
  * dev-* → RELEASE=dev-* + CHANNEL=dev
  * v* → RELEASE={version without v} + CHANNEL=release

- Fix Docker build parameter passing
  * Use converted docker_release and docker_channel values
  * Ensures correct binary download URLs in Dockerfile

Resolves manual Docker build failures reported in:
https://github.com/rustfs/rustfs/actions/runs/16330398463/job/46131302262
2025-07-17 05:21:06 +08:00
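
A sketch of the conversion table above as a shell `case` (the surrounding step and `VERSION` variable are assumptions):

```yaml
- name: Map workflow version input to Dockerfile build args
  run: |
    case "$VERSION" in
      main-latest|dev-latest) RELEASE=latest;         CHANNEL=dev ;;
      latest)                 RELEASE=latest;         CHANNEL=release ;;
      dev-*)                  RELEASE="$VERSION";     CHANNEL=dev ;;
      v*)                     RELEASE="${VERSION#v}"; CHANNEL=release ;;
    esac
    echo "docker_release=$RELEASE" >> "$GITHUB_OUTPUT"
    echo "docker_channel=$CHANNEL" >> "$GITHUB_OUTPUT"
```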
安正超
5eb461d7b7 refactor: remove redundant linux_builds_success logic in docker workflow (#235)
- Remove linux_builds_success output and related variables
- Simplify build-docker condition to only check should_build
- The should_build check already includes workflow success verification
- Reduce code complexity while maintaining the same functionality
2025-07-17 05:09:41 +08:00
安正超
1ea45afcd7 feat: Implement precise Docker build triggering using workflow_run event (#233)
* fix: correct YAML indentation error in docker workflow

- Fix incorrect indentation at line 237 in .github/workflows/docker.yml
- Step 'Extract metadata and generate tags' had 12 spaces instead of 6
- This was causing YAML syntax validation to fail

* fix: restore unified build-rustfs task with correct YAML syntax

- Revert complex job separation back to single build-rustfs task
- Maintain Linux and macOS builds in unified matrix
- Fix YAML indentation and syntax issues
- Docker builds will use only Linux binaries as designed in Dockerfile

* feat: implement precise Docker build triggering using workflow_run

- Use workflow_run event to trigger Docker builds independently
- Add precise Linux build status checking via GitHub API
- Only trigger Docker builds when both Linux architectures succeed
- Remove coupling between build.yml and docker.yml workflows
- Improve TARGETPLATFORM consistency in Dockerfile

This resolves the issue where Docker builds would trigger even if
Linux ARM64 builds failed, causing missing binary artifacts during
multi-architecture Docker image creation.
2025-07-17 04:51:08 +08:00
安正超
dbd86f6aee fix: correct YAML indentation error in docker workflow (#232)
- Fix incorrect indentation at line 237 in .github/workflows/docker.yml
- Step 'Extract metadata and generate tags' had 12 spaces instead of 6
- This was causing YAML syntax validation to fail
2025-07-17 04:28:31 +08:00
overtrue
af693f7b3f refactor: restructure Docker build pipeline to depend on binary builds
- Change docker.yml to use workflow_call triggered by build.yml
- Remove redundant force_build parameter from build.yml
- Simplify build_docker parameter (build implies push in CI/CD)
- Add proper dependency chain: build.yml -> docker.yml -> registry
- Update documentation to reflect new architecture
- Mark Dockerfile.source as local development only
2025-07-17 04:19:20 +08:00
安正超
3be5ee6445 fix: simplify Dockerfile.source and resolve build issues (#231)
- Remove complex dependency caching to fix workspace structure issues
- Remove sccache to eliminate rustc wrapper errors
- Ensure target installation in build step for cross-compilation
- Add debug output and error handling for unsupported platforms
- Use simple COPY . . approach for more reliable builds
2025-07-16 23:53:28 +08:00
overtrue
0acc8fe26a fix: docker build from source 2025-07-16 23:46:30 +08:00
overtrue
ecf40eb86c fix: docker build from source 2025-07-16 23:43:34 +08:00
overtrue
48ce7055f8 fix: remove dockerhub username 2025-07-16 22:35:14 +08:00
weisd
749f55d688 feat: enhance version function with automatic version increment (#227) 2025-07-16 18:09:43 +08:00
loverustfs
e5d17f5382 Disable Dockerfile.source 2025-07-16 18:03:09 +08:00
weisd
982cc66c74 fix: Refactor session policy handling and fix owner permission check (#226) 2025-07-16 16:40:51 +08:00
loverustfs
74bf4909c8 Modify docker source file 2025-07-15 23:17:39 +08:00
loverustfs
9c956b4445 Disable other docker mode 2025-07-15 22:10:00 +08:00
weisd
4c1fc9317e fix: content-range (#216) 2025-07-15 17:23:33 +08:00
weisd
a9d77a618f feat: implement list_parts API for S3 multipart upload compatibility (#209)
* feat: add list_parts api
2025-07-15 16:04:03 +08:00
overtrue
38cdc87e93 fix 2025-07-15 02:36:07 +08:00
安正超
f5ff93b65e fix: restore working build configuration by removing cargo.config.toml (#206)
- Remove cargo.config.toml file that was causing build issues
- Restore .github/workflows/build.yml to working state from commit 2e9792577f
- These changes ensure the build system works correctly again
2025-07-15 02:24:13 +08:00
安正超
6ef6f188e5 fix: Restore working build configuration from 4fb4b353 (#204)
* fix: Resolve zstd-sys Zig compilation issues

- Remove specific Zig version constraint in action.yml to use default version
- Clean up duplicate environment variable settings in build-rustfs.sh
- Add CARGO_TARGET_*_LINKER environment variables for better cross-compilation support
- Optimize build configuration for consistent cross-platform compilation

Fixes compilation issues with zstd-sys when using Zig cross-compilation.
Aligns with previously working configuration that uses default Zig version.

* fix: Restore working build configuration from 4fb4b353

- Restore matrix.cross parameter to differentiate cross-compilation
- Use simple cargo zigbuild instead of complex build-rustfs.sh script
- Remove unnecessary zstd dependencies from action.yml
- Restore console asset download step
- Use correct target directory path for packaging
- Align with known working configuration from commit 4fb4b353

This reverts to the proven working build approach that successfully
performed cross-platform compilation.

* fix: Align build-rustfs.sh with working version logic

- Simplify build logic to match working version 4fb4b353
- Use exact same build commands as the working build.yml:
  * cargo build for native compilation
  * cargo zigbuild for Linux ARM64 cross-compilation
  * cross build for Windows ARM64 cross-compilation
- Remove complex environment variable setup that caused conflicts
- Add touch rustfs/build.rs to match working version
- Use -p rustfs --bins flag consistent with working version

This ensures build-rustfs.sh (if used) follows the proven working approach.
2025-07-14 20:22:29 +08:00
安正超
ccad91a4a9 fix: resolve zstd-sys compilation issues with zig cross-compilation (#203)
- Update to mlugg/setup-zig@v2 for better stability and features
- Use Zig 0.13.0 for improved musl target support
- Add system zstd libraries (libzstd-dev, zstd) to Ubuntu dependencies
- Configure environment variables for zstd-sys to use pkg-config
- Enable pkg-config feature for zstd dependency to prefer system library
- Add proper C/C++ compiler configuration for musl targets

Fixes the 'error: unable to parse target query x86_64-unknown-linux-musl: UnknownOperatingSystem'
compilation error in zstd-sys during cross-compilation.
2025-07-14 20:01:52 +08:00
安正超
63b79ae151 fix: add cross-platform SHA256 checksum generation (#202)
- Add generate_sha256() function to handle cross-platform SHA256 generation
- Use shasum -a 256 on macOS instead of sha256sum
- Use sha256sum on Linux with shasum as fallback
- Replace direct sha256sum usage in build script with new function
- Fixes 'sha256sum: command not found' error on macOS builds
2025-07-14 19:45:42 +08:00
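
A sketch of such a helper (the artifact name is hypothetical):

```yaml
- name: Generate SHA256 checksum (Linux and macOS)
  run: |
    generate_sha256() {
      if command -v sha256sum >/dev/null 2>&1; then
        sha256sum "$1" > "$1.sha256sum"       # Linux
      else
        shasum -a 256 "$1" > "$1.sha256sum"   # macOS ships shasum, not sha256sum
      fi
    }
    generate_sha256 "rustfs-linux-x86_64.zip"  # hypothetical artifact name
```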
安正超
9284f64e2a fix: resolve aarch64-unknown-linux-musl build issue with cargo-zigbuild integration (#201)
- Enable install-cross-tools in GitHub Actions build workflow
- Add cargo-zigbuild support for Linux targets in build-rustfs.sh
- Prioritize cargo-zigbuild over cross tool for better glibc compatibility
- Add musl-specific environment variables for proper static linking
- Update error messages with Linux-specific build suggestions
- Configure Zig compiler environment for musl targets
2025-07-14 19:31:30 +08:00
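
A minimal sketch of the zigbuild invocation, assuming Zig is already on the PATH (the workflow installs it via setup-zig):

```yaml
- name: Cross-compile aarch64 musl binary with cargo-zigbuild
  run: |
    cargo install cargo-zigbuild
    rustup target add aarch64-unknown-linux-musl
    # zig supplies the cross C toolchain; musl yields a statically linked binary
    cargo zigbuild -p rustfs --bins --release --target aarch64-unknown-linux-musl
```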
安正超
b9bbae27de fix: resolve macOS build issue by disabling cross tool for apple-darwin targets (#200) 2025-07-14 19:25:01 +08:00
安正超
36e3efb5a5 feat: implement Docker improvements and binary build scripts (#191)
* feat: implement Docker improvements and binary build scripts

This commit transforms the RustFS Docker build system to follow MinIO's best practices:

## 🏗️ Binary Build Script (build-rustfs.sh)
- Create independent binary compilation script for multi-platform builds
- Support x86_64 and aarch64 Linux musl targets
- Include checksum generation and optional binary signing
- Support cross-compilation and upload functionality
- Automated target installation and environment setup

## 🐳 Docker Improvements
- Rewrite Dockerfiles to download precompiled binaries instead of building from source
- Follow MinIO's approach for security and binary verification
- Add comprehensive LABEL metadata (version, build-date, vcs-ref)
- Implement proper environment variable management
- Add signature verification with minisign (commented for future use)
- Include static curl download for minimal runtime dependencies

## 🚀 Enhanced Build Script (docker-buildx.sh)
- Inspired by MinIO's docker-buildx.sh for consistency and reliability
- Support multiple platforms with proper build arguments
- Auto-detect git versions and pass metadata to containers
- Improved error messages with helpful troubleshooting hints
- Cleanup and cache management between builds

## 🛠️ Supporting Scripts
- scripts/download-static-curl.sh: Download statically compiled curl
- scripts/setup-test-binaries.sh: Create test binaries for local development

## 📋 Key Benefits
- Faster Docker builds (download vs compile)
- Better security with signature verification
- Consistent with industry standards (MinIO approach)
- Proper multi-platform support
- Enhanced metadata and traceability
- Independent binary distribution capability

* feat: update Docker files to use Aliyun OSS for binary downloads

* feat: merge stash with OSS binary download improvements

- Remove old build_rustfs.sh script
- Keep Aliyun OSS download URLs for binary retrieval
- Maintain Docker build improvements from stash
- Resolve merge conflicts between stash and OSS updates

* feat: improve build-rustfs.sh with auto platform detection

- Auto-detect current platform using uname (like old build_rustfs.sh)
- Default to building for current platform only
- Add --all-platforms flag for cross-compilation to Linux musl targets
- Support macOS (darwin) and Linux platforms
- Auto-enable cross compilation when needed
- Provide better usage examples and platform detection info

This makes the script much more user-friendly by default while
maintaining flexibility for cross-compilation scenarios.

* refactor: simplify build-rustfs.sh for CI/CD pipeline usage

- Remove cross-compilation complexity (each CI runner builds natively)
- Focus on single platform builds per runner
- Remove --all-platforms and --cross options
- Simplify to match CI/CD workflow where:
  * Linux x86_64 runner builds Linux x86_64 binary
  * Linux ARM64 runner builds Linux ARM64 binary
  * macOS x86_64 runner builds macOS x86_64 binary
  * macOS ARM64 runner builds macOS ARM64 binary
- Keep signing and upload functionality for release CI
- Make the script's purpose and usage clearer

This aligns with the user's understanding that build scripts should
focus on native compilation for the current platform only.

* feat: update download server domain to dl.rustfs.com

- Update Dockerfile to use dl.rustfs.com/dev/ for development binaries
- Update Dockerfile.release to use dl.rustfs.com/release/ for release binaries
- Update docker-buildx.sh error messages with new URLs
- Update build-rustfs.sh upload target to dl.rustfs.com
- Update test scripts to reference new domain
- Clean up remaining git conflict markers

This centralizes all binary downloads through the official
dl.rustfs.com domain instead of direct OSS access.

* fix: correct dl.rustfs.com path structure to include /artifacts/rustfs/

- Update all download URLs to use correct path structure:
  * Dev: https://dl.rustfs.com/artifacts/rustfs/dev/
  * Release: https://dl.rustfs.com/artifacts/rustfs/release/
- Test confirmed both paths return HTTP 200 with application/zip content-type
- Update Dockerfile, Dockerfile.release, docker-buildx.sh, and build-rustfs.sh
- Update test scripts with correct base path

The dl.rustfs.com domain requires the /artifacts/rustfs/ prefix
to access the binary files correctly.

* feat: refactor Dockerfile to download binaries from GitHub Releases

- Changed binary download source from dl.rustfs.com to GitHub Releases
- Added support for latest release auto-detection via GitHub API
- Enhanced error handling with detailed messages and helpful links
- Added optional checksum verification using SHA256SUMS
- Improved architecture support for amd64 and arm64
- Removed unnecessary minisign installation
- Added jq dependency for JSON parsing

* feat: consolidate Docker build to use single Dockerfile

- Removed Dockerfile.release and use unified Dockerfile instead
- Updated docker-buildx.sh to use single Dockerfile with build args
- Both latest and release variants now use GitHub Releases
- Simplified build process and reduced maintenance overhead
- Updated error messages to point to GitHub releases

* chore: remove unused Dockerfile.obs

- Removed Dockerfile.obs as it's no longer needed
- Simplified Docker build configuration

* feat: unify Docker prebuild variants to use GitHub Releases

- Updated .docker/alpine/Dockerfile.prebuild to download from GitHub Releases
- Updated .docker/ubuntu/Dockerfile.prebuild to download from GitHub Releases
- All prebuild variants now consistently use GitHub Releases as binary source
- Added checksum verification for all prebuild variants
- Updated .docker/README.md to reflect unified GitHub Releases approach
- Improved error handling and user guidance in all prebuild Dockerfiles

* feat: major Docker structure simplification and consolidation

## 🎯 Simplified Docker Structure

Moved from complex multi-directory structure to clean root-level organization:

### Before:
- Dockerfile (production)
- .docker/alpine/Dockerfile.prebuild (duplicate)
- .docker/alpine/Dockerfile.source
- .docker/ubuntu/Dockerfile.prebuild (duplicate)
- .docker/ubuntu/Dockerfile.source
- .docker/ubuntu/Dockerfile.dev

### After:
- Dockerfile (production - Alpine + GitHub Releases)
- Dockerfile.source (source build - Ubuntu + cross-compilation)
- Dockerfile.dev (development - Ubuntu + full toolchain)

## 🔧 Key Changes

- **Eliminated Duplicates**: Removed redundant prebuild variants
- **Moved Core Files**: Dockerfile.{source,dev} now in root directory
- **Unified Configuration**: cargo.config.toml moved to root
- **Updated References**: Fixed all GitHub Actions and docker-compose paths
- **Simplified CI Matrix**: Reduced from 5 to 3 Docker variants

## 📦 Preserved Valuable Diversity

- **Production**: Alpine-based for minimal size
- **Source**: Ubuntu-based with cross-compilation support
- **Development**: Ubuntu-based with full development tools

## 🚀 Benefits

- Cleaner project structure
- Easier maintenance and navigation
- Reduced CI/CD complexity
- Faster build matrix execution
- Maintained functionality and flexibility

* chore: remove duplicate cargo.config.toml from .docker directory

The file is now in the root directory and no longer needed in .docker/

* fix: update all references to removed Dockerfile files

- Updated .docker/compose/README.md to reference Dockerfile.source instead of Dockerfile.obs
- Updated docker-compose.yml to use Dockerfile.source instead of Dockerfile.dev
- Updated scripts/build-docker-multiarch.sh to use Dockerfile.source for devenv builds
- Updated .github/workflows/docker.yml to use Dockerfile.source for dev builds
- Updated Makefile to use Dockerfile.source for init-devenv target
- Updated .docker/README.md to remove references to non-existent Dockerfile.dev
- Ensured all Docker configurations consistently use the unified Dockerfile structure

* chore: remove unnecessary console static assets download

- Remove obsolete download steps from build.yml and performance.yml
- Console static assets are already embedded via rust-embed in rustfs/static/
- The download from dl.rustfs.com is no longer needed as project contains complete console assets
- This improves build reliability and reduces external dependencies
- Replaced with verification steps that confirm embedded assets are present

* feat: update Makefile and README.md for new Docker build system

- Updated Makefile to use unified Docker build system:
  - Replace references to non-existent Dockerfile.ubuntu22.04 and Dockerfile.rockylinux9.3
  - Add new docker-buildx targets using docker-buildx.sh script
  - Deprecate old docker-build-multiarch targets with warnings
  - Add docker-build-production and docker-build-source targets
  - Update help-docker with new command structure

- Updated README.md with docker-buildx.sh usage:
  - Add comprehensive Docker build from source section
  - Document multi-architecture build capabilities
  - Include both script and Make target examples
  - Show registry flexibility and build optimization features
  - Update step numbers in quickstart guide

- Improve developer experience with clear documentation and updated tooling
- Maintain backward compatibility with deprecation warnings

* feat: integrate console assets download into build-rustfs.sh

- Added console download functionality to build-rustfs.sh:
  - New flags: --download-console, --force-console-update, --console-version
  - Intelligent detection of existing console assets
  - Retry logic with fallback error handling
  - Consistent with Docker build asset management

- Updated scripts to use unified build process:
  - scripts/static.sh: Now uses build-rustfs.sh for console downloads
  - scripts/run.sh: Uses build-rustfs.sh instead of direct curl
  - scripts/run.ps1: Updated with guidance for Windows users

- Benefits:
  - Unified asset management across all build processes
  - Consistent version handling and retry logic
  - Eliminates duplicate download logic
  - Better error handling and user feedback
  - Preparation for CI/CD integration

- Removed unused download-static-curl.sh script

This change centralizes console asset management and prepares for
streamlined CI/CD processes where build-rustfs.sh becomes the
single point of truth for binary and asset builds.

* fix: update PowerShell script to use unified console asset management

- Updated scripts/run.ps1 to use build-rustfs.sh for console asset downloads
- Added guidance for Windows users to use the unified build script
- Maintains consistency across all platform-specific scripts

* feat: add binary verification to build script

- Add verify_binary function to test built binaries
- Test --help and --version commands
- Verify binary structure with readelf/otool
- Add --skip-verification option for cross-compilation
- Include verification status in build output
- Automatic error handling if verification fails

* feat: add platform selection support to build script

- Add --platform parameter to build-rustfs.sh for target platform selection
- Implement cross-compilation support with automatic 'cross' tool detection
- Auto-enable --skip-verification for cross-compilation scenarios
- Update all Makefile build targets to use unified build-rustfs.sh script
- Add helpful error messages and suggestions for cross-compilation failures
- Update help documentation with platform selection examples
- Improve build consistency across different architectures

* feat: modernize CI/CD build process with build-rustfs.sh

- Replace manual cargo build commands with unified build-rustfs.sh script
- Simplify matrix configuration by removing cross-compilation flags
- Ensure consistency between local and CI/CD builds
- Automatic cross-compilation tool detection and installation
- Built-in binary verification for quality assurance
- Unified console asset management
- Better error handling and suggestions

Benefits:
- Consistent build process across all environments
- Automatic detection and handling of cross-compilation scenarios
- Built-in quality checks with binary verification
- Reduced CI/CD configuration complexity
- Better maintainability with single source of truth for build logic

* feat: optimize CI/CD workspace path management

- Add WORKSPACE_DIR environment variable to cache github.workspace
- Set default working-directory at job level for consistency
- Use explicit workspace paths in critical operations
- Improve reliability and maintainability of CI/CD paths
- Ensure consistent behavior across different GitHub Actions environments

Benefits:
- More explicit and reliable path handling
- Better maintainability with centralized workspace reference
- Reduced risk of path-related issues in CI/CD
- Consistent working directory across all job steps

* refactor: simplify CI/CD path management - remove redundant workspace references

- Remove unnecessary WORKSPACE_DIR environment variable
- Remove redundant defaults.run.working-directory setting
- Use relative paths since GITHUB_WORKSPACE is the default working directory
- Follow GitHub Actions best practices by leveraging default behavior

As per GitHub Actions documentation, GITHUB_WORKSPACE is already the default
working directory, so explicit specification is unnecessary in most cases.

* docs: update Docker README to reflect current project state

- Fix directory structure: remove non-existent nginx/ directory
- Correct base OS: Dockerfile.source uses Debian Bookworm, not Ubuntu 22.04
- Add docker-buildx.sh script documentation
- Update Docker tag examples to match actual CI/CD workflows
- Add CI/CD integration section explaining automated builds
- Document build variants and manual build options
- Reflect current project architecture and tooling

These updates ensure the documentation accurately represents the current
Docker build system and CI/CD workflows.

* fix: update Docker command in rustfs README

- Replace quay.io registry with Docker Hub (rustfs/rustfs:latest)
- Remove separate console port 9001, console now runs on main port 9000
- Add both Docker and Podman examples for user choice
- Fix console access URL to use unified port

This aligns with the recent console port consolidation changes
and the project's move to Docker Hub as the primary registry.

* wip

* fix: remove unnecessary entrypoint.sh and fix Docker paths

* Update Dockerfile

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* cleanup: remove unused DOCKERFILE_PATH variable from Makefile

* feat: update Docker build to use dl.rustfs.com for binary downloads

- Replace GitHub releases download with dl.rustfs.com
- Add CHANNEL parameter support (release/dev)
- Update docker-buildx.sh to support channel-specific builds
- Improve error messages with new download URLs
- Support both latest and specific version downloads
- Add channel validation in build script

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-14 19:15:46 +08:00
Nugine
04d1c8724d build: upgrade s3s (#193) 2025-07-14 15:19:01 +08:00
houseme
4fb4b353f8 improve code for cargo.toml 2025-07-13 23:13:08 +08:00
houseme
564a02f344 feat(obs, net): Add Tempo service and enable dual-stack listener (#192)
This commit introduces two key enhancements: the integration of Grafana Tempo for distributed tracing and the implementation of a dual-stack TCP listener for improved network compatibility.

- **Observability**:
  - Adds the `tempo` service to the `docker-compose.yml` observability stack.
  - Tempo is configured to collect and store traces, integrating with the existing OpenTelemetry setup.
  - A custom `tempo-entrypoint.sh` script is included to manage volume permissions on startup.

- **Networking**:
  - Modifies `http.rs` to support dual-stack (IPv4/IPv6) connections on a single socket.
  - By setting the `IPV6_V6ONLY` socket option to `false`, the server can now accept both IPv6 and IPv4-mapped IPv6 traffic, enhancing cross-platform support.
2025-07-13 20:22:46 +08:00
安正超
5b582a4234 feat: disable GitHub Packages uploads in Docker workflow (#189) 2025-07-12 19:07:56 +08:00
安正超
2e9792577f fix: correct SHA length matching in GitHub Actions workflow (#188)
* feat: enhance Docker build system with advanced version selection

## New Features
- Add force_rebuild parameter for Docker workflow manual triggers
- Improve version pattern matching with better regex validation
- Add comprehensive Docker Build Guide documentation
- Enhanced logging and error reporting for build process
- Support for prerelease version detection (alpha, beta, rc)

## Improvements
- Better version pattern validation for releases and dev builds
- More detailed build logs with context and warnings
- Clear documentation for all Docker image variants and use cases
- Updated README with Docker version examples and guide reference

## Documentation
- New comprehensive Docker Build Guide (docs/DOCKER_BUILD_GUIDE.md)
- Updated README with version-specific Docker examples
- Workflow dependency diagram and troubleshooting guide
- Complete reference for all supported version patterns

This enhancement provides a robust, well-documented Docker build system
that supports flexible version selection while maintaining deterministic
build behavior without fallback mechanisms.

* fix: simplify dev version regex pattern in docker workflow

* fix: simplify version number regex pattern in docker workflow

* feat: remove docs directory

* fix: correct SHA length matching in main-latest filename generation

* refactor: use bash string operations instead of sed for main-latest filename generation

* refactor: simplify filename generation by removing redundant intermediate variables

* feat: add dev-latest version generation for all development builds

* feat: add dev-latest support to Docker workflow
2025-07-12 18:46:37 +08:00
安正超
2066e0a03b fix: correct SHA length matching in main-latest filename generation (#187)
* feat: enhance Docker build system with advanced version selection

## New Features
- Add force_rebuild parameter for Docker workflow manual triggers
- Improve version pattern matching with better regex validation
- Add comprehensive Docker Build Guide documentation
- Enhanced logging and error reporting for build process
- Support for prerelease version detection (alpha, beta, rc)

## Improvements
- Better version pattern validation for releases and dev builds
- More detailed build logs with context and warnings
- Clear documentation for all Docker image variants and use cases
- Updated README with Docker version examples and guide reference

## Documentation
- New comprehensive Docker Build Guide (docs/DOCKER_BUILD_GUIDE.md)
- Updated README with version-specific Docker examples
- Workflow dependency diagram and troubleshooting guide
- Complete reference for all supported version patterns

This enhancement provides a robust, well-documented Docker build system
that supports flexible version selection while maintaining deterministic
build behavior without fallback mechanisms.

* fix: simplify dev version regex pattern in docker workflow

* fix: simplify version number regex pattern in docker workflow

* feat: remove docs directory

* fix: correct SHA length matching in main-latest filename generation
2025-07-12 11:43:17 +08:00
安正超
a4d49a500f feat: Enhanced Docker Build System with Advanced Version Selection (#186)
* feat: enhance Docker build system with advanced version selection

## New Features
- Add force_rebuild parameter for Docker workflow manual triggers
- Improve version pattern matching with better regex validation
- Add comprehensive Docker Build Guide documentation
- Enhanced logging and error reporting for build process
- Support for prerelease version detection (alpha, beta, rc)

## Improvements
- Better version pattern validation for releases and dev builds
- More detailed build logs with context and warnings
- Clear documentation for all Docker image variants and use cases
- Updated README with Docker version examples and guide reference

## Documentation
- New comprehensive Docker Build Guide (docs/DOCKER_BUILD_GUIDE.md)
- Updated README with version-specific Docker examples
- Workflow dependency diagram and troubleshooting guide
- Complete reference for all supported version patterns

This enhancement provides a robust, well-documented Docker build system
that supports flexible version selection while maintaining deterministic
build behavior without fallback mechanisms.

* fix: simplify dev version regex pattern in docker workflow

* fix: simplify version number regex pattern in docker workflow

* feat: remove docs directory
2025-07-12 11:31:00 +08:00
安正超
a8fbced928 feat: improve Docker build with version selection and remove fallback mechanism (#185)
- Add version input parameter to docker.yml workflow_dispatch
- Support main-latest, latest, dev-xxx, and specific version patterns
- Remove complex fallback mechanism from all Dockerfile variants
- Add clear error handling with helpful user guidance
- Create main-latest versions for development builds
- Ensure Docker builds require explicit VERSION parameter
- Update all Docker variants (production, alpine, ubuntu) consistently

This change solves the build dependency issue where Docker builds
could fail when expected binary artifacts don't exist, by providing
a clean version selection mechanism without unpredictable fallbacks.
2025-07-12 11:09:44 +08:00
安正超
99ca405279 feat: enhance cursor rules with strict main branch protection (#184) 2025-07-12 10:59:17 +08:00
overtrue
2e1d1018aa refactor: simplify version handling by removing unnecessary CLEAN_VERSION variable 2025-07-12 10:42:47 +08:00
overtrue
c57b4be1c7 refactor: use bash variable expansion for dev- prefix handling 2025-07-12 10:41:08 +08:00
overtrue
238a016242 chore: ignore .secrets 2025-07-12 10:30:37 +08:00
shiro.lee
2c0c7fafa3 fix: Optimized io::ErrorKind::NotFound error handling during Windows system startup (#181) 2025-07-12 06:54:21 +08:00
overtrue
ee4962fe31 fix: docker build 2025-07-11 23:42:46 +08:00
安正超
55895d0a10 fix: resolve Docker Hub authentication issues in multi-platform builds (#180) 2025-07-11 23:36:37 +08:00
overtrue
676897d389 fix: docker 2025-07-11 23:29:47 +08:00
loverustfs
5205ff6695 Add hellogithub icon 2025-07-11 23:24:55 +08:00
overtrue
15cf3ce92b fix: docker 2025-07-11 23:22:48 +08:00
安正超
c0441b2412 fix: resolve GitHub Actions workflow validation errors in docker.yml (#179)
* fix: resolve GitHub Actions workflow validation errors in docker.yml

- Fix usage of secrets context in conditional expressions
- Add environment variables to build-docker and create-manifest jobs
- Replace 'secrets.DOCKERHUB_USERNAME' with 'env.DOCKERHUB_USERNAME' in if conditions
- Maintain secure handling of Docker Hub credentials through proper env context

* Update .github/workflows/docker.yml

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-07-11 23:12:58 +08:00
安正超
6267872ddb feat: add latest version support for release builds (#178)
- Add automatic creation of latest version files for release and prerelease builds
- Simplify installation script by providing direct latest URLs
- Support rustfs-linux-{arch}-latest.zip naming convention
- Improve build artifact management and user experience
2025-07-11 23:01:36 +08:00
安正超
618779a89d feat: implement multi-channel release system with artifact naming (#176)
* feat: implement multi-channel release system with artifact naming

- Add dedicated release.yml workflow for handling GitHub releases
- Refactor build.yml to support dev/release/prerelease artifact naming
- Update docker.yml to support version-specific image tagging
- Implement artifact naming rules:
  - Dev: rustfs-{platform}-{arch}-dev-{sha}.zip
  - Release: rustfs-{platform}-{arch}-v{version}.zip
  - Prerelease: rustfs-{platform}-{arch}-v{version}.zip
- Add OSS upload directory separation (dev/ vs release/)
- Only stable releases update latest.json and create latest tags
- Separate GitHub Release creation from build workflow
- Add comprehensive build summaries and status reporting

This enables proper multi-channel distribution with clear artifact
identification and prevents confusion between dev and stable releases.

* fix: support version tags without v prefix (1.0.0 instead of v1.0.0)

- Update trigger patterns from 'v*.*.*' to '*.*.*' in all workflows
- Fix version extraction logic to handle tags without v prefix
- Maintain backward compatibility with existing logic

Note: Artifact naming still includes 'v' prefix for clarity
(e.g., tag '1.0.0' creates 'rustfs-linux-x86_64-v1.0.0.zip')

* feat: update Dockerfile to support multi-channel release system

- Add build arguments for VERSION, BUILD_TYPE, and TARGETARCH
- Support dynamic artifact download based on build type:
  - Development: downloads from artifacts/rustfs/dev/
  - Release: downloads from artifacts/rustfs/release/
- Auto-generate correct filenames based on new naming convention:
  - Dev: rustfs-linux-{arch}-dev-{sha}.zip
  - Release: rustfs-linux-{arch}-v{version}.zip
- Add architecture mapping for multi-platform builds
- Pass BUILD_TYPE parameter from docker.yml workflow
- Improve error handling with helpful download path suggestions

This ensures Docker images use the correct pre-built binaries
from the new multi-channel release system.

* feat: optimize and consolidate Dockerfile structure

## Major Improvements:

### Created Missing Files
- Add .docker/Dockerfile.alpine for lightweight Alpine-based builds
- Support both pre-built binary download and source compilation

### 🔧 Fixed Critical Issues
- Fix Dockerfile.obs: ubuntu:latest → ubuntu:22.04 (stable version)
- Add proper security practices (non-root user, health checks)
- Add proper error handling and environment variables

### 🗑️ Eliminated Redundancy
- Remove .docker/Dockerfile.ubuntu22.04 (duplicate of devenv)
- Update docker.yml workflow to use devenv for ubuntu variant
- Consolidate similar functionality into fewer, better files

### 🚀 Enhanced Functionality
- Make devenv Dockerfile dual-purpose (dev environment + runtime)
- Add VERSION/BUILD_TYPE support for dynamic binary downloads
- Improve security with proper user management
- Add comprehensive health checks and error handling

### 📊 Final Dockerfile Structure:
1. Dockerfile (production, Alpine-based, pre-built binaries)
2. Dockerfile.multi-stage (full source builds, Ubuntu-based)
3. Dockerfile.obs (observability builds, Ubuntu-based)
4. .docker/Dockerfile.alpine (lightweight Alpine variant)
5. .docker/Dockerfile.devenv (development + ubuntu variant)
6. .docker/Dockerfile.rockylinux9.3 (RockyLinux variant)

This reduces redundancy while maintaining all necessary build variants
and improving maintainability across the entire container ecosystem.

* refactor: streamline Dockerfile structure and remove unused files

## 🎯 Major Cleanup:

### 🗑️ Removed Unused Files (2 files)
- Delete Dockerfile.obs (not referenced anywhere)
- Delete .docker/Dockerfile.rockylinux9.3 (not referenced anywhere)

### 📁 Reorganized File Layout
- Move Dockerfile.multi-stage → .docker/Dockerfile.multi-stage
- Update docker-compose.yml to use new path
- Keep main Dockerfile in root (production use)
- Consolidate variants in .docker/ directory

### Final Clean Structure:

### 📊 Before vs After:
- **Before**: 7 files (1 missing, 2 unused, scattered layout)
- **After**: 4 files (all used, organized layout)
- **Reduction**: 43% fewer files, 100% utilization

This eliminates confusion and reduces maintenance overhead while
keeping all actually needed functionality intact.

* refactor: implement comprehensive Docker tag strategy with production variant

- Restore production variant as default with explicit naming
- Add support for prerelease channels (alpha, beta, rc)
- Implement rolling development tags (dev, dev-variant)
- Support semantic versioning with variant combinations
- Update documentation with complete tag strategy examples
- Align with GPT-suggested comprehensive tagging approach

Tag examples:
- rustfs/rustfs:1.2.3 (main production)
- rustfs/rustfs:1.2.3-production (explicit production)
- rustfs/rustfs:1.2.3-alpine (Alpine variant)
- rustfs/rustfs:alpha (latest alpha)
- rustfs/rustfs:dev (latest development)
- rustfs/rustfs:dev-13e4a0b (specific commit)

* perf: optimize Docker build speed with comprehensive caching and compilation improvements

- Add dual caching strategy: GitHub Actions + Registry cache
- Implement sccache for Rust compilation caching across builds
- Configure parallel compilation with all available CPU cores
- Add optimized cargo configuration for faster builds
- Enable sparse registry protocol for dependency resolution
- Configure LLD linker for faster linking
- Add BuildKit optimizations with inline cache
- Disable provenance/SBOM generation for faster builds
- Document build performance improvements and timings

Performance improvements:
- Source builds: ~40-50% faster with cache hits
- Pre-built binaries: ~30-40% faster
- Parallel matrix builds reduce total CI time significantly
- Registry cache provides persistent cross-run benefits

* refactor: consolidate Docker variants and eliminate duplication

- Replace root Dockerfile with enhanced Alpine prebuild version
- Remove redundant alpine variant from build matrix
- Root Dockerfile now includes:
  - Non-root user security
  - Health checks
  - Better error handling
  - protoc/flatc tool support
- Update documentation to reflect simplified 4-variant strategy
- Remove duplicate .docker/alpine/Dockerfile.prebuild

Build matrix now:
- production (root Dockerfile - Alpine prebuild)
- alpine-source (Alpine source build)
- ubuntu (Ubuntu prebuild)
- ubuntu-source (Ubuntu source build)

Benefits:
- Eliminates functional duplication
- Improves security with non-root execution
- Maintains same image variants with better quality
- Simplifies maintenance

* fix: restore alpine variant for better user choice

- Restore alpine variant (rustfs/rustfs:1.2.3-alpine)
- Re-add .docker/alpine/Dockerfile.prebuild
- Update build matrix to include 5 variants again:
  - production (default)
  - alpine (explicit Alpine choice)
  - alpine-source (Alpine source build)
  - ubuntu (Ubuntu pre-built)
  - ubuntu-source (Ubuntu source build)
- Update documentation to reflect restored alpine tags
- Fix build performance table to include all variants

User feedback: Alpine variant provides explicit choice even if
similar to production variant. Better UX with clear options.

* fix: remove redundant rustup target add commands in Alpine Dockerfiles

- Remove 'rustup target add x86_64-unknown-linux-musl' from Alpine source build
- Remove redundant target add from Alpine prebuild fallback path
- Remove redundant target add from root Dockerfile fallback path

Reason: rust:alpine base image already has x86_64-unknown-linux-musl
as the default target since Alpine uses musl libc by default.

Thanks to @houseme for spotting this redundancy in code review.

* fix: add missing RUSTFS_VOLUMES environment variable in Dockerfiles

- Add RUSTFS_VOLUMES=/data to all Dockerfile variants
- This fixes the issue where CMD ['/app/rustfs'] was used without providing the required volumes parameter
- The volumes parameter is required by the application and can be provided via command line or RUSTFS_VOLUMES environment variable

* fix: update docker-compose configurations to ensure all environments work correctly

- Added missing access key and secret key environment variables to docker-compose.yaml
- This ensures the distributed test environment has proper authentication credentials
- Complementary fix to the previous Dockerfile updates for consistent configuration

* fix: recreate missing Dockerfile.obs with complete content

- The file was accidentally left empty after initial creation
- Now contains proper Ubuntu-based configuration for observability environment
- Includes all necessary environment variables including RUSTFS_VOLUMES
- Supports docker-compose-obs.yaml configuration

* refactor: organize Docker Compose configurations and eliminate duplication

- Move specialized configurations to .docker/compose/ directory
- Rename docker-compose.yaml → docker-compose.cluster.yaml (distributed testing)
- Rename docker-compose-obs.yaml → docker-compose.observability.yaml (observability testing)
- Keep docker-compose.yml as the main production configuration
- Add comprehensive README explaining different configuration purposes
- Eliminates confusion between similar filenames
- Provides clear guidance on when to use each configuration

* fix: correct relative paths in moved Docker Compose configurations

- Fix binary volume mount paths in docker-compose.cluster.yaml (./target → ../../target)
- Fix Dockerfile.obs context path in docker-compose.observability.yaml (. → ../..)
- Fix observability config file paths (./.docker → ../../.docker)
- Update README.md with correct usage instructions for new locations
- All configurations now correctly reference files relative to their new positions

* refactor: move Dockerfile.obs to .docker/compose/ directory for better organization

- Move Dockerfile.obs from root to .docker/compose/ directory
- Update all dockerfile references in docker-compose.observability.yaml
- Keep related files (Dockerfile.obs + docker-compose.observability.yaml) together
- Clean up root directory by removing specialized-purpose Dockerfile
- Update README.md to document new file organization
- Improves project structure and file discoverability

* refactor: improve Docker build configuration for better clarity

- Move Dockerfile.obs back to project root for simpler build context
- Update docker-compose.observability.yaml to use cleaner dockerfile reference
- Change from '.docker/compose/Dockerfile.obs' to simply 'Dockerfile.obs'
- Maintain context as '../..' for access to project files
- Remove redundant Dockerfile.obs documentation from compose README
- This follows Docker best practices: simple context + Dockerfile at context root

* wip
2025-07-11 22:18:33 +08:00
houseme
b3ec2325ed improve docker compose config file and remove docs dir (#174)
* refactor(config): Unify S3 API and Console ports

This commit streamlines the server configuration by unifying the S3 API and the WebUI (Console) to serve on a single port.

Previously, the console was managed by separate configuration options (`RUSTFS_CONSOLE_ENABLE` and `RUSTFS_CONSOLE_ADDRESS`), requiring a distinct port. This added complexity to deployment and configuration.

With this change:
- The `RUSTFS_CONSOLE_ADDRESS` and `RUSTFS_CONSOLE_FS_ENDPOINT` environment variables are removed.
- The WebUI is now always available and served directly from the main application port defined by `RUSTFS_ADDRESS`.
- This simplifies setup, reduces the number of exposed ports, and makes the application easier to manage and deploy, especially in containerized environments.

Users should update their startup scripts and remove the deprecated `RUSTFS_CONSOLE_*` variables.

* improve docker compose config file and remove docs dir
2025-07-11 16:55:24 +08:00
houseme
49a5643e76 refactor(config): Unify S3 API and Console ports (#173)
This commit streamlines the server configuration by unifying the S3 API and the WebUI (Console) to serve on a single port.

Previously, the console was managed by separate configuration options (`RUSTFS_CONSOLE_ENABLE` and `RUSTFS_CONSOLE_ADDRESS`), requiring a distinct port. This added complexity to deployment and configuration.

With this change:
- The `RUSTFS_CONSOLE_ADDRESS` and `RUSTFS_CONSOLE_FS_ENDPOINT` environment variables are removed.
- The WebUI is now always available and served directly from the main application port defined by `RUSTFS_ADDRESS`.
- This simplifies setup, reduces the number of exposed ports, and makes the application easier to manage and deploy, especially in containerized environments.

Users should update their startup scripts and remove the deprecated `RUSTFS_CONSOLE_*` variables.
2025-07-11 14:20:22 +08:00
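
A minimal docker-compose sketch of the unified-port setup described above (image tag, port value, and host paths are assumptions):

```yaml
services:
  rustfs:
    image: rustfs/rustfs:latest
    ports:
      - "9000:9000"              # S3 API and console now share one port
    environment:
      RUSTFS_ADDRESS: ":9000"    # single listen address; RUSTFS_CONSOLE_* vars removed
      RUSTFS_VOLUMES: /data
    volumes:
      - ./data:/data
```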
loverustfs
657395af8a fix docker quickstart 2025-07-11 10:59:11 +08:00
loverustfs
4de62ed77e fix quickstart 2025-07-11 10:58:22 +08:00
houseme
505f493729 chore: bump workspace dependencies versions (#168)
* upgrade package version

# Conflicts:
#	crates/rio/Cargo.toml

* fix

* upgrade version

* upgrade version

* cargo fmt
2025-07-11 10:35:27 +08:00
weisd
be05b704b0 feat: add Content-Length headers to admin API responses (#169) 2025-07-11 09:40:57 +08:00
安正超
b33c2fa3cf Update build.yml 2025-07-11 09:00:06 +08:00
安正超
98674c60d4 Update README.md 2025-07-11 08:44:50 +08:00
安正超
e39eb86967 fix: remove unused command 2025-07-11 08:03:29 +08:00
weisd
646070ae7a Feat/browser redirect layer (#167)
* feat: add browser redirect layer to route GET requests to console

* refactor: move RedirectLayer to separate layer.rs file

* feat: restrict redirect layer to only handle root path and index.html

* feat: restrict redirect layer to only handle root path /rustfs and index.html
2025-07-11 07:38:42 +08:00
Nugine
2525b66658 refactor: replace lazy_static with LazyLock (#164)
* refactor: replace `lazy_static` with `LazyLock`

* update cursorrules

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
2025-07-10 23:50:46 +08:00
Nugine
58c5a633e2 ci: fix cache (#165) 2025-07-10 23:50:26 +08:00
120 changed files with 4725 additions and 2643 deletions


@@ -1,22 +1,39 @@
 # RustFS Project Cursor Rules
-## ⚠️ CRITICAL DEVELOPMENT RULES ⚠️
+## 🚨🚨🚨 CRITICAL DEVELOPMENT RULES - ZERO TOLERANCE 🚨🚨🚨
-### 🚨 NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH 🚨
+### ⛔️ ABSOLUTE PROHIBITION: NEVER COMMIT DIRECTLY TO MASTER/MAIN BRANCH ⛔️
-- **This is the most important rule - NEVER modify code directly on main or master branch**
-- **ALL CHANGES MUST GO THROUGH PULL REQUESTS - NO EXCEPTIONS**
-- **Always work on feature branches and use pull requests for all changes**
-- **Any direct commits to master/main branch are strictly forbidden**
-- **Pull requests are the ONLY way to merge code to main branch**
-- Before starting any development, always:
-1. `git checkout main` (switch to main branch)
-2. `git pull` (get latest changes)
-3. `git checkout -b feat/your-feature-name` (create and switch to feature branch)
-4. Make your changes on the feature branch
-5. Commit and push to the feature branch
-6. **Create a pull request for review - THIS IS MANDATORY**
-7. **Wait for PR approval and merge through GitHub interface only**
+**🔥 THIS IS THE MOST CRITICAL RULE - VIOLATION WILL RESULT IN IMMEDIATE REVERSAL 🔥**
+- **🚫 ZERO DIRECT COMMITS TO MAIN/MASTER BRANCH - ABSOLUTELY FORBIDDEN**
+- **🚫 ANY DIRECT COMMIT TO MAIN BRANCH MUST BE IMMEDIATELY REVERTED**
+- **🚫 NO EXCEPTIONS FOR HOTFIXES, EMERGENCIES, OR URGENT CHANGES**
+- **🚫 NO EXCEPTIONS FOR SMALL CHANGES, TYPOS, OR DOCUMENTATION UPDATES**
+- **🚫 NO EXCEPTIONS FOR ANYONE - MAINTAINERS, CONTRIBUTORS, OR ADMINS**
+### 📋 MANDATORY WORKFLOW - STRICTLY ENFORCED
+**EVERY SINGLE CHANGE MUST FOLLOW THIS WORKFLOW:**
+1. **Check current branch**: `git branch` (MUST NOT be on main/master)
+2. **Switch to main**: `git checkout main`
+3. **Pull latest**: `git pull origin main`
+4. **Create feature branch**: `git checkout -b feat/your-feature-name`
+5. **Make changes ONLY on feature branch**
+6. **Test thoroughly before committing**
+7. **Commit and push to feature branch**: `git push origin feat/your-feature-name`
+8. **Create Pull Request**: Use `gh pr create` (MANDATORY)
+9. **Wait for PR approval**: NO self-merging allowed
+10. **Merge through GitHub interface**: ONLY after approval
+### 🔒 ENFORCEMENT MECHANISMS
+- **Branch protection rules**: Main branch is protected
+- **Pre-commit hooks**: Will block direct commits to main
+- **CI/CD checks**: All PRs must pass before merging
+- **Code review requirement**: At least one approval needed
+- **Automated reversal**: Direct commits to main will be automatically reverted
 ## Project Overview
@@ -517,7 +534,7 @@ let results = join_all(futures).await;
 ### 3. Caching Strategy
-- Use `lazy_static` or `OnceCell` for global caching
+- Use `LazyLock` for global caching
 - Implement LRU cache to avoid memory leaks
 ## Testing Guidelines


@@ -1,27 +0,0 @@
FROM ubuntu:22.04
ENV LANG C.UTF-8
RUN sed -i s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g /etc/apt/sources.list
RUN apt-get clean && apt-get update && apt-get install wget git curl unzip gcc pkg-config libssl-dev lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev -y
# install protoc
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
&& unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
&& mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
&& mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3
# install flatc
RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
&& unzip Linux.flatc.binary.g++-13.zip \
&& mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip
# install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
COPY .docker/cargo.config.toml /root/.cargo/config.toml
WORKDIR /root/s3-rustfs
CMD [ "bash", "-c", "while true; do sleep 1; done" ]

View File

@@ -1,32 +0,0 @@
FROM rockylinux:9.3 AS builder
ENV LANG C.UTF-8
RUN sed -e 's|^mirrorlist=|#mirrorlist=|g' \
-e 's|^#baseurl=http://dl.rockylinux.org/$contentdir|baseurl=https://mirrors.ustc.edu.cn/rocky|g' \
-i.bak \
/etc/yum.repos.d/rocky-extras.repo \
/etc/yum.repos.d/rocky.repo
RUN dnf makecache
RUN yum install wget git unzip gcc openssl-devel pkgconf-pkg-config -y
# install protoc
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
&& unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
&& mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
&& mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3
# install flatc
RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
&& unzip Linux.flatc.binary.g++-13.zip \
&& mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc \
&& rm -rf Linux.flatc.binary.g++-13.zip
# install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
COPY .docker/cargo.config.toml /root/.cargo/config.toml
WORKDIR /root/s3-rustfs

View File

@@ -1,25 +0,0 @@
FROM ubuntu:22.04
ENV LANG C.UTF-8
RUN sed -i s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g /etc/apt/sources.list
RUN apt-get clean && apt-get update && apt-get install wget git curl unzip gcc pkg-config libssl-dev lld libdbus-1-dev libwayland-dev libwebkit2gtk-4.1-dev libxdo-dev -y
# install protoc
RUN wget https://github.com/protocolbuffers/protobuf/releases/download/v31.1/protoc-31.1-linux-x86_64.zip \
&& unzip protoc-31.1-linux-x86_64.zip -d protoc3 \
&& mv protoc3/bin/* /usr/local/bin/ && chmod +x /usr/local/bin/protoc \
&& mv protoc3/include/* /usr/local/include/ && rm -rf protoc-31.1-linux-x86_64.zip protoc3
# install flatc
RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.flatc.binary.g++-13.zip \
&& unzip Linux.flatc.binary.g++-13.zip \
&& mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip
# install rust
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y
COPY .docker/cargo.config.toml /root/.cargo/config.toml
WORKDIR /root/s3-rustfs

.docker/README.md (new file, 261 lines)
View File

@@ -0,0 +1,261 @@
# RustFS Docker Images
This directory contains Docker configuration files and supporting infrastructure for building and running RustFS container images.
## 📁 Directory Structure
```
rustfs/
├── Dockerfile # Production image (Alpine + pre-built binaries)
├── Dockerfile.source # Development image (Debian + source build)
├── docker-buildx.sh # Multi-architecture build script
├── Makefile # Build automation with simplified commands
└── .docker/ # Supporting infrastructure
├── observability/ # Monitoring and observability configs
├── compose/ # Docker Compose configurations
├── mqtt/ # MQTT broker configs
└── openobserve-otel/ # OpenObserve + OpenTelemetry configs
```
## 🎯 Image Variants
### Core Images
| Image | Base OS | Build Method | Size | Use Case |
|-------|---------|--------------|------|----------|
| `production` (default) | Alpine 3.18 | GitHub Releases | Smallest | Production deployment |
| `source` | Debian Bookworm | Source build | Medium | Custom builds with cross-compilation |
| `dev` | Debian Bookworm | Development tools | Large | Interactive development |
## 🚀 Usage Examples
### Quick Start (Production)
```bash
# Default production image (Alpine + GitHub Releases)
docker run -p 9000:9000 rustfs/rustfs:latest
# Specific version
docker run -p 9000:9000 rustfs/rustfs:1.2.3
```
### Complete Tag Strategy Examples
```bash
# Stable Releases
docker run rustfs/rustfs:1.2.3 # Main version (production)
docker run rustfs/rustfs:1.2.3-production # Explicit production variant
docker run rustfs/rustfs:1.2.3-source # Source build variant
docker run rustfs/rustfs:latest # Latest stable
# Prerelease Versions
docker run rustfs/rustfs:1.3.0-alpha.2 # Specific alpha version
docker run rustfs/rustfs:alpha # Latest alpha
docker run rustfs/rustfs:beta # Latest beta
docker run rustfs/rustfs:rc # Latest release candidate
# Development Versions
docker run rustfs/rustfs:dev # Latest main branch development
docker run rustfs/rustfs:dev-13e4a0b # Specific commit
docker run rustfs/rustfs:dev-latest # Latest development
docker run rustfs/rustfs:main-latest # Main branch latest
```
### Development Environment
```bash
# Quick setup using Makefile (recommended)
make docker-dev-local # Build development image locally
make dev-env-start # Start development container
# Manual Docker commands
docker run -it -v $(pwd):/workspace -p 9000:9000 rustfs/rustfs:latest-dev
# Build from source locally
docker build -f Dockerfile.source -t rustfs:custom .
# Development with hot reload
docker-compose up rustfs-dev
```
## 🏗️ Build Arguments and Scripts
### Using Makefile Commands (Recommended)
The easiest way to build images is with the simplified Makefile commands:
```bash
# Development images (build from source)
make docker-dev-local # Build for local use (single arch)
make docker-dev # Build multi-arch (for CI/CD)
make docker-dev-push REGISTRY=xxx # Build and push to registry
# Production images (using pre-built binaries)
make docker-buildx # Build multi-arch production images
make docker-buildx-push # Build and push production images
make docker-buildx-version VERSION=v1.0.0 # Build specific version
# Development environment
make dev-env-start # Start development container
make dev-env-stop # Stop development container
make dev-env-restart # Restart development container
# Help
make help-docker # Show all Docker-related commands
```
### Using docker-buildx.sh (Advanced)
For direct script usage and advanced scenarios:
```bash
# Build latest version for all architectures
./docker-buildx.sh
# Build and push to registry
./docker-buildx.sh --push
# Build specific version
./docker-buildx.sh --release v1.2.3
# Build and push specific version
./docker-buildx.sh --release v1.2.3 --push
```
### Manual Docker Builds
All images support dynamic version selection:
```bash
# Build production image with latest release
docker build --build-arg RELEASE="latest" -t rustfs:latest .
# Build from source with specific target
docker build -f Dockerfile.source \
--build-arg TARGETPLATFORM="linux/amd64" \
-t rustfs:source .
# Development build
docker build -f Dockerfile.source -t rustfs:dev .
```
## 🔧 Binary Download Sources
### Unified GitHub Releases
The production image downloads from GitHub Releases for reliability and transparency:
- **production** → GitHub Releases API with automatic latest detection
- **Checksum verification** → SHA256SUMS validation when available
- **Multi-architecture** → Supports amd64 and arm64
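As a rough sketch of what that download step amounts to (the repository slug and asset naming are assumptions inferred from the package names used elsewhere in this document, not the literal Dockerfile contents), resolving the latest tag and validating the checksum could look like:
```bash
#!/usr/bin/env bash
# Hypothetical download-and-verify sketch; not the actual Dockerfile step.
set -euo pipefail

REPO="rustfs/rustfs"   # assumed repository slug
ARCH="x86_64"          # or aarch64, depending on TARGETARCH

# Resolve the latest tag via the GitHub Releases API
TAG=$(curl -fsSL "https://api.github.com/repos/${REPO}/releases/latest" \
  | grep -o '"tag_name": *"[^"]*"' | head -n1 | cut -d'"' -f4)

# Asset name assumed to follow rustfs-<platform>-<arch>-v<version>.zip
PKG="rustfs-linux-${ARCH}-v${TAG#v}.zip"
BASE="https://github.com/${REPO}/releases/download/${TAG}"
curl -fsSLO "${BASE}/${PKG}"

# Verify against SHA256SUMS when the release provides one
if curl -fsSLO "${BASE}/SHA256SUMS"; then
  grep " ${PKG}\$" SHA256SUMS | sha256sum -c -
fi
```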
### Source Build
The source variant compiles from source code with advanced features:
- 🔧 **Cross-compilation** → Supports multiple target platforms via `TARGETPLATFORM`
- **Build caching** → sccache for faster compilation
- 🎯 **Optimized builds** → Release optimizations with LTO and symbol stripping
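Note that `TARGETPLATFORM` only needs to be passed by hand for a single-platform `docker build` (as in the manual build example above); `docker buildx` injects it automatically for every entry in `--platform`. A minimal multi-platform source build:
```bash
# buildx sets TARGETPLATFORM/TARGETARCH for each platform automatically
docker buildx build -f Dockerfile.source \
  --platform linux/amd64,linux/arm64 \
  -t rustfs:source .
```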
## 📋 Architecture Support
All variants support multi-architecture builds:
- **linux/amd64** (x86_64)
- **linux/arm64** (aarch64)
Architecture is automatically detected during build using Docker's `TARGETARCH` build argument.
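For illustration, a `RUN` step might translate `TARGETARCH` into the architecture names used in the release package filenames; this is a hypothetical sketch, not the literal Dockerfile contents:
```bash
# TARGETARCH is provided by BuildKit: amd64 or arm64 for the supported platforms
case "${TARGETARCH}" in
  amd64) RUSTFS_ARCH="x86_64"  ;;
  arm64) RUSTFS_ARCH="aarch64" ;;
  *) echo "unsupported architecture: ${TARGETARCH}" >&2; exit 1 ;;
esac
echo "resolved package architecture: ${RUSTFS_ARCH}"
```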
## 🔐 Security Features
- **Checksum Verification**: Production image verifies SHA256SUMS when available
- **Non-root User**: All images run as user `rustfs` (UID 1000)
- **Minimal Runtime**: Production image only includes necessary dependencies
- **Secure Defaults**: No hardcoded credentials or keys
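A quick way to spot-check the non-root default (assuming the image lets you override the command):
```bash
docker run --rm rustfs/rustfs:latest id
# expected: uid=1000(rustfs) gid=1000(rustfs) groups=1000(rustfs)
```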
## 🛠️ Development Workflow
### Quick Start with Makefile (Recommended)
```bash
# 1. Start development environment
make dev-env-start
# 2. Your development container is now running with:
# - Port 9000 exposed for RustFS
# - Port 9010 exposed for admin console
# - Current directory mounted as /workspace
# 3. Stop when done
make dev-env-stop
```
### Manual Development Setup
```bash
# Build development image from source
make docker-dev-local
# Or use traditional Docker commands
docker build -f Dockerfile.source -t rustfs:dev .
# Run with development tools
docker run -it -v $(pwd):/workspace -p 9000:9000 rustfs:dev bash
# Or use docker-compose for complex setups
docker-compose up rustfs-dev
```
### Common Development Tasks
```bash
# Build and test locally
make build # Build binary natively
make docker-dev-local # Build development Docker image
make test # Run tests
make fmt # Format code
make clippy # Run linter
# Get help
make help # General help
make help-docker # Docker-specific help
make help-build # Build-specific help
```
## 🚀 CI/CD Integration
The project uses GitHub Actions for automated multi-architecture Docker builds:
### Automated Builds
- **Tags**: Automatic builds triggered on version tags (e.g., `v1.2.3`)
- **Main Branch**: Development builds with `dev-latest` and `main-latest` tags
- **Pull Requests**: Test builds without registry push
### Build Variants
Each build creates three image variants:
- `rustfs/rustfs:v1.2.3` (production - Alpine-based)
- `rustfs/rustfs:v1.2.3-source` (source build - Debian-based)
- `rustfs/rustfs:v1.2.3-dev` (development - Debian-based with tools)
### Manual Builds
Trigger custom builds via GitHub Actions:
```bash
# Use workflow_dispatch to build specific versions
# Available options: latest, main-latest, dev-latest, v1.2.3, dev-abc123
```
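For example, with the GitHub CLI (workflow name and input names taken from the Docker workflow shown later in this diff; treat the exact invocation as a sketch):
```bash
# Fire the Docker Images workflow manually for a specific release version
gh workflow run "Docker Images" \
  -f version=v1.2.3 \
  -f push_images=true
```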
## 📦 Supporting Infrastructure
The `.docker/` directory contains supporting configuration files:
- **observability/** - Prometheus, Grafana, OpenTelemetry configs
- **compose/** - Multi-service Docker Compose setups
- **mqtt/** - MQTT broker configurations
- **openobserve-otel/** - Log aggregation and tracing setup
See individual README files in each subdirectory for specific usage instructions.

View File

@@ -1,19 +0,0 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
[source.crates-io]
registry = "https://github.com/rust-lang/crates.io-index"
[net]
git-fetch-with-cli = true

.docker/compose/README.md (new file, 80 lines)
View File

@@ -0,0 +1,80 @@
# Docker Compose Configurations
This directory contains specialized Docker Compose configurations for different use cases.
## 📁 Configuration Files
The compose configurations live here together with their associated Dockerfiles, keeping related files organized in one place.
### Main Configuration (Root Directory)
- **`../../docker-compose.yml`** - **Default Production Setup**
- Complete production-ready configuration
- Includes RustFS server + full observability stack
- Supports multiple profiles: `dev`, `observability`, `cache`, `proxy`
- Recommended for most users
### Specialized Configurations
- **`docker-compose.cluster.yaml`** - **Distributed Testing**
- 4-node cluster setup for testing distributed storage
- Uses locally compiled binaries
- Simulates a multi-node environment
- Ideal for development and cluster testing
- **`docker-compose.observability.yaml`** - **Observability Focus**
- Specialized setup for testing observability features
- Includes OpenTelemetry, Jaeger, Prometheus, Loki, Grafana
- Uses `../../Dockerfile.source` for builds
- Perfect for observability development
## 🚀 Usage Examples
### Production Setup
```bash
# Start main service
docker-compose up -d
# Start with development profile
docker-compose --profile dev up -d
# Start with full observability
docker-compose --profile observability up -d
```
### Cluster Testing
```bash
# Build and start 4-node cluster (run from project root)
cd .docker/compose
docker-compose -f docker-compose.cluster.yaml up -d
# Or run directly from project root
docker-compose -f .docker/compose/docker-compose.cluster.yaml up -d
```
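The cluster compose mounts a locally compiled musl binary into each node, so build it first; a minimal sketch (target triple taken from the compose volume mounts):
```bash
# Build the static binary that docker-compose.cluster.yaml bind-mounts
rustup target add x86_64-unknown-linux-musl
cargo build --release --target x86_64-unknown-linux-musl -p rustfs
```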
### Observability Testing
```bash
# Start observability-focused environment (run from project root)
cd .docker/compose
docker-compose -f docker-compose.observability.yaml up -d
# Or run directly from project root
docker-compose -f .docker/compose/docker-compose.observability.yaml up -d
```
## 🔧 Configuration Overview
| Configuration | Nodes | Storage | Observability | Use Case |
|---------------|-------|---------|---------------|----------|
| **Main** | 1 | Volume mounts | Full stack | Production |
| **Cluster** | 4 | HTTP endpoints | Basic | Testing |
| **Observability** | 4 | Local data | Advanced | Development |
## 📝 Notes
- Always ensure you have built the required binaries before starting cluster tests
- The main configuration is sufficient for most use cases
- Specialized configurations are for specific testing scenarios

View File

@@ -14,70 +14,69 @@
services:
node0:
image: rustfs:v1 # Replace with your image name and tag
image: rustfs/rustfs:latest # Replace with your image name and tag
container_name: node0
hostname: node0
environment:
- RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
- RUSTFS_ACCESS_KEY=rustfsadmin
- RUSTFS_SECRET_KEY=rustfsadmin
platform: linux/amd64
ports:
- "9000:9000" # 映射宿主机的 9001 端口到容器的 9000 端口
- "8000:9001" # 映射宿主机的 9001 端口到容器的 9000 端口
- "9000:9000" # Map port 9001 of the host to port 9000 of the container
volumes:
- ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
# - ./data/node0:/data # Mount ./data/node0 to /data inside the container
- ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
command: "/app/rustfs"
node1:
image: rustfs:v1
image: rustfs/rustfs:latest
container_name: node1
hostname: node1
environment:
- RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
- RUSTFS_ACCESS_KEY=rustfsadmin
- RUSTFS_SECRET_KEY=rustfsadmin
platform: linux/amd64
ports:
- "9001:9000" # 映射宿主机的 9002 端口到容器的 9000 端口
- "9001:9000" # Map port 9002 of the host to port 9000 of the container
volumes:
- ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
# - ./data/node1:/data
- ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
command: "/app/rustfs"
node2:
image: rustfs:v1
image: rustfs/rustfs:latest
container_name: node2
hostname: node2
environment:
- RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
- RUSTFS_ACCESS_KEY=rustfsadmin
- RUSTFS_SECRET_KEY=rustfsadmin
platform: linux/amd64
ports:
- "9002:9000" # 映射宿主机的 9003 端口到容器的 9000 端口
- "9002:9000" # Map port 9003 of the host to port 9000 of the container
volumes:
- ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
# - ./data/node2:/data
- ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
command: "/app/rustfs"
node3:
image: rustfs:v1
image: rustfs/rustfs:latest
container_name: node3
hostname: node3
environment:
- RUSTFS_VOLUMES=http://node{0...3}:9000/data/rustfs{0...3}
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9002
- RUSTFS_ACCESS_KEY=rustfsadmin
- RUSTFS_SECRET_KEY=rustfsadmin
platform: linux/amd64
ports:
- "9003:9000" # 映射宿主机的 9004 端口到容器的 9000 端口
- "9003:9000" # Map port 9004 of the host to port 9000 of the container
volumes:
- ./target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
# - ./data/node3:/data
- ../../target/x86_64-unknown-linux-musl/release/rustfs:/app/rustfs
command: "/app/rustfs"

View File

@@ -14,11 +14,11 @@
services:
otel-collector:
image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.127.0
image: otel/opentelemetry-collector-contrib:0.129.1
environment:
- TZ=Asia/Shanghai
volumes:
- ./.docker/observability/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
- ../../.docker/observability/otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
ports:
- 1888:1888
- 8888:8888
@@ -30,7 +30,7 @@ services:
networks:
- rustfs-network
jaeger:
image: jaegertracing/jaeger:2.6.0
image: jaegertracing/jaeger:2.8.0
environment:
- TZ=Asia/Shanghai
ports:
@@ -40,11 +40,11 @@ services:
networks:
- rustfs-network
prometheus:
image: prom/prometheus:v3.4.1
image: prom/prometheus:v3.4.2
environment:
- TZ=Asia/Shanghai
volumes:
- ./.docker/observability/prometheus.yml:/etc/prometheus/prometheus.yml
- ../../.docker/observability/prometheus.yml:/etc/prometheus/prometheus.yml
ports:
- "9090:9090"
networks:
@@ -54,16 +54,16 @@ services:
environment:
- TZ=Asia/Shanghai
volumes:
- ./.docker/observability/loki-config.yaml:/etc/loki/local-config.yaml
- ../../.docker/observability/loki-config.yaml:/etc/loki/local-config.yaml
ports:
- "3100:3100"
command: -config.file=/etc/loki/local-config.yaml
networks:
- rustfs-network
grafana:
image: grafana/grafana:12.0.1
image: grafana/grafana:12.0.2
ports:
- "3000:3000" # Web UI
- "3000:3000" # Web UI
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
- TZ=Asia/Shanghai
@@ -72,85 +72,69 @@ services:
node1:
build:
context: .
dockerfile: Dockerfile.obs
context: ../..
dockerfile: Dockerfile.source
container_name: node1
environment:
- RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
- RUSTFS_ADDRESS=:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=:9002
- RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
- RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
- RUSTFS_OBS_LOGGER_LEVEL=debug
platform: linux/amd64
ports:
- "9001:9000" # 映射宿主机的 9001 端口到容器的 9000 端口
- "9101:9002"
volumes:
# - ./data:/root/data # Mount ./data to /root/data inside the container
- ./.docker/observability/config:/etc/observability/config
- "9001:9000" # Map port 9001 of the host to port 9000 of the container
networks:
- rustfs-network
node2:
build:
context: .
dockerfile: Dockerfile.obs
context: ../..
dockerfile: Dockerfile.source
container_name: node2
environment:
- RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
- RUSTFS_ADDRESS=:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=:9002
- RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
- RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
- RUSTFS_OBS_LOGGER_LEVEL=debug
platform: linux/amd64
ports:
- "9002:9000" # 映射宿主机的 9002 端口到容器的 9000 端口
- "9102:9002"
volumes:
# - ./data:/root/data
- ./.docker/observability/config:/etc/observability/config
- "9002:9000" # Map port 9002 of the host to port 9000 of the container
networks:
- rustfs-network
node3:
build:
context: .
dockerfile: Dockerfile.obs
context: ../..
dockerfile: Dockerfile.source
container_name: node3
environment:
- RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
- RUSTFS_ADDRESS=:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=:9002
- RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
- RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
- RUSTFS_OBS_LOGGER_LEVEL=debug
platform: linux/amd64
ports:
- "9003:9000" # 映射宿主机的 9003 端口到容器的 9000 端口
- "9103:9002"
volumes:
# - ./data:/root/data
- ./.docker/observability/config:/etc/observability/config
- "9003:9000" # Map port 9003 of the host to port 9000 of the container
networks:
- rustfs-network
node4:
build:
context: .
dockerfile: Dockerfile.obs
context: ../..
dockerfile: Dockerfile.source
container_name: node4
environment:
- RUSTFS_VOLUMES=http://node{1...4}:9000/root/data/target/volume/test{1...4}
- RUSTFS_ADDRESS=:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=:9002
- RUSTFS_OBS_CONFIG=/etc/observability/config/obs-multi.toml
- RUSTFS_OBS_ENDPOINT=http://otel-collector:4317
- RUSTFS_OBS_LOGGER_LEVEL=debug
platform: linux/amd64
ports:
- "9004:9000" # 映射宿主机的 9004 端口到容器的 9000 端口
- "9104:9002"
volumes:
# - ./data:/root/data
- ./.docker/observability/config:/etc/observability/config
- "9004:9000" # Map port 9004 of the host to port 9000 of the container
networks:
- rustfs-network

View File

@@ -13,24 +13,40 @@
# limitations under the License.
services:
tempo:
image: grafana/tempo:latest
#user: root # The container must be started with root to execute chown in the script
#entrypoint: [ "/etc/tempo/entrypoint.sh" ] # Specify a custom entry point
command: [ "-config.file=/etc/tempo.yaml" ] # This is passed as a parameter to the entry point script
volumes:
- ./tempo-entrypoint.sh:/etc/tempo/entrypoint.sh # Mount entry point script
- ./tempo.yaml:/etc/tempo.yaml
- ./tempo-data:/var/tempo
ports:
- "3200:3200" # tempo
- "24317:4317" # otlp grpc
networks:
- otel-network
otel-collector:
image: ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-contrib:0.127.0
image: otel/opentelemetry-collector-contrib:0.129.1
environment:
- TZ=Asia/Shanghai
volumes:
- ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml
ports:
- 1888:1888
- 8888:8888
- 8889:8889
- 13133:13133
- 4317:4317
- 4318:4318
- 55679:55679
- "1888:1888"
- "8888:8888"
- "8889:8889"
- "13133:13133"
- "4317:4317"
- "4318:4318"
- "55679:55679"
networks:
- otel-network
jaeger:
image: jaegertracing/jaeger:2.7.0
image: jaegertracing/jaeger:2.8.0
environment:
- TZ=Asia/Shanghai
ports:
@@ -40,7 +56,7 @@ services:
networks:
- otel-network
prometheus:
image: prom/prometheus:v3.4.1
image: prom/prometheus:v3.4.2
environment:
- TZ=Asia/Shanghai
volumes:
@@ -64,6 +80,8 @@ services:
image: grafana/grafana:12.0.2
ports:
- "3000:3000" # Web UI
volumes:
- ./grafana-datasources.yaml:/etc/grafana/provisioning/datasources/datasources.yaml
environment:
- GF_SECURITY_ADMIN_PASSWORD=admin
- TZ=Asia/Shanghai

View File

@@ -0,0 +1,32 @@
apiVersion: 1
datasources:
- name: Prometheus
type: prometheus
uid: prometheus
access: proxy
orgId: 1
url: http://prometheus:9090
basicAuth: false
isDefault: false
version: 1
editable: false
jsonData:
httpMethod: GET
- name: Tempo
type: tempo
access: proxy
orgId: 1
url: http://tempo:3200
basicAuth: false
isDefault: true
version: 1
editable: false
uid: tempo
jsonData:
httpMethod: GET
serviceMap:
datasourceUid: prometheus
streamingEnabled:
search: true

View File

@@ -33,6 +33,10 @@ exporters:
endpoint: "jaeger:4317" # Jaeger 的 OTLP gRPC 端点
tls:
insecure: true # TLS disabled in development; configure certificates in production
otlp/tempo: # OTLP exporter for trace data
endpoint: "tempo:4317" # Tempo's OTLP gRPC endpoint
tls:
insecure: true # TLS disabled in development; configure certificates in production
prometheus: # Prometheus exporter for metrics data
endpoint: "0.0.0.0:8889" # Prometheus scrape endpoint
namespace: "rustfs" # Metric name prefix
@@ -53,7 +57,7 @@ service:
traces:
receivers: [ otlp ]
processors: [ memory_limiter,batch ]
exporters: [ otlp/traces ]
exporters: [ otlp/traces,otlp/tempo ]
metrics:
receivers: [ otlp ]
processors: [ batch ]
@@ -66,6 +70,12 @@ service:
logs:
level: "info" # Collector 日志级别
metrics:
address: "0.0.0.0:8888" # Collector 自身指标暴露
level: "detailed" # 可以是 basic, normal, detailed
readers:
- periodic:
exporter:
otlp:
protocol: http/protobuf
endpoint: http://otel-collector:4318

View File

@@ -18,8 +18,11 @@ global:
scrape_configs:
- job_name: 'otel-collector'
static_configs:
- targets: ['otel-collector:8888'] # Scrape metrics from the Collector
- targets: [ 'otel-collector:8888' ] # Scrape metrics from the Collector
- job_name: 'otel-metrics'
static_configs:
- targets: ['otel-collector:8889'] # Application metrics
- targets: [ 'otel-collector:8889' ] # Application metrics
- job_name: 'tempo'
static_configs:
- targets: [ 'tempo:3200' ]

View File

@@ -0,0 +1 @@
*

View File

@@ -0,0 +1,8 @@
#!/bin/sh
# Run as root to fix directory permissions
chown -R 10001:10001 /var/tempo
# Use su-exec (a lightweight sudo/gosu alternative, commonly used in Alpine images)
# Switch to user 10001 and execute the original command (CMD) passed to the script
# "$@" represents all parameters passed to this script, i.e. the command from docker-compose
exec su-exec 10001:10001 /tempo "$@"

View File

@@ -0,0 +1,55 @@
stream_over_http_enabled: true
server:
http_listen_port: 3200
log_level: info
query_frontend:
search:
duration_slo: 5s
throughput_bytes_slo: 1.073741824e+09
metadata_slo:
duration_slo: 5s
throughput_bytes_slo: 1.073741824e+09
trace_by_id:
duration_slo: 5s
distributor:
receivers:
otlp:
protocols:
grpc:
endpoint: "tempo:4317"
ingester:
max_block_duration: 5m # cut the headblock when this much time passes. this is being set for demo purposes and should probably be left alone normally
compactor:
compaction:
block_retention: 1h # overall Tempo trace retention. set for demo purposes
metrics_generator:
registry:
external_labels:
source: tempo
cluster: docker-compose
storage:
path: /var/tempo/generator/wal
remote_write:
- url: http://prometheus:9090/api/v1/write
send_exemplars: true
traces_storage:
path: /var/tempo/generator/traces
storage:
trace:
backend: local # backend configuration to use
wal:
path: /var/tempo/wal # where to store the wal locally
local:
path: /var/tempo/blocks
overrides:
defaults:
metrics_generator:
processors: [ service-graphs, span-metrics, local-blocks ] # enables metrics generator
generate_native_histograms: both

View File

@@ -60,15 +60,7 @@ runs:
pkg-config \
libssl-dev
- name: Cache protoc binary
id: cache-protoc
uses: actions/cache@v4
with:
path: ~/.local/bin/protoc
key: protoc-31.1-${{ runner.os }}-${{ runner.arch }}
- name: Install protoc
if: steps.cache-protoc.outputs.cache-hit != 'true'
uses: arduino/setup-protoc@v3
with:
version: "31.1"
@@ -104,7 +96,3 @@ runs:
cache-on-failure: true
shared-key: ${{ inputs.cache-shared-key }}
save-if: ${{ inputs.cache-save-if }}
# Cache workspace dependencies
workspaces: |
. -> target
cli/rustfs-gui -> cli/rustfs-gui/target

View File

@@ -12,11 +12,23 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Build and Release Workflow
#
# This workflow builds RustFS binaries and automatically triggers Docker image builds.
#
# Flow:
# 1. Build binaries for multiple platforms
# 2. Upload binaries to OSS storage
# 3. Trigger docker.yml to build and push images using the uploaded binaries
#
# Manual Parameters:
# - build_docker: Build and push Docker images (default: true)
name: Build and Release
on:
push:
tags: ["*"]
tags: ["*.*.*"]
branches: [main]
paths-ignore:
- "**.md"
@@ -52,10 +64,10 @@ on:
- cron: "0 0 * * 0" # Weekly on Sunday at midnight UTC
workflow_dispatch:
inputs:
force_build:
description: "Force build even without changes"
build_docker:
description: "Build and push Docker images after binary build"
required: false
default: false
default: true
type: boolean
env:
@@ -65,39 +77,78 @@ env:
CARGO_INCREMENTAL: 0
jobs:
# Second layer: Business logic level checks (handling build strategy)
# Build strategy check - determine build type based on trigger
build-check:
name: Build Strategy Check
runs-on: ubuntu-latest
outputs:
should_build: ${{ steps.check.outputs.should_build }}
build_type: ${{ steps.check.outputs.build_type }}
version: ${{ steps.check.outputs.version }}
short_sha: ${{ steps.check.outputs.short_sha }}
is_prerelease: ${{ steps.check.outputs.is_prerelease }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Determine build strategy
id: check
run: |
should_build=false
build_type="none"
version=""
short_sha=""
is_prerelease=false
# Business logic: when we need to build
if [[ "${{ github.event_name }}" == "schedule" ]] || \
[[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ github.event.inputs.force_build }}" == "true" ]] || \
[[ "${{ contains(github.event.head_commit.message, '--build') }}" == "true" ]]; then
# Get short SHA for all builds
short_sha=$(git rev-parse --short HEAD)
# Determine build type based on trigger
if [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
# Tag push - release or prerelease
should_build=true
tag_name="${GITHUB_REF#refs/tags/}"
version="${tag_name}"
# Check if this is a prerelease
if [[ "$tag_name" == *"alpha"* ]] || [[ "$tag_name" == *"beta"* ]] || [[ "$tag_name" == *"rc"* ]]; then
build_type="prerelease"
is_prerelease=true
echo "🚀 Prerelease build detected: $tag_name"
else
build_type="release"
echo "📦 Release build detected: $tag_name"
fi
elif [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
# Main branch push - development build
should_build=true
build_type="development"
fi
# Always build for tag pushes (version releases)
if [[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]]; then
version="dev-${short_sha}"
echo "🛠️ Development build detected"
elif [[ "${{ github.event_name }}" == "schedule" ]] || \
[[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ contains(github.event.head_commit.message, '--build') }}" == "true" ]]; then
# Scheduled or manual build
should_build=true
build_type="release"
echo "🏷️ Tag detected: forcing release build"
build_type="development"
version="dev-${short_sha}"
echo "⚡ Manual/scheduled build detected"
fi
echo "should_build=$should_build" >> $GITHUB_OUTPUT
echo "build_type=$build_type" >> $GITHUB_OUTPUT
echo "Build needed: $should_build (type: $build_type)"
echo "version=$version" >> $GITHUB_OUTPUT
echo "short_sha=$short_sha" >> $GITHUB_OUTPUT
echo "is_prerelease=$is_prerelease" >> $GITHUB_OUTPUT
echo "📊 Build Summary:"
echo " - Should build: $should_build"
echo " - Build type: $build_type"
echo " - Version: $version"
echo " - Short SHA: $short_sha"
echo " - Is prerelease: $is_prerelease"
# Build RustFS binaries
build-rustfs:
@@ -158,7 +209,6 @@ jobs:
- name: Download static console assets
run: |
mkdir -p ./rustfs/static
rm -rf ./rustfs/static/*
if [[ "${{ matrix.platform }}" == "windows" ]]; then
curl.exe -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" -o console.zip --retry 3 --retry-delay 5 --max-time 300
if [[ $? -eq 0 ]]; then
@@ -170,7 +220,6 @@ jobs:
fi
else
chmod +w ./rustfs/static/LICENSE || true
rm -f ./rustfs/static/LICENSE
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
-o console.zip --retry 3 --retry-delay 5 --max-time 300
if [[ $? -eq 0 ]]; then
@@ -193,7 +242,7 @@ jobs:
cargo install cross --git https://github.com/cross-rs/cross
cross build --release --target ${{ matrix.target }} -p rustfs --bins
else
# Use zigbuild for Linux ARM64
# Use zigbuild for other cross-compilation
cargo zigbuild --release --target ${{ matrix.target }} -p rustfs --bins
fi
else
@@ -204,7 +253,38 @@ jobs:
id: package
shell: bash
run: |
PACKAGE_NAME="rustfs-${{ matrix.target }}"
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
VERSION="${{ needs.build-check.outputs.version }}"
SHORT_SHA="${{ needs.build-check.outputs.short_sha }}"
# Extract platform and arch from target
TARGET="${{ matrix.target }}"
PLATFORM="${{ matrix.platform }}"
# Map target to architecture
case "$TARGET" in
*x86_64*)
ARCH="x86_64"
;;
*aarch64*|*arm64*)
ARCH="aarch64"
;;
*armv7*)
ARCH="armv7"
;;
*)
ARCH="unknown"
;;
esac
# Generate package name based on build type
if [[ "$BUILD_TYPE" == "development" ]]; then
# Development build: rustfs-${platform}-${arch}-dev-${short_sha}.zip
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-dev-${SHORT_SHA}"
else
# Release/Prerelease build: rustfs-${platform}-${arch}-v${version}.zip
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-v${VERSION}"
fi
# Create zip packages for all platforms
# Ensure zip is available
@@ -217,9 +297,15 @@ jobs:
cd target/${{ matrix.target }}/release
zip "../../../${PACKAGE_NAME}.zip" rustfs
cd ../../..
echo "package_name=${PACKAGE_NAME}" >> $GITHUB_OUTPUT
echo "package_file=${PACKAGE_NAME}.zip" >> $GITHUB_OUTPUT
echo "Package created: ${PACKAGE_NAME}.zip"
echo "build_type=${BUILD_TYPE}" >> $GITHUB_OUTPUT
echo "version=${VERSION}" >> $GITHUB_OUTPUT
echo "📦 Package created: ${PACKAGE_NAME}.zip"
echo "🔧 Build type: ${BUILD_TYPE}"
echo "📊 Version: ${VERSION}"
- name: Upload artifacts
uses: actions/upload-artifact@v4
@@ -229,13 +315,15 @@ jobs:
retention-days: ${{ startsWith(github.ref, 'refs/tags/') && 30 || 7 }}
- name: Upload to Aliyun OSS
if: needs.build-check.outputs.build_type == 'release' && env.OSS_ACCESS_KEY_ID != ''
if: env.OSS_ACCESS_KEY_ID != '' && (needs.build-check.outputs.build_type == 'release' || needs.build-check.outputs.build_type == 'prerelease' || needs.build-check.outputs.build_type == 'development')
env:
OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
OSS_REGION: cn-beijing
OSS_ENDPOINT: https://oss-cn-beijing.aliyuncs.com
run: |
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
# Install ossutil (platform-specific)
OSSUTIL_VERSION="2.1.1"
case "${{ matrix.platform }}" in
@@ -273,148 +361,392 @@ jobs:
;;
esac
# Upload the package file directly to OSS
echo "Uploading ${{ steps.package.outputs.package_file }} to OSS..."
$OSSUTIL_BIN cp "${{ steps.package.outputs.package_file }}" oss://rustfs-artifacts/artifacts/rustfs/ --force
# Create latest.json (only for the first Linux build to avoid duplication)
if [[ "${{ matrix.target }}" == "x86_64-unknown-linux-musl" ]]; then
VERSION="${GITHUB_REF#refs/tags/v}"
echo "{\"version\":\"${VERSION}\",\"release_date\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}" > latest.json
$OSSUTIL_BIN cp latest.json oss://rustfs-version/latest.json --force
# Determine upload path based on build type
if [[ "$BUILD_TYPE" == "development" ]]; then
OSS_PATH="oss://rustfs-artifacts/artifacts/rustfs/dev/"
echo "📤 Uploading development build to OSS dev directory"
else
OSS_PATH="oss://rustfs-artifacts/artifacts/rustfs/release/"
echo "📤 Uploading release build to OSS release directory"
fi
# Release management
release:
name: GitHub Release
# Upload the package file to OSS
echo "Uploading ${{ steps.package.outputs.package_file }} to $OSS_PATH..."
$OSSUTIL_BIN cp "${{ steps.package.outputs.package_file }}" "$OSS_PATH" --force
# For release and prerelease builds, also create a latest version
if [[ "$BUILD_TYPE" == "release" ]] || [[ "$BUILD_TYPE" == "prerelease" ]]; then
# Extract platform and arch from package name
PACKAGE_NAME="${{ steps.package.outputs.package_name }}"
# Create latest version filename
# Convert from rustfs-linux-x86_64-v1.0.0 to rustfs-linux-x86_64-latest
LATEST_FILE="${PACKAGE_NAME%-v*}-latest.zip"
# Copy the original file to latest version
cp "${{ steps.package.outputs.package_file }}" "$LATEST_FILE"
# Upload the latest version
echo "Uploading latest version: $LATEST_FILE to $OSS_PATH..."
$OSSUTIL_BIN cp "$LATEST_FILE" "$OSS_PATH" --force
echo "✅ Latest version uploaded: $LATEST_FILE"
fi
# For development builds, create dev-latest version
if [[ "$BUILD_TYPE" == "development" ]]; then
# Extract platform and arch from package name
PACKAGE_NAME="${{ steps.package.outputs.package_name }}"
# Create dev-latest version filename
# Convert from rustfs-linux-x86_64-dev-abc123 to rustfs-linux-x86_64-dev-latest
DEV_LATEST_FILE="${PACKAGE_NAME%-*}-latest.zip"
# Copy the original file to dev-latest version
cp "${{ steps.package.outputs.package_file }}" "$DEV_LATEST_FILE"
# Upload the dev-latest version
echo "Uploading dev-latest version: $DEV_LATEST_FILE to $OSS_PATH..."
$OSSUTIL_BIN cp "$DEV_LATEST_FILE" "$OSS_PATH" --force
echo "✅ Dev-latest version uploaded: $DEV_LATEST_FILE"
# For main branch builds, also create a main-latest version
if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
# Create main-latest version filename
# Convert from rustfs-linux-x86_64-dev-abc123 to rustfs-linux-x86_64-main-latest
MAIN_LATEST_FILE="${PACKAGE_NAME%-dev-*}-main-latest.zip"
# Copy the original file to main-latest version
cp "${{ steps.package.outputs.package_file }}" "$MAIN_LATEST_FILE"
# Upload the main-latest version
echo "Uploading main-latest version: $MAIN_LATEST_FILE to $OSS_PATH..."
$OSSUTIL_BIN cp "$MAIN_LATEST_FILE" "$OSS_PATH" --force
echo "✅ Main-latest version uploaded: $MAIN_LATEST_FILE"
# Also create a generic main-latest for Docker builds
if [[ "${{ matrix.platform }}" == "linux" ]]; then
DOCKER_MAIN_LATEST_FILE="rustfs-linux-${{ matrix.target == 'x86_64-unknown-linux-musl' && 'x86_64' || 'aarch64' }}-main-latest.zip"
cp "${{ steps.package.outputs.package_file }}" "$DOCKER_MAIN_LATEST_FILE"
$OSSUTIL_BIN cp "$DOCKER_MAIN_LATEST_FILE" "$OSS_PATH" --force
echo "✅ Docker main-latest version uploaded: $DOCKER_MAIN_LATEST_FILE"
fi
fi
fi
echo "✅ Upload completed successfully"
# Build summary
build-summary:
name: Build Summary
needs: [build-check, build-rustfs]
if: always() && needs.build-check.outputs.build_type == 'release'
if: always() && needs.build-check.outputs.should_build == 'true'
runs-on: ubuntu-latest
steps:
- name: Build completion summary
run: |
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
VERSION="${{ needs.build-check.outputs.version }}"
echo "🎉 Build completed successfully!"
echo "📦 Build type: $BUILD_TYPE"
echo "🔢 Version: $VERSION"
echo ""
# Check build status
BUILD_STATUS="${{ needs.build-rustfs.result }}"
echo "📊 Build Results:"
echo " 📦 All platforms: $BUILD_STATUS"
echo ""
case "$BUILD_TYPE" in
"development")
echo "🛠️ Development build artifacts have been uploaded to OSS dev directory"
echo "⚠️ This is a development build - not suitable for production use"
;;
"release")
echo "🚀 Release build artifacts have been uploaded to OSS release directory"
echo "✅ This build is ready for production use"
echo "🏷️ GitHub Release will be created in this workflow"
;;
"prerelease")
echo "🧪 Prerelease build artifacts have been uploaded to OSS release directory"
echo "⚠️ This is a prerelease build - use with caution"
echo "🏷️ GitHub Release will be created in this workflow"
;;
esac
echo ""
echo "🐳 Docker Images:"
if [[ "${{ github.event.inputs.build_docker }}" == "false" ]]; then
echo "⏭️ Docker image build was skipped (binary only build)"
elif [[ "$BUILD_STATUS" == "success" ]]; then
echo "🔄 Docker images will be built and pushed automatically via workflow_run event"
else
echo "❌ Docker image build will be skipped due to build failure"
fi
# Create GitHub Release (only for tag pushes)
create-release:
name: Create GitHub Release
needs: [build-check, build-rustfs]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubuntu-latest
permissions:
contents: write
outputs:
release_id: ${{ steps.create.outputs.release_id }}
release_url: ${{ steps.create.outputs.release_url }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Download all artifacts
uses: actions/download-artifact@v4
with:
path: ./release-artifacts
- name: Prepare release assets
id: release_prep
run: |
VERSION="${GITHUB_REF#refs/tags/}"
VERSION_CLEAN="${VERSION#v}"
echo "version=${VERSION}" >> $GITHUB_OUTPUT
echo "version_clean=${VERSION_CLEAN}" >> $GITHUB_OUTPUT
# Organize artifacts
mkdir -p ./release-files
# Copy all artifacts (.zip files)
find ./release-artifacts -name "*.zip" -exec cp {} ./release-files/ \;
# Generate checksums for all files
cd ./release-files
if ls *.zip >/dev/null 2>&1; then
sha256sum *.zip >> SHA256SUMS
sha512sum *.zip >> SHA512SUMS
fi
cd ..
# Display what we're releasing
echo "=== Release Files ==="
ls -la ./release-files/
- name: Create GitHub Release
id: create
env:
GH_TOKEN: ${{ github.token }}
run: |
VERSION="${{ steps.release_prep.outputs.version }}"
VERSION_CLEAN="${{ steps.release_prep.outputs.version_clean }}"
TAG="${{ needs.build-check.outputs.version }}"
VERSION="${{ needs.build-check.outputs.version }}"
IS_PRERELEASE="${{ needs.build-check.outputs.is_prerelease }}"
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
# Determine release type for title
if [[ "$BUILD_TYPE" == "prerelease" ]]; then
if [[ "$TAG" == *"alpha"* ]]; then
RELEASE_TYPE="alpha"
elif [[ "$TAG" == *"beta"* ]]; then
RELEASE_TYPE="beta"
elif [[ "$TAG" == *"rc"* ]]; then
RELEASE_TYPE="rc"
else
RELEASE_TYPE="prerelease"
fi
else
RELEASE_TYPE="release"
fi
# Check if release already exists
if gh release view "$VERSION" >/dev/null 2>&1; then
echo "Release $VERSION already exists, skipping creation"
if gh release view "$TAG" >/dev/null 2>&1; then
echo "Release $TAG already exists"
RELEASE_ID=$(gh release view "$TAG" --json databaseId --jq '.databaseId')
RELEASE_URL=$(gh release view "$TAG" --json url --jq '.url')
else
# Get release notes from tag message
RELEASE_NOTES=$(git tag -l --format='%(contents)' "${VERSION}")
RELEASE_NOTES=$(git tag -l --format='%(contents)' "${TAG}")
if [[ -z "$RELEASE_NOTES" || "$RELEASE_NOTES" =~ ^[[:space:]]*$ ]]; then
RELEASE_NOTES="Release ${VERSION_CLEAN}"
if [[ "$IS_PRERELEASE" == "true" ]]; then
RELEASE_NOTES="Pre-release ${VERSION} (${RELEASE_TYPE})"
else
RELEASE_NOTES="Release ${VERSION}"
fi
fi
# Determine if this is a prerelease
# Create release title
if [[ "$IS_PRERELEASE" == "true" ]]; then
TITLE="RustFS $VERSION (${RELEASE_TYPE})"
else
TITLE="RustFS $VERSION"
fi
# Create the release
PRERELEASE_FLAG=""
if [[ "$VERSION" == *"alpha"* ]] || [[ "$VERSION" == *"beta"* ]] || [[ "$VERSION" == *"rc"* ]]; then
if [[ "$IS_PRERELEASE" == "true" ]]; then
PRERELEASE_FLAG="--prerelease"
fi
# Create the release only if it doesn't exist
gh release create "$VERSION" \
--title "RustFS $VERSION_CLEAN" \
gh release create "$TAG" \
--title "$TITLE" \
--notes "$RELEASE_NOTES" \
$PRERELEASE_FLAG
$PRERELEASE_FLAG \
--draft
RELEASE_ID=$(gh release view "$TAG" --json databaseId --jq '.databaseId')
RELEASE_URL=$(gh release view "$TAG" --json url --jq '.url')
fi
- name: Upload release assets
env:
GH_TOKEN: ${{ github.token }}
echo "release_id=$RELEASE_ID" >> $GITHUB_OUTPUT
echo "release_url=$RELEASE_URL" >> $GITHUB_OUTPUT
echo "Created release: $RELEASE_URL"
# Prepare and upload release assets
upload-release-assets:
name: Upload Release Assets
needs: [build-check, build-rustfs, create-release]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubuntu-latest
permissions:
contents: write
actions: read
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Download all build artifacts
uses: actions/download-artifact@v4
with:
path: ./artifacts
pattern: rustfs-*
merge-multiple: true
- name: Prepare release assets
id: prepare
run: |
VERSION="${{ steps.release_prep.outputs.version }}"
VERSION="${{ needs.build-check.outputs.version }}"
TAG="${{ needs.build-check.outputs.version }}"
cd ./release-files
mkdir -p ./release-assets
# Upload all binary files
for file in *.zip; do
# Copy and verify artifacts
ASSETS_COUNT=0
for file in ./artifacts/*.zip; do
if [[ -f "$file" ]]; then
echo "Uploading $file..."
gh release upload "$VERSION" "$file" --clobber
cp "$file" ./release-assets/
ASSETS_COUNT=$((ASSETS_COUNT + 1))
fi
done
# Upload checksum files
if [[ -f "SHA256SUMS" ]]; then
echo "Uploading SHA256SUMS..."
gh release upload "$VERSION" "SHA256SUMS" --clobber
if [[ $ASSETS_COUNT -eq 0 ]]; then
echo "❌ No artifacts found!"
exit 1
fi
if [[ -f "SHA512SUMS" ]]; then
echo "Uploading SHA512SUMS..."
gh release upload "$VERSION" "SHA512SUMS" --clobber
cd ./release-assets
# Generate checksums
if ls *.zip >/dev/null 2>&1; then
sha256sum *.zip > SHA256SUMS
sha512sum *.zip > SHA512SUMS
fi
- name: Update release notes
# Create signature placeholder files
for file in *.zip; do
echo "# Signature for $file" > "${file}.asc"
echo "# GPG signature will be added in future versions" >> "${file}.asc"
done
echo "📦 Prepared assets:"
ls -la
echo "🔢 Asset count: $ASSETS_COUNT"
- name: Upload to GitHub Release
env:
GH_TOKEN: ${{ github.token }}
run: |
VERSION="${{ steps.release_prep.outputs.version }}"
VERSION_CLEAN="${{ steps.release_prep.outputs.version_clean }}"
TAG="${{ needs.build-check.outputs.version }}"
# Check if release already has custom notes (not auto-generated)
EXISTING_NOTES=$(gh release view "$VERSION" --json body --jq '.body' 2>/dev/null || echo "")
cd ./release-assets
# Only update if release notes are empty or auto-generated
if [[ -z "$EXISTING_NOTES" ]] || [[ "$EXISTING_NOTES" == *"Release ${VERSION_CLEAN}"* ]]; then
echo "Updating release notes for $VERSION"
# Get original release notes from tag
ORIGINAL_NOTES=$(git tag -l --format='%(contents)' "${VERSION}")
if [[ -z "$ORIGINAL_NOTES" || "$ORIGINAL_NOTES" =~ ^[[:space:]]*$ ]]; then
ORIGINAL_NOTES="Release ${VERSION_CLEAN}"
# Upload all files
for file in *; do
if [[ -f "$file" ]]; then
echo "📤 Uploading $file..."
gh release upload "$TAG" "$file" --clobber
fi
done
# Use external template file and substitute variables
sed -e "s/\${VERSION}/$VERSION/g" \
-e "s/\${VERSION_CLEAN}/$VERSION_CLEAN/g" \
-e "s/\${ORIGINAL_NOTES}/$(echo "$ORIGINAL_NOTES" | sed 's/[[\.*^$()+?{|]/\\&/g')/g" \
.github/workflows/release-notes-template.md > enhanced_notes.md
echo "✅ All assets uploaded successfully"
# Update the release with enhanced notes
gh release edit "$VERSION" --notes-file enhanced_notes.md
else
echo "Release $VERSION already has custom notes, skipping update to preserve manual edits"
# Update latest.json for stable releases only
update-latest-version:
name: Update Latest Version
needs: [build-check, upload-release-assets]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.is_prerelease == 'false'
runs-on: ubuntu-latest
steps:
- name: Update latest.json
env:
OSS_ACCESS_KEY_ID: ${{ secrets.ALICLOUDOSS_KEY_ID }}
OSS_ACCESS_KEY_SECRET: ${{ secrets.ALICLOUDOSS_KEY_SECRET }}
run: |
if [[ -z "$OSS_ACCESS_KEY_ID" ]]; then
echo "⚠️ OSS credentials not available, skipping latest.json update"
exit 0
fi
VERSION="${{ needs.build-check.outputs.version }}"
TAG="${{ needs.build-check.outputs.version }}"
# Install ossutil
OSSUTIL_VERSION="2.1.1"
OSSUTIL_ZIP="ossutil-${OSSUTIL_VERSION}-linux-amd64.zip"
OSSUTIL_DIR="ossutil-${OSSUTIL_VERSION}-linux-amd64"
curl -o "$OSSUTIL_ZIP" "https://gosspublic.alicdn.com/ossutil/v2/${OSSUTIL_VERSION}/${OSSUTIL_ZIP}"
unzip "$OSSUTIL_ZIP"
chmod +x "${OSSUTIL_DIR}/ossutil"
# Create latest.json
cat > latest.json << EOF
{
"version": "${VERSION}",
"tag": "${TAG}",
"release_date": "$(date -u +%Y-%m-%dT%H:%M:%SZ)",
"release_type": "stable",
"download_url": "https://github.com/${{ github.repository }}/releases/tag/${TAG}"
}
EOF
# Upload to OSS
./${OSSUTIL_DIR}/ossutil cp latest.json oss://rustfs-version/latest.json --force
echo "✅ Updated latest.json for stable release $VERSION"
# Publish release (remove draft status)
publish-release:
name: Publish Release
needs: [build-check, create-release, upload-release-assets]
if: startsWith(github.ref, 'refs/tags/') && needs.build-check.outputs.build_type != 'development'
runs-on: ubuntu-latest
permissions:
contents: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Update release notes and publish
env:
GH_TOKEN: ${{ github.token }}
run: |
TAG="${{ needs.build-check.outputs.version }}"
VERSION="${{ needs.build-check.outputs.version }}"
IS_PRERELEASE="${{ needs.build-check.outputs.is_prerelease }}"
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
# Determine release type
if [[ "$BUILD_TYPE" == "prerelease" ]]; then
if [[ "$TAG" == *"alpha"* ]]; then
RELEASE_TYPE="alpha"
elif [[ "$TAG" == *"beta"* ]]; then
RELEASE_TYPE="beta"
elif [[ "$TAG" == *"rc"* ]]; then
RELEASE_TYPE="rc"
else
RELEASE_TYPE="prerelease"
fi
else
RELEASE_TYPE="release"
fi
# Get original release notes from tag
ORIGINAL_NOTES=$(git tag -l --format='%(contents)' "${TAG}")
if [[ -z "$ORIGINAL_NOTES" || "$ORIGINAL_NOTES" =~ ^[[:space:]]*$ ]]; then
if [[ "$IS_PRERELEASE" == "true" ]]; then
ORIGINAL_NOTES="Pre-release ${VERSION} (${RELEASE_TYPE})"
else
ORIGINAL_NOTES="Release ${VERSION}"
fi
fi
# Publish the release (remove draft status)
gh release edit "$TAG" --draft=false
echo "🎉 Released $TAG successfully!"
echo "📄 Release URL: ${{ needs.create-release.outputs.release_url }}"

View File

@@ -81,7 +81,7 @@ jobs:
cancel_others: true
paths_ignore: '["*.md", "docs/**", "deploy/**"]'
# Never skip release events and tag pushes
do_not_skip: '["release", "push"]'
do_not_skip: '["workflow_dispatch", "schedule", "merge_group", "release", "push"]'
test-and-lint:
name: Test and Lint

View File

@@ -12,42 +12,33 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Docker Images Workflow
#
# This workflow builds Docker images using pre-built binaries from the build workflow.
#
# Trigger Types:
# 1. workflow_run: Automatically triggered when "Build and Release" workflow completes
# 2. workflow_dispatch: Manual trigger for standalone Docker builds
#
# Key Features:
# - Only triggers when Linux builds (x86_64 + aarch64) are successful
# - Independent of macOS/Windows build status
# - Uses workflow_run event for precise control
# - Only builds Docker images for releases and prereleases (development builds are skipped)
name: Docker Images
# Permissions needed for workflow_run event and Docker registry access
permissions:
contents: read
packages: write
on:
push:
tags: ["*"]
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
pull_request:
branches: [main]
paths-ignore:
- "**.md"
- "**.txt"
- ".github/**"
- "docs/**"
- "deploy/**"
- "scripts/dev_*.sh"
- "LICENSE*"
- "README*"
- "**/*.png"
- "**/*.jpg"
- "**/*.svg"
- ".gitignore"
- ".dockerignore"
# Automatically triggered when build workflow completes
workflow_run:
workflows: ["Build and Release"]
types: [completed]
# Manual trigger with same parameters for consistency
workflow_dispatch:
inputs:
push_images:
@@ -55,156 +46,372 @@ on:
required: false
default: true
type: boolean
version:
description: "Version to build (latest for stable release, or specific version like v1.0.0, v1.0.0-alpha1)"
required: false
default: "latest"
type: string
force_rebuild:
description: "Force rebuild even if binary exists (useful for testing)"
required: false
default: false
type: boolean
env:
DOCKERHUB_USERNAME: rustfs
CARGO_TERM_COLOR: always
REGISTRY_DOCKERHUB: rustfs/rustfs
REGISTRY_GHCR: ghcr.io/${{ github.repository }}
DOCKER_PLATFORMS: linux/amd64,linux/arm64
jobs:
# Check if we should build
# Check if we should build Docker images
build-check:
name: Build Check
name: Docker Build Check
runs-on: ubuntu-latest
outputs:
should_build: ${{ steps.check.outputs.should_build }}
should_push: ${{ steps.check.outputs.should_push }}
build_type: ${{ steps.check.outputs.build_type }}
version: ${{ steps.check.outputs.version }}
short_sha: ${{ steps.check.outputs.short_sha }}
is_prerelease: ${{ steps.check.outputs.is_prerelease }}
create_latest: ${{ steps.check.outputs.create_latest }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 0
# For workflow_run events, checkout the specific commit that triggered the workflow
ref: ${{ github.event.workflow_run.head_sha || github.sha }}
- name: Check build conditions
id: check
run: |
should_build=false
should_push=false
build_type="none"
version=""
short_sha=""
is_prerelease=false
create_latest=false
# Always build on workflow_dispatch or when changes detected
if [[ "${{ github.event_name }}" == "workflow_dispatch" ]] || \
[[ "${{ github.event_name }}" == "push" ]] || \
[[ "${{ github.event_name }}" == "pull_request" ]]; then
if [[ "${{ github.event_name }}" == "workflow_run" ]]; then
# Triggered by build workflow completion
echo "🔗 Triggered by build workflow completion"
# Check if the triggering workflow was successful
# If the workflow succeeded, it means ALL builds (including Linux x86_64 and aarch64) succeeded
if [[ "${{ github.event.workflow_run.conclusion }}" == "success" ]]; then
echo "✅ Build workflow succeeded, all builds including Linux are successful"
should_build=true
should_push=true
else
echo "❌ Build workflow failed (conclusion: ${{ github.event.workflow_run.conclusion }}), skipping Docker build"
should_build=false
fi
# Extract version info from commit message or use commit SHA
# Use Git to generate consistent short SHA (ensures uniqueness like build.yml)
short_sha=$(git rev-parse --short "${{ github.event.workflow_run.head_sha }}")
# Determine build type based on triggering workflow event and ref
triggering_event="${{ github.event.workflow_run.event }}"
head_branch="${{ github.event.workflow_run.head_branch }}"
echo "🔍 Analyzing triggering workflow:"
echo " 📋 Event: $triggering_event"
echo " 🌿 Head branch: $head_branch"
echo " 📎 Head SHA: ${{ github.event.workflow_run.head_sha }}"
# Check if this was triggered by a tag push
if [[ "$triggering_event" == "push" ]]; then
# For tag pushes, head_branch will be like "refs/tags/v1.0.0" or just "v1.0.0"
if [[ "$head_branch" == refs/tags/* ]]; then
# Extract tag name from refs/tags/TAG_NAME
tag_name="${head_branch#refs/tags/}"
version="$tag_name"
elif [[ "$head_branch" =~ ^v?[0-9]+\.[0-9]+\.[0-9]+ ]]; then
# Direct tag name like "v1.0.0" or "1.0.0-alpha.1"
version="$head_branch"
elif [[ "$head_branch" == "main" ]]; then
# Regular branch push to main
build_type="development"
version="dev-${short_sha}"
should_build=false
echo "⏭️ Skipping Docker build for development version (main branch push)"
else
# Other branch push
build_type="development"
version="dev-${short_sha}"
should_build=false
echo "⏭️ Skipping Docker build for development version (branch: $head_branch)"
fi
# If we extracted a version (tag), determine release type
if [[ -n "$version" ]] && [[ "$version" != "dev-${short_sha}" ]]; then
# Remove 'v' prefix if present for consistent version format
if [[ "$version" == v* ]]; then
version="${version#v}"
fi
if [[ "$version" == *"alpha"* ]] || [[ "$version" == *"beta"* ]] || [[ "$version" == *"rc"* ]]; then
build_type="prerelease"
is_prerelease=true
echo "🧪 Building Docker image for prerelease: $version"
else
build_type="release"
create_latest=true
echo "🚀 Building Docker image for release: $version"
fi
fi
else
# Non-push events
build_type="development"
version="dev-${short_sha}"
should_build=false
echo "⏭️ Skipping Docker build for development version (event: $triggering_event)"
fi
echo "🔄 Build triggered by workflow_run:"
echo " 📋 Conclusion: ${{ github.event.workflow_run.conclusion }}"
echo " 🌿 Branch: ${{ github.event.workflow_run.head_branch }}"
echo " 📎 SHA: ${{ github.event.workflow_run.head_sha }}"
echo " 🎯 Event: ${{ github.event.workflow_run.event }}"
elif [[ "${{ github.event_name }}" == "workflow_dispatch" ]]; then
# Manual trigger
input_version="${{ github.event.inputs.version }}"
version="${input_version}"
should_push="${{ github.event.inputs.push_images }}"
should_build=true
fi
# Push only on main branch, tags, or manual trigger
if [[ "${{ github.ref }}" == "refs/heads/main" ]] || \
[[ "${{ startsWith(github.ref, 'refs/tags/') }}" == "true" ]] || \
[[ "${{ github.event.inputs.push_images }}" == "true" ]]; then
should_push=true
# Get short SHA
short_sha=$(git rev-parse --short HEAD)
echo "🎯 Manual Docker build triggered:"
echo " 📋 Requested version: $input_version"
echo " 🔧 Force rebuild: ${{ github.event.inputs.force_rebuild }}"
echo " 🚀 Push images: $should_push"
case "$input_version" in
"latest")
build_type="release"
create_latest=true
echo "🚀 Building with latest stable release version"
;;
# Prerelease versions (must match first, more specific)
v*alpha*|v*beta*|v*rc*|*alpha*|*beta*|*rc*)
build_type="prerelease"
is_prerelease=true
echo "🧪 Building with prerelease version: $input_version"
;;
# Release versions (match after prereleases, more general)
v[0-9]*|[0-9]*.*.*)
build_type="release"
create_latest=true
echo "📦 Building with specific release version: $input_version"
;;
*)
# Invalid version for Docker build
should_build=false
echo "❌ Invalid version for Docker build: $input_version"
echo "⚠️ Only release versions (latest, v1.0.0, 1.0.0) and prereleases (v1.0.0-alpha1, 1.0.0-beta2) are supported"
;;
esac
fi
echo "should_build=$should_build" >> $GITHUB_OUTPUT
echo "should_push=$should_push" >> $GITHUB_OUTPUT
echo "Build: $should_build, Push: $should_push"
echo "build_type=$build_type" >> $GITHUB_OUTPUT
echo "version=$version" >> $GITHUB_OUTPUT
echo "short_sha=$short_sha" >> $GITHUB_OUTPUT
echo "is_prerelease=$is_prerelease" >> $GITHUB_OUTPUT
echo "create_latest=$create_latest" >> $GITHUB_OUTPUT
echo "🐳 Docker Build Summary:"
echo " - Should build: $should_build"
echo " - Should push: $should_push"
echo " - Build type: $build_type"
echo " - Version: $version"
echo " - Short SHA: $short_sha"
echo " - Is prerelease: $is_prerelease"
echo " - Create latest: $create_latest"
# Build multi-arch Docker images
# Strategy: Build images using pre-built binaries from dl.rustfs.com
# Supports both release and dev channel binaries based on build context
# Only runs when should_build is true (which includes workflow success check)
build-docker:
name: Build Docker Images
needs: build-check
if: needs.build-check.outputs.should_build == 'true'
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
variant:
- name: production
dockerfile: Dockerfile
platforms: linux/amd64,linux/arm64
- name: ubuntu
dockerfile: .docker/Dockerfile.ubuntu22.04
platforms: linux/amd64,linux/arm64
- name: alpine
dockerfile: .docker/Dockerfile.alpine
platforms: linux/amd64,linux/arm64
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v3
- name: Login to Docker Hub
if: needs.build-check.outputs.should_push == 'true' && secrets.DOCKERHUB_USERNAME != ''
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Login to GitHub Container Registry
if: needs.build-check.outputs.should_push == 'true'
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Extract metadata and generate tags
id: meta
run: |
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
VERSION="${{ needs.build-check.outputs.version }}"
SHORT_SHA="${{ needs.build-check.outputs.short_sha }}"
CREATE_LATEST="${{ needs.build-check.outputs.create_latest }}"
# Convert version format for Dockerfile compatibility
case "$VERSION" in
"latest")
# For stable latest, use RELEASE=latest + release CHANNEL
DOCKER_RELEASE="latest"
DOCKER_CHANNEL="release"
;;
v*)
# For versioned releases (v1.0.0), remove 'v' prefix for Dockerfile
DOCKER_RELEASE="${VERSION#v}"
DOCKER_CHANNEL="release"
;;
*)
# For other versions, pass as-is
DOCKER_RELEASE="${VERSION}"
DOCKER_CHANNEL="release"
;;
esac
echo "docker_release=$DOCKER_RELEASE" >> $GITHUB_OUTPUT
echo "docker_channel=$DOCKER_CHANNEL" >> $GITHUB_OUTPUT
echo "🐳 Docker build parameters:"
echo " - Original version: $VERSION"
echo " - Docker RELEASE: $DOCKER_RELEASE"
echo " - Docker CHANNEL: $DOCKER_CHANNEL"
# Generate tags based on build type
# Only support release and prerelease builds (no development builds)
TAGS="${{ env.REGISTRY_DOCKERHUB }}:${VERSION}"
# Add channel tags for prereleases and latest for stable
if [[ "$CREATE_LATEST" == "true" ]]; then
# Stable release
TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:latest"
elif [[ "$BUILD_TYPE" == "prerelease" ]]; then
# Prerelease channel tags (alpha, beta, rc)
if [[ "$VERSION" == *"alpha"* ]]; then
CHANNEL="alpha"
elif [[ "$VERSION" == *"beta"* ]]; then
CHANNEL="beta"
elif [[ "$VERSION" == *"rc"* ]]; then
CHANNEL="rc"
fi
if [[ -n "$CHANNEL" ]]; then
TAGS="$TAGS,${{ env.REGISTRY_DOCKERHUB }}:${CHANNEL}"
fi
fi
# Output tags
echo "tags=$TAGS" >> $GITHUB_OUTPUT
# Generate labels
LABELS="org.opencontainers.image.title=RustFS"
LABELS="$LABELS,org.opencontainers.image.description=RustFS distributed object storage system"
LABELS="$LABELS,org.opencontainers.image.version=$VERSION"
LABELS="$LABELS,org.opencontainers.image.revision=${{ github.sha }}"
LABELS="$LABELS,org.opencontainers.image.source=${{ github.server_url }}/${{ github.repository }}"
LABELS="$LABELS,org.opencontainers.image.created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')"
LABELS="$LABELS,org.opencontainers.image.build-type=$BUILD_TYPE"
echo "labels=$LABELS" >> $GITHUB_OUTPUT
echo "🐳 Generated Docker tags:"
echo "$TAGS" | tr ',' '\n' | sed 's/^/ - /'
echo "📋 Build type: $BUILD_TYPE"
echo "🔖 Version: $VERSION"
- name: Build and push Docker image
uses: docker/build-push-action@v6
with:
context: .
file: ${{ matrix.variant.dockerfile }}
platforms: ${{ matrix.variant.platforms }}
push: ${{ needs.build-check.outputs.should_push == 'true' }}
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha,scope=docker-${{ matrix.variant.name }}
cache-to: type=gha,mode=max,scope=docker-${{ matrix.variant.name }}
build-args: |
BUILDTIME=$(date -u +'%Y-%m-%dT%H:%M:%SZ')
VERSION=${{ needs.build-check.outputs.version }}
BUILD_TYPE=${{ needs.build-check.outputs.build_type }}
REVISION=${{ github.sha }}
RELEASE=${{ steps.meta.outputs.docker_release }}
CHANNEL=${{ steps.meta.outputs.docker_channel }}
BUILDKIT_INLINE_CACHE=1
# Enable advanced BuildKit features for better performance
provenance: false
sbom: false
# Add retry mechanism by splitting the build process
no-cache: false
pull: true
# Note: Manifest creation is no longer needed as we only build one variant
# Multi-arch manifests are automatically created by docker/build-push-action
# Docker build summary
docker-summary:
name: Docker Build Summary
needs: [build-check, build-docker]
if: always() && needs.build-check.outputs.should_build == 'true'
runs-on: ubuntu-latest
steps:
- name: Docker build completion summary
run: |
BUILD_TYPE="${{ needs.build-check.outputs.build_type }}"
VERSION="${{ needs.build-check.outputs.version }}"
CREATE_LATEST="${{ needs.build-check.outputs.create_latest }}"
echo "🐳 Docker build completed successfully!"
echo "📦 Build type: $BUILD_TYPE"
echo "🔢 Version: $VERSION"
echo "🚀 Strategy: Images using pre-built binaries (release channel only)"
echo ""
case "$BUILD_TYPE" in
"release")
echo "🚀 Release Docker image has been built with ${VERSION} tags"
echo "✅ This image is ready for production use"
if [[ "$CREATE_LATEST" == "true" ]]; then
echo "🏷️ Latest tag has been created for stable release"
fi
;;
"prerelease")
echo "🧪 Prerelease Docker image has been built with ${VERSION} tags"
echo "⚠️ This is a prerelease image - use with caution"
echo "🚫 Latest tag NOT created for prerelease"
;;
*)
echo "❌ Unexpected build type: $BUILD_TYPE"
;;
esac
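
Taken together, the tagging rules above behave like this small standalone sketch (the `rustfs/rustfs` repository name stands in for the `REGISTRY_DOCKERHUB` env value and is an assumption):

```bash
# Standalone sketch of the tag-classification rules implemented above
classify_tags() {
  local version="${1#v}"                 # strip a leading "v" for a consistent format
  local tags="rustfs/rustfs:${version}"
  case "$version" in
    *alpha*) tags="$tags,rustfs/rustfs:alpha" ;;   # prerelease channel tags
    *beta*)  tags="$tags,rustfs/rustfs:beta" ;;
    *rc*)    tags="$tags,rustfs/rustfs:rc" ;;
    *)       tags="$tags,rustfs/rustfs:latest" ;;  # stable release gets latest
  esac
  echo "$tags"
}

classify_tags v1.0.0-alpha.11  # rustfs/rustfs:1.0.0-alpha.11,rustfs/rustfs:alpha
classify_tags v1.0.0           # rustfs/rustfs:1.0.0,rustfs/rustfs:latest
```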


@@ -1,8 +1,22 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
name: "issue-translator"
on:
issue_comment:
types: [created]
issues:
types: [opened]
jobs:
@@ -14,5 +28,5 @@ jobs:
IS_MODIFY_TITLE: false
# not required, default false. Decide whether to modify the issue title
# if true, the robot account @Issues-translate-bot must have modification permissions, invite @Issues-translate-bot to your project or use your custom bot.
CUSTOM_BOT_NOTE: Bot detected the issue body's language is not English, translate it automatically.
# not required. Customize the translation robot prefix message.


@@ -18,10 +18,10 @@ on:
push:
branches: [main]
paths:
- "**/*.rs"
- "**/Cargo.toml"
- "**/Cargo.lock"
- ".github/workflows/performance.yml"
workflow_dispatch:
inputs:
profile_duration:
@@ -73,12 +73,11 @@ jobs:
echo "RUSTFS_VOLUMES=./target/volume/test{0...4}" >> $GITHUB_ENV
echo "RUST_LOG=rustfs=info,ecstore=info,s3s=info,iam=info,rustfs-obs=info" >> $GITHUB_ENV
- name: Verify console static assets
run: |
curl -L "https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip" \
-o tempfile.zip --retry 3 --retry-delay 5
unzip -o tempfile.zip -d ./rustfs/static
rm tempfile.zip
# Console static assets are already embedded in the repository
echo "Console static assets size: $(du -sh rustfs/static/)"
echo "Console static assets are embedded via rust-embed, no external download needed"
- name: Build with profiling optimizations
run: |


@@ -1,78 +0,0 @@
## RustFS ${VERSION_CLEAN}
${ORIGINAL_NOTES}
---
### 🚀 Quick Download
**Linux (Static Binaries - No Dependencies):**
```bash
# x86_64 (Intel/AMD)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-x86_64-unknown-linux-musl.zip
unzip rustfs-x86_64-unknown-linux-musl.zip
sudo mv rustfs /usr/local/bin/
# ARM64 (Graviton, Apple Silicon VMs)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-aarch64-unknown-linux-musl.zip
unzip rustfs-aarch64-unknown-linux-musl.zip
sudo mv rustfs /usr/local/bin/
```
**macOS:**
```bash
# Apple Silicon (M1/M2/M3)
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-aarch64-apple-darwin.zip
unzip rustfs-aarch64-apple-darwin.zip
sudo mv rustfs /usr/local/bin/
# Intel
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/rustfs-x86_64-apple-darwin.zip
unzip rustfs-x86_64-apple-darwin.zip
sudo mv rustfs /usr/local/bin/
```
### 📁 Available Downloads
| Platform | Architecture | File | Description |
|----------|-------------|------|-------------|
| Linux | x86_64 | `rustfs-x86_64-unknown-linux-musl.zip` | Static binary, no dependencies |
| Linux | ARM64 | `rustfs-aarch64-unknown-linux-musl.zip` | Static binary, no dependencies |
| macOS | Apple Silicon | `rustfs-aarch64-apple-darwin.zip` | Native binary, ZIP archive |
| macOS | Intel | `rustfs-x86_64-apple-darwin.zip` | Native binary, ZIP archive |
### 🔐 Verification
Download checksums and verify your download:
```bash
# Download checksums
curl -LO https://github.com/rustfs/rustfs/releases/download/${VERSION}/SHA256SUMS
# Verify (Linux)
sha256sum -c SHA256SUMS --ignore-missing
# Verify (macOS)
shasum -a 256 -c SHA256SUMS --ignore-missing
```
### 🛠️ System Requirements
- **Linux**: Any distribution with glibc 2.17+ (CentOS 7+, Ubuntu 16.04+)
- **macOS**: 10.15+ (Catalina or later)
- **Windows**: Windows 10 version 1809 or later
### 📚 Documentation
- [Installation Guide](https://github.com/rustfs/rustfs#installation)
- [Quick Start](https://github.com/rustfs/rustfs#quick-start)
- [Configuration](https://github.com/rustfs/rustfs/blob/main/docs/)
- [API Documentation](https://docs.rs/rustfs)
### 🆘 Support
- 🐛 [Report Issues](https://github.com/rustfs/rustfs/issues)
- 💬 [Community Discussions](https://github.com/rustfs/rustfs/discussions)
- 📖 [Documentation](https://github.com/rustfs/rustfs/tree/main/docs)

.gitignore

@@ -19,3 +19,4 @@ deploy/certs/*
profile.json
.docker/openobserve-otel/data
*.zst
.secrets

Cargo.lock (generated; diff suppressed because it is too large)

@@ -63,7 +63,7 @@ rustfs-filemeta = { path = "crates/filemeta" }
rustfs-rio = { path = "crates/rio" }
[workspace.dependencies]
rustfs-ahm = { path = "crates/ahm", version = "0.0.3" }
rustfs-ahm = { path = "crates/ahm", version = "0.0.5" }
rustfs-s3select-api = { path = "crates/s3select-api", version = "0.0.5" }
rustfs-appauth = { path = "crates/appauth", version = "0.0.5" }
rustfs-common = { path = "crates/common", version = "0.0.5" }
@@ -89,7 +89,7 @@ aes-gcm = { version = "0.10.3", features = ["std"] }
arc-swap = "1.7.1"
argon2 = { version = "0.5.3", features = ["std"] }
atoi = "2.0.0"
async-channel = "2.4.0"
async-channel = "2.5.0"
async-recursion = "1.1.1"
async-trait = "0.1.88"
async-compression = { version = "0.4.0" }
@@ -107,7 +107,7 @@ byteorder = "1.5.0"
cfg-if = "1.0.1"
chacha20poly1305 = { version = "0.10.1" }
chrono = { version = "0.4.41", features = ["serde"] }
clap = { version = "4.5.40", features = ["derive", "env"] }
clap = { version = "4.5.41", features = ["derive", "env"] }
const-str = { version = "0.6.2", features = ["std", "proc"] }
crc32fast = "1.4.2"
criterion = { version = "0.5", features = ["html_reports"] }
@@ -116,7 +116,7 @@ datafusion = "46.0.1"
derive_builder = "0.20.2"
dioxus = { version = "0.6.3", features = ["router"] }
dirs = "6.0.0"
enumset = "1.1.6"
enumset = "1.1.7"
flatbuffers = "25.2.10"
flate2 = "1.1.2"
flexi_logger = { version = "0.31.2", features = ["trc", "dont_minimize_extra_stacks"] }
@@ -130,7 +130,7 @@ hex-simd = "0.8.0"
highway = { version = "1.3.0" }
hmac = "0.12.1"
hyper = "1.6.0"
hyper-util = { version = "0.1.14", features = [
hyper-util = { version = "0.1.15", features = [
"tokio",
"server-auto",
"server-graceful",
@@ -182,9 +182,9 @@ pbkdf2 = "0.12.2"
percent-encoding = "2.3.1"
pin-project-lite = "0.2.16"
prost = "0.13.5"
quick-xml = "0.37.5"
quick-xml = "0.38.0"
rand = "0.9.1"
rdkafka = { version = "0.37.0", features = ["tokio"] }
rdkafka = { version = "0.38.0", features = ["tokio"] }
reed-solomon-simd = { version = "3.0.1" }
regex = { version = "1.11.1" }
reqwest = { version = "0.12.22", default-features = false, features = [
@@ -207,10 +207,10 @@ rumqttc = { version = "0.24" }
rust-embed = { version = "8.7.2" }
rust-i18n = { version = "3.1.5" }
rustfs-rsc = "2025.506.1"
rustls = { version = "0.23.28" }
rustls = { version = "0.23.29" }
rustls-pki-types = "1.12.0"
rustls-pemfile = "2.2.0"
s3s = { version = "0.12.0-minio-preview.1" }
s3s = { version = "0.12.0-minio-preview.2" }
shadow-rs = { version = "1.2.0", default-features = false }
serde = { version = "1.0.219", features = ["derive"] }
serde_json = { version = "1.0.140", features = ["raw_value"] }
@@ -222,10 +222,12 @@ siphasher = "1.0.1"
smallvec = { version = "1.15.1", features = ["serde"] }
snafu = "0.8.6"
snap = "1.1.1"
socket2 = "0.5.10"
socket2 = "0.6.0"
strum = { version = "0.27.1", features = ["derive"] }
sysinfo = "0.35.2"
sysinfo = "0.36.0"
sysctl = "0.6.0"
tempfile = "3.20.0"
temp-env = "0.3.6"
test-case = "3.3.1"
thiserror = "2.0.12"
time = { version = "0.3.41", features = [
@@ -239,6 +241,7 @@ tokio = { version = "1.46.1", features = ["fs", "rt-multi-thread"] }
tokio-rustls = { version = "0.26.2", default-features = false }
tokio-stream = { version = "0.1.17" }
tokio-tar = "0.3.1"
tokio-test = "0.4.4"
tokio-util = { version = "0.7.15", features = ["io", "compat"] }
tonic = { version = "0.13.1", features = ["gzip"] }
tonic-build = { version = "0.13.1" }
@@ -263,7 +266,7 @@ winapi = { version = "0.3.9" }
xxhash-rust = { version = "0.8.15", features = ["xxh64", "xxh3"] }
zip = "2.4.2"
zstd = "0.13.3"
anyhow = "1.0.86"
anyhow = "1.0.98"
[profile.wasm-dev]
inherits = "dev"


@@ -1,49 +1,121 @@
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Multi-stage build for RustFS production image
FROM alpine:3.18 AS builder
# Build arguments - use TARGETPLATFORM for consistency with Dockerfile.source
ARG TARGETPLATFORM
ARG BUILDPLATFORM
ARG RELEASE=latest
# Install dependencies for downloading and verifying binaries
RUN apk add --no-cache \
ca-certificates \
curl \
wget \
unzip \
jq
# Create build directory
WORKDIR /build
# Map TARGETPLATFORM to architecture format used in builds
RUN case "${TARGETPLATFORM}" in \
"linux/amd64") ARCH="x86_64" ;; \
"linux/arm64") ARCH="aarch64" ;; \
*) echo "Unsupported platform: ${TARGETPLATFORM}" && exit 1 ;; \
esac && \
echo "ARCH=${ARCH}" > /build/arch.env
# Download rustfs binary from dl.rustfs.com (release channel only)
RUN . /build/arch.env && \
BASE_URL="https://dl.rustfs.com/artifacts/rustfs/release" && \
PLATFORM="linux" && \
if [ "${RELEASE}" = "latest" ]; then \
# Download latest release version \
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-latest.zip"; \
DOWNLOAD_URL="${BASE_URL}/${PACKAGE_NAME}"; \
echo "📥 Downloading latest release build: ${PACKAGE_NAME}"; \
else \
# Download specific release version \
PACKAGE_NAME="rustfs-${PLATFORM}-${ARCH}-v${RELEASE}.zip"; \
DOWNLOAD_URL="${BASE_URL}/${PACKAGE_NAME}"; \
echo "📥 Downloading specific release version: ${PACKAGE_NAME}"; \
fi && \
echo "🔗 Download URL: ${DOWNLOAD_URL}" && \
curl -f -L "${DOWNLOAD_URL}" -o /build/rustfs.zip && \
if [ ! -f /build/rustfs.zip ] || [ ! -s /build/rustfs.zip ]; then \
echo "❌ Failed to download binary package"; \
echo "💡 Make sure the package ${PACKAGE_NAME} exists"; \
echo "🔗 Check: ${DOWNLOAD_URL}"; \
exit 1; \
fi && \
unzip /build/rustfs.zip -d /build && \
chmod +x /build/rustfs && \
rm /build/rustfs.zip && \
echo "✅ Successfully downloaded and extracted rustfs binary"
# Runtime stage
FROM alpine:latest
# Set build arguments and labels
ARG RELEASE=latest
ARG BUILD_DATE
ARG VCS_REF
LABEL name="RustFS" \
vendor="RustFS Team" \
maintainer="RustFS Team <dev@rustfs.com>" \
version="${RELEASE}" \
release="${RELEASE}" \
build-date="${BUILD_DATE}" \
vcs-ref="${VCS_REF}" \
summary="RustFS is a high-performance distributed object storage system written in Rust, compatible with S3 API." \
description="RustFS is a high-performance distributed object storage software built using Rust. It supports erasure coding storage, multi-tenant management, observability, and other enterprise-level features." \
url="https://rustfs.com" \
license="Apache-2.0"
# Install runtime dependencies
RUN apk add --no-cache \
ca-certificates \
curl \
tzdata \
bash \
&& addgroup -g 1000 rustfs \
&& adduser -u 1000 -G rustfs -s /bin/sh -D rustfs
# Environment variables
ENV RUSTFS_ACCESS_KEY=rustfsadmin \
RUSTFS_SECRET_KEY=rustfsadmin \
RUSTFS_ADDRESS=":9000" \
RUSTFS_CONSOLE_ENABLE=true \
RUSTFS_VOLUMES=/data \
RUST_LOG=warn
# Set permissions for /usr/bin (similar to MinIO's approach)
RUN chmod -R 755 /usr/bin
# Copy CA certificates and binaries from build stage
COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=builder /build/rustfs /usr/bin/
# Set executable permissions
RUN chmod +x /usr/bin/rustfs
# Create data directory
RUN mkdir -p /data /config && chown -R rustfs:rustfs /data /config
# Switch to non-root user
USER rustfs
# Set working directory
WORKDIR /data
# Expose port
EXPOSE 9000
# Volume for data
VOLUME ["/data"]
# Set entrypoint
ENTRYPOINT ["/usr/bin/rustfs"]


@@ -1,21 +0,0 @@
FROM ubuntu:latest
# RUN apk add --no-cache <package-name>
# If rustfs has dependencies, add them here, for example:
# RUN apk add --no-cache openssl
# RUN apk add --no-cache bash # install Bash
WORKDIR /app
# Create directories matching RUSTFS_VOLUMES
RUN mkdir -p /root/data/target/volume/test1 /root/data/target/volume/test2 /root/data/target/volume/test3 /root/data/target/volume/test4
# COPY ./target/x86_64-unknown-linux-musl/release/rustfs /app/rustfs
COPY ./target/x86_64-unknown-linux-gnu/release/rustfs /app/rustfs
RUN chmod +x /app/rustfs
EXPOSE 9000
EXPOSE 9002
CMD ["/app/rustfs"]


@@ -1,10 +1,26 @@
# Multi-stage Dockerfile for RustFS - LOCAL DEVELOPMENT ONLY
#
# ⚠️ IMPORTANT: This Dockerfile is for local development and testing only.
# ⚠️ It builds RustFS from source code and is NOT used in CI/CD pipelines.
# ⚠️ CI/CD pipeline uses pre-built binaries from Dockerfile instead.
#
# Usage for local development:
# docker build -f Dockerfile.source -t rustfs:dev-local .
# docker run --rm -p 9000:9000 rustfs:dev-local
#
# Supports cross-compilation for amd64 and arm64 architectures
ARG TARGETPLATFORM
ARG BUILDPLATFORM
# Build stage
FROM --platform=$BUILDPLATFORM rust:1.88-bookworm AS builder
# Re-declare build arguments after FROM (required for multi-stage builds)
ARG TARGETPLATFORM
ARG BUILDPLATFORM
# Debug: Print platform information
RUN echo "🐳 Build Info: BUILDPLATFORM=$BUILDPLATFORM, TARGETPLATFORM=$TARGETPLATFORM"
# Install required build dependencies
RUN apt-get update && apt-get install -y \
@@ -18,6 +34,8 @@ RUN apt-get update && apt-get install -y \
lld \
&& rm -rf /var/lib/apt/lists/*
# Note: sccache removed for simpler builds
# Install cross-compilation tools for ARM64
RUN if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
apt-get update && \
@@ -37,10 +55,13 @@ RUN wget https://github.com/google/flatbuffers/releases/download/v25.2.10/Linux.
&& mv flatc /usr/local/bin/ && chmod +x /usr/local/bin/flatc && rm -rf Linux.flatc.binary.g++-13.zip
# Set up Rust targets based on platform
RUN case "$TARGETPLATFORM" in \
RUN set -e && \
PLATFORM="${TARGETPLATFORM:-linux/amd64}" && \
echo "🎯 Setting up Rust target for platform: $PLATFORM" && \
case "$PLATFORM" in \
"linux/amd64") rustup target add x86_64-unknown-linux-gnu ;; \
"linux/arm64") rustup target add aarch64-unknown-linux-gnu ;; \
*) echo "Unsupported platform: $TARGETPLATFORM" && exit 1 ;; \
*) echo "Unsupported platform: $PLATFORM" && exit 1 ;; \
esac
# Set up environment for cross-compilation
@@ -50,37 +71,37 @@ ENV CXX_aarch64_unknown_linux_gnu=aarch64-linux-gnu-g++
WORKDIR /usr/src/rustfs
# Copy Cargo files for dependency caching
COPY Cargo.toml Cargo.lock ./
COPY */Cargo.toml ./*/
# Create dummy main.rs files for dependency compilation
RUN find . -name "Cargo.toml" -not -path "./Cargo.toml" | \
xargs -I {} dirname {} | \
xargs -I {} sh -c 'mkdir -p {}/src && echo "fn main() {}" > {}/src/main.rs'
# Build dependencies only (cache layer)
RUN case "$TARGETPLATFORM" in \
"linux/amd64") cargo build --release --target x86_64-unknown-linux-gnu ;; \
"linux/arm64") cargo build --release --target aarch64-unknown-linux-gnu ;; \
esac
# Copy all source code
COPY . .
# Configure cargo for optimized builds
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true \
CARGO_REGISTRIES_CRATES_IO_PROTOCOL=sparse \
CARGO_INCREMENTAL=0 \
CARGO_PROFILE_RELEASE_DEBUG=false \
CARGO_PROFILE_RELEASE_SPLIT_DEBUGINFO=off \
CARGO_PROFILE_RELEASE_STRIP=symbols
# Generate protobuf code
RUN cargo run --bin gproto
# Build the actual application with optimizations
RUN case "$TARGETPLATFORM" in \
"linux/amd64") \
echo "🔨 Building for amd64..." && \
rustup target add x86_64-unknown-linux-gnu && \
cargo build --release --target x86_64-unknown-linux-gnu --bin rustfs -j $(nproc) && \
cp target/x86_64-unknown-linux-gnu/release/rustfs /usr/local/bin/rustfs \
;; \
"linux/arm64") \
echo "🔨 Building for arm64..." && \
rustup target add aarch64-unknown-linux-gnu && \
cargo build --release --target aarch64-unknown-linux-gnu --bin rustfs -j $(nproc) && \
cp target/aarch64-unknown-linux-gnu/release/rustfs /usr/local/bin/rustfs \
;; \
*) \
echo "❌ Unsupported platform: $TARGETPLATFORM" && exit 1 \
;; \
esac
# Runtime stage - Ubuntu minimal for better compatibility
@@ -111,11 +132,19 @@ RUN chmod +x /app/rustfs && chown rustfs:rustfs /app/rustfs
USER rustfs
# Expose ports
EXPOSE 9000
# Health check
HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
CMD wget --no-verbose --tries=1 --spider http://localhost:9000/health || exit 1
# Environment variables
ENV RUSTFS_ACCESS_KEY=rustfsadmin \
RUSTFS_SECRET_KEY=rustfsadmin \
RUSTFS_ADDRESS=":9000" \
RUSTFS_CONSOLE_ENABLE=true \
RUSTFS_VOLUMES=/data \
RUST_LOG=warn
# Volume for data
VOLUME ["/data"]
# Set default command
CMD ["/app/rustfs"]

Makefile

@@ -5,7 +5,9 @@
DOCKER_CLI ?= docker
IMAGE_NAME ?= rustfs:v1.0.0
CONTAINER_NAME ?= rustfs-dev
DOCKERFILE_PATH = $(shell pwd)/.docker
# Docker build configurations
DOCKERFILE_PRODUCTION = Dockerfile
DOCKERFILE_SOURCE = Dockerfile.source
# Code quality and formatting targets
.PHONY: fmt
@@ -44,21 +46,6 @@ setup-hooks:
chmod +x .git/hooks/pre-commit
@echo "✅ Git hooks setup complete!"
.PHONY: init-devenv
init-devenv:
$(DOCKER_CLI) build -t $(IMAGE_NAME) -f $(DOCKERFILE_PATH)/Dockerfile.devenv .
$(DOCKER_CLI) stop $(CONTAINER_NAME)
$(DOCKER_CLI) rm $(CONTAINER_NAME)
$(DOCKER_CLI) run -d --name $(CONTAINER_NAME) -p 9010:9010 -p 9000:9000 -v $(shell pwd):/root/s3-rustfs -it $(IMAGE_NAME)
.PHONY: start
start:
$(DOCKER_CLI) start $(CONTAINER_NAME)
.PHONY: stop
stop:
$(DOCKER_CLI) stop $(CONTAINER_NAME)
.PHONY: e2e-server
e2e-server:
sh $(shell pwd)/scripts/run.sh
@@ -67,86 +54,184 @@ e2e-server:
probe-e2e:
sh $(shell pwd)/scripts/probe.sh
# make BUILD_OS=ubuntu22.04 build
# in target/ubuntu22.04/release/rustfs
# make BUILD_OS=rockylinux9.3 build
# in target/rockylinux9.3/release/rustfs
BUILD_OS ?= rockylinux9.3
# Native build using build-rustfs.sh script
.PHONY: build
build: ROCKYLINUX_BUILD_IMAGE_NAME = rustfs-$(BUILD_OS):v1
build: ROCKYLINUX_BUILD_CONTAINER_NAME = rustfs-$(BUILD_OS)-build
build: BUILD_CMD = /root/.cargo/bin/cargo build --release --bin rustfs --target-dir /root/s3-rustfs/target/$(BUILD_OS)
build:
$(DOCKER_CLI) build -t $(ROCKYLINUX_BUILD_IMAGE_NAME) -f $(DOCKERFILE_PATH)/Dockerfile.$(BUILD_OS) .
$(DOCKER_CLI) run --rm --name $(ROCKYLINUX_BUILD_CONTAINER_NAME) -v $(shell pwd):/root/s3-rustfs -it $(ROCKYLINUX_BUILD_IMAGE_NAME) $(BUILD_CMD)
@echo "🔨 Building RustFS using build-rustfs.sh script..."
./build-rustfs.sh
.PHONY: build-dev
build-dev:
@echo "🔨 Building RustFS in development mode..."
./build-rustfs.sh --dev
# Docker-based build (alternative approach)
# Usage: make BUILD_OS=ubuntu22.04 build-docker
# Output: target/ubuntu22.04/release/rustfs
BUILD_OS ?= rockylinux9.3
.PHONY: build-docker
build-docker: SOURCE_BUILD_IMAGE_NAME = rustfs-$(BUILD_OS):v1
build-docker: SOURCE_BUILD_CONTAINER_NAME = rustfs-$(BUILD_OS)-build
build-docker: BUILD_CMD = /root/.cargo/bin/cargo build --release --bin rustfs --target-dir /root/s3-rustfs/target/$(BUILD_OS)
build-docker:
@echo "🐳 Building RustFS using Docker ($(BUILD_OS))..."
$(DOCKER_CLI) build -t $(SOURCE_BUILD_IMAGE_NAME) -f $(DOCKERFILE_SOURCE) .
$(DOCKER_CLI) run --rm --name $(SOURCE_BUILD_CONTAINER_NAME) -v $(shell pwd):/root/s3-rustfs -it $(SOURCE_BUILD_IMAGE_NAME) $(BUILD_CMD)
.PHONY: build-musl
build-musl:
@echo "🔨 Building rustfs for x86_64-unknown-linux-musl..."
cargo build --target x86_64-unknown-linux-musl --bin rustfs -r
@echo "💡 On macOS/Windows, use 'make build-docker' or 'make docker-dev' instead"
./build-rustfs.sh --platform x86_64-unknown-linux-musl
.PHONY: build-gnu
build-gnu:
@echo "🔨 Building rustfs for x86_64-unknown-linux-gnu..."
cargo build --target x86_64-unknown-linux-gnu --bin rustfs -r
@echo "💡 On macOS/Windows, use 'make build-docker' or 'make docker-dev' instead"
./build-rustfs.sh --platform x86_64-unknown-linux-gnu
.PHONY: deploy-dev
deploy-dev: build-musl
@echo "🚀 Deploying to dev server: $${IP}"
./scripts/dev_deploy.sh $${IP}
# Multi-architecture Docker build targets
.PHONY: docker-build-multiarch
docker-build-multiarch:
@echo "🏗️ Building multi-architecture Docker images..."
./scripts/build-docker-multiarch.sh
# ========================================================================================
# Docker Multi-Architecture Builds (Primary Methods)
# ========================================================================================
.PHONY: docker-build-multiarch-push
docker-build-multiarch-push:
@echo "🚀 Building and pushing multi-architecture Docker images..."
./scripts/build-docker-multiarch.sh --push
# Production builds using docker-buildx.sh (for CI/CD and production)
.PHONY: docker-buildx
docker-buildx:
@echo "🏗️ Building multi-architecture production Docker images with buildx..."
./docker-buildx.sh
.PHONY: docker-build-multiarch-version
docker-build-multiarch-version:
.PHONY: docker-buildx-push
docker-buildx-push:
@echo "🚀 Building and pushing multi-architecture production Docker images with buildx..."
./docker-buildx.sh --push
.PHONY: docker-buildx-version
docker-buildx-version:
@if [ -z "$(VERSION)" ]; then \
echo "❌ 错误: 请指定版本, 例如: make docker-build-multiarch-version VERSION=v1.0.0"; \
echo "❌ 错误: 请指定版本, 例如: make docker-buildx-version VERSION=v1.0.0"; \
exit 1; \
fi
@echo "🏗️ Building multi-architecture Docker images (version: $(VERSION))..."
./scripts/build-docker-multiarch.sh --version $(VERSION)
@echo "🏗️ Building multi-architecture production Docker images (version: $(VERSION))..."
./docker-buildx.sh --release $(VERSION)
.PHONY: docker-push-multiarch-version
docker-push-multiarch-version:
.PHONY: docker-buildx-push-version
docker-buildx-push-version:
@if [ -z "$(VERSION)" ]; then \
echo "❌ 错误: 请指定版本, 例如: make docker-push-multiarch-version VERSION=v1.0.0"; \
echo "❌ 错误: 请指定版本, 例如: make docker-buildx-push-version VERSION=v1.0.0"; \
exit 1; \
fi
@echo "🚀 Building and pushing multi-architecture Docker images (version: $(VERSION))..."
./scripts/build-docker-multiarch.sh --version $(VERSION) --push
@echo "🚀 Building and pushing multi-architecture production Docker images (version: $(VERSION))..."
./docker-buildx.sh --release $(VERSION) --push
.PHONY: docker-build-ubuntu
docker-build-ubuntu:
@echo "🏗️ Building multi-architecture Ubuntu Docker images..."
./scripts/build-docker-multiarch.sh --type ubuntu
# Development/Source builds using direct buildx commands
.PHONY: docker-dev
docker-dev:
@echo "🏗️ Building multi-architecture development Docker images with buildx..."
@echo "💡 This builds from source code and is intended for local development and testing"
@echo "⚠️ Multi-arch images cannot be loaded locally, use docker-dev-push to push to registry"
$(DOCKER_CLI) buildx build \
--platform linux/amd64,linux/arm64 \
--file $(DOCKERFILE_SOURCE) \
--tag rustfs:source-latest \
--tag rustfs:dev-latest \
.
.PHONY: docker-build-rockylinux
docker-build-rockylinux:
@echo "🏗️ Building multi-architecture RockyLinux Docker images..."
./scripts/build-docker-multiarch.sh --type rockylinux
.PHONY: docker-dev-local
docker-dev-local:
@echo "🏗️ Building single-architecture development Docker image for local use..."
@echo "💡 This builds from source code for the current platform and loads locally"
$(DOCKER_CLI) buildx build \
--file $(DOCKERFILE_SOURCE) \
--tag rustfs:source-latest \
--tag rustfs:dev-latest \
--load \
.
.PHONY: docker-build-devenv
docker-build-devenv:
@echo "🏗️ Building multi-architecture development environment Docker images..."
./scripts/build-docker-multiarch.sh --type devenv
.PHONY: docker-dev-push
docker-dev-push:
@if [ -z "$(REGISTRY)" ]; then \
echo "❌ 错误: 请指定镜像仓库, 例如: make docker-dev-push REGISTRY=ghcr.io/username"; \
exit 1; \
fi
@echo "🚀 Building and pushing multi-architecture development Docker images..."
@echo "💡 推送到仓库: $(REGISTRY)"
$(DOCKER_CLI) buildx build \
--platform linux/amd64,linux/arm64 \
--file $(DOCKERFILE_SOURCE) \
--tag $(REGISTRY)/rustfs:source-latest \
--tag $(REGISTRY)/rustfs:dev-latest \
--push \
.
.PHONY: docker-build-all-types
docker-build-all-types:
@echo "🏗️ Building all multi-architecture Docker image types..."
./scripts/build-docker-multiarch.sh --type production
./scripts/build-docker-multiarch.sh --type ubuntu
./scripts/build-docker-multiarch.sh --type rockylinux
./scripts/build-docker-multiarch.sh --type devenv
# Local production builds using direct buildx (alternative to docker-buildx.sh)
.PHONY: docker-buildx-production-local
docker-buildx-production-local:
@echo "🏗️ Building single-architecture production Docker image locally..."
@echo "💡 Alternative to docker-buildx.sh for local testing"
$(DOCKER_CLI) buildx build \
--file $(DOCKERFILE_PRODUCTION) \
--tag rustfs:production-latest \
--tag rustfs:latest \
--load \
--build-arg RELEASE=latest \
.
# ========================================================================================
# Single Architecture Docker Builds (Traditional)
# ========================================================================================
.PHONY: docker-build-production
docker-build-production:
@echo "🏗️ Building single-architecture production Docker image..."
@echo "💡 Consider using 'make docker-buildx-production-local' for multi-arch support"
$(DOCKER_CLI) build -f $(DOCKERFILE_PRODUCTION) -t rustfs:latest .
.PHONY: docker-build-source
docker-build-source:
@echo "🏗️ Building single-architecture source Docker image..."
@echo "💡 Consider using 'make docker-dev-local' for multi-arch support"
$(DOCKER_CLI) build -f $(DOCKERFILE_SOURCE) -t rustfs:source .
# ========================================================================================
# Development Environment
# ========================================================================================
.PHONY: dev-env-start
dev-env-start:
@echo "🚀 Starting development environment..."
$(DOCKER_CLI) buildx build \
--file $(DOCKERFILE_SOURCE) \
--tag rustfs:dev \
--load \
.
$(DOCKER_CLI) stop $(CONTAINER_NAME) 2>/dev/null || true
$(DOCKER_CLI) rm $(CONTAINER_NAME) 2>/dev/null || true
$(DOCKER_CLI) run -d --name $(CONTAINER_NAME) \
-p 9010:9010 -p 9000:9000 \
-v $(shell pwd):/workspace \
-it rustfs:dev
.PHONY: dev-env-stop
dev-env-stop:
@echo "🛑 Stopping development environment..."
$(DOCKER_CLI) stop $(CONTAINER_NAME) 2>/dev/null || true
$(DOCKER_CLI) rm $(CONTAINER_NAME) 2>/dev/null || true
.PHONY: dev-env-restart
dev-env-restart: dev-env-stop dev-env-start
# ========================================================================================
# Build Utilities
# ========================================================================================
.PHONY: docker-inspect-multiarch
docker-inspect-multiarch:
@@ -160,41 +245,106 @@ docker-inspect-multiarch:
.PHONY: build-cross-all
build-cross-all:
@echo "🔧 Building all target architectures..."
@if ! command -v cross &> /dev/null; then \
echo "📦 Installing cross..."; \
cargo install cross; \
fi
@echo "💡 On macOS/Windows, use 'make docker-dev' for reliable multi-arch builds"
@echo "🔨 Generating protobuf code..."
cargo run --bin gproto || true
@echo "🔨 Building x86_64-unknown-linux-musl..."
cargo build --release --target x86_64-unknown-linux-musl --bin rustfs
./build-rustfs.sh --platform x86_64-unknown-linux-musl
@echo "🔨 Building aarch64-unknown-linux-gnu..."
cross build --release --target aarch64-unknown-linux-gnu --bin rustfs
./build-rustfs.sh --platform aarch64-unknown-linux-gnu
@echo "✅ All architectures built successfully!"
# ========================================================================================
# Help and Documentation
# ========================================================================================
.PHONY: help-build
help-build:
@echo "🔨 RustFS 构建帮助:"
@echo ""
@echo "🚀 本地构建 (推荐使用):"
@echo " make build # 构建 RustFS 二进制文件 (默认包含 console)"
@echo " make build-dev # 开发模式构建"
@echo " make build-musl # 构建 musl 版本"
@echo " make build-gnu # 构建 GNU 版本"
@echo ""
@echo "🐳 Docker 构建:"
@echo " make build-docker # 使用 Docker 容器构建"
@echo " make build-docker BUILD_OS=ubuntu22.04 # 指定构建系统"
@echo ""
@echo "🏗️ 跨架构构建:"
@echo " make build-cross-all # 构建所有架构的二进制文件"
@echo ""
@echo "🔧 直接使用 build-rustfs.sh 脚本:"
@echo " ./build-rustfs.sh --help # 查看脚本帮助"
@echo " ./build-rustfs.sh --no-console # 构建时跳过 console 资源"
@echo " ./build-rustfs.sh --force-console-update # 强制更新 console 资源"
@echo " ./build-rustfs.sh --dev # 开发模式构建"
@echo " ./build-rustfs.sh --sign # 签名二进制文件"
@echo " ./build-rustfs.sh --platform x86_64-unknown-linux-musl # 指定目标平台"
@echo " ./build-rustfs.sh --skip-verification # 跳过二进制验证"
@echo ""
@echo "💡 build-rustfs.sh 脚本提供了更多选项、智能检测和二进制验证功能"
.PHONY: help-docker
help-docker:
@echo "🐳 Docker 多架构构建帮助:"
@echo ""
@echo "基本构建:"
@echo " make docker-build-multiarch # 构建多架构镜像(不推送)"
@echo " make docker-build-multiarch-push # 构建并推送多架构镜像"
@echo "🚀 生产镜像构建 (推荐使用 docker-buildx.sh):"
@echo " make docker-buildx # 构建生产多架构镜像(不推送)"
@echo " make docker-buildx-push # 构建并推送生产多架构镜像"
@echo " make docker-buildx-version VERSION=v1.0.0 # 构建指定版本"
@echo " make docker-buildx-push-version VERSION=v1.0.0 # 构建并推送指定版本"
@echo ""
@echo "版本构建:"
@echo " make docker-build-multiarch-version VERSION=v1.0.0 # 构建指定版本"
@echo " make docker-push-multiarch-version VERSION=v1.0.0 # 构建并推送指定版本"
@echo "🔧 开发/源码镜像构建 (本地开发测试):"
@echo " make docker-dev # 构建开发多架构镜像(无法本地加载)"
@echo " make docker-dev-local # 构建开发单架构镜像(本地加载)"
@echo " make docker-dev-push REGISTRY=xxx # 构建并推送开发镜像"
@echo ""
@echo "镜像类型:"
@echo " make docker-build-ubuntu # 构建 Ubuntu 镜像"
@echo " make docker-build-rockylinux # 构建 RockyLinux 镜像"
@echo " make docker-build-devenv # 构建开发环境镜像"
@echo " make docker-build-all-types # 构建所有类型镜像"
@echo "🏗️ 本地生产镜像构建 (替代方案):"
@echo " make docker-buildx-production-local # 本地构建生产单架构镜像"
@echo ""
@echo "辅助工具:"
@echo "📦 单架构构建 (传统方式):"
@echo " make docker-build-production # 构建单架构生产镜像"
@echo " make docker-build-source # 构建单架构源码镜像"
@echo ""
@echo "🚀 开发环境管理:"
@echo " make dev-env-start # 启动开发容器环境"
@echo " make dev-env-stop # 停止开发容器环境"
@echo " make dev-env-restart # 重启开发容器环境"
@echo ""
@echo "🔧 辅助工具:"
@echo " make build-cross-all # 构建所有架构的二进制文件"
@echo " make docker-inspect-multiarch IMAGE=xxx # 检查镜像的架构支持"
@echo ""
@echo "环境变量 (在推送时需要设置):"
@echo "📋 环境变量:"
@echo " REGISTRY 镜像仓库地址 (推送时需要)"
@echo " DOCKERHUB_USERNAME Docker Hub 用户名"
@echo " DOCKERHUB_TOKEN Docker Hub 访问令牌"
@echo " GITHUB_TOKEN GitHub 访问令牌"
@echo ""
@echo "💡 建议:"
@echo " - 生产用途: 使用 docker-buildx* 命令 (基于预编译二进制)"
@echo " - 本地开发: 使用 docker-dev* 命令 (从源码构建)"
@echo " - 开发环境: 使用 dev-env-* 命令管理开发容器"
.PHONY: help
help:
@echo "🦀 RustFS Makefile 帮助:"
@echo ""
@echo "📋 主要命令分类:"
@echo " make help-build # 显示构建相关帮助"
@echo " make help-docker # 显示 Docker 相关帮助"
@echo ""
@echo "🔧 代码质量:"
@echo " make fmt # 格式化代码"
@echo " make clippy # 运行 clippy 检查"
@echo " make test # 运行测试"
@echo " make pre-commit # 运行所有预提交检查"
@echo ""
@echo "🚀 快速开始:"
@echo " make build # 构建 RustFS 二进制"
@echo " make docker-dev-local # 构建开发 Docker 镜像(本地)"
@echo " make dev-env-start # 启动开发环境"
@echo ""
@echo "💡 更多帮助请使用 'make help-build' 或 'make help-docker'"


@@ -1,14 +1,13 @@
[![RustFS](https://rustfs.com/images/rustfs-github.png)](https://rustfs.com)
<p align="center">RustFS is a high-performance distributed object storage software built using Rust</p>
<p align="center">
<a href="https://github.com/rustfs/rustfs/actions/workflows/ci.yml"><img alt="CI" src="https://github.com/rustfs/rustfs/actions/workflows/ci.yml/badge.svg" /></a>
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<a href="https://hellogithub.com/repository/rustfs/rustfs" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=b95bcb72bdc340b68f16fdf6790b7d5b&claim_uid=MsbvjYeLDKAH457&theme=small" alt="FeaturedHelloGitHub" /></a>
</p>
<p align="center">
@@ -19,20 +18,19 @@
</p>
<p align="center">
English | <a href="https://github.com/rustfs/rustfs/blob/main/README_ZH.md">简体中文</a> |
<!-- Keep these links. Translations will automatically update with the README. -->
<a href="https://readme-i18n.com/rustfs/rustfs?lang=de">Deutsch</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=es">Español</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=fr">français</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ja">日本語</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ko">한국어</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=pt">Português</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=de">Deutsch</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=es">Español</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=fr">français</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ja">日本語</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ko">한국어</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=pt">Português</a> |
<a href="https://readme-i18n.com/rustfs/rustfs?lang=ru">Русский</a>
</p>
RustFS is a high-performance distributed object storage software built using Rust, one of the most popular languages worldwide. Like MinIO, it offers simplicity, S3 compatibility, an open-source nature, and support for data lakes, AI, and big data. In addition, compared with other storage systems it ships under the more permissive and user-friendly Apache license. With Rust as its foundation, RustFS provides faster speed and safer distributed features for high-performance object storage.
> ⚠️ **RustFS is under rapid development. Do NOT use in production environments!**
## Features
@@ -74,7 +72,7 @@ Stress test server parameters
To get started with RustFS, follow these steps:
1. **One-click installation script (Option 1)**
```bash
curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
@@ -83,13 +81,52 @@ To get started with RustFS, follow these steps:
2. **Docker Quick Start (Option 2)**
```bash
podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
# Latest stable release
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:latest
# Development version (main branch)
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:main-latest
# Specific version
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:v1.0.0
```
3. **Build from Source (Option 3) - Advanced Users**
3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console; the default username and password are `rustfsadmin`.
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.
For developers who want to build RustFS Docker images from source with multi-architecture support:
```bash
# Build multi-architecture images locally
./docker-buildx.sh --build-arg RELEASE=latest
# Build and push to registry
./docker-buildx.sh --push
# Build specific version
./docker-buildx.sh --release v1.0.0 --push
# Build for custom registry
./docker-buildx.sh --registry your-registry.com --namespace yourname --push
```
The `docker-buildx.sh` script supports:
- **Multi-architecture builds**: `linux/amd64`, `linux/arm64`
- **Automatic version detection**: Uses git tags or commit hashes
- **Registry flexibility**: Supports Docker Hub, GitHub Container Registry, etc.
- **Build optimization**: Includes caching and parallel builds
You can also use Make targets for convenience:
```bash
make docker-buildx # Build locally
make docker-buildx-push # Build and push
make docker-buildx-version VERSION=v1.0.0 # Build specific version
make help-docker # Show all Docker-related commands
```
4. **Access the Console**: Open your web browser and navigate to `http://localhost:9000` to access the RustFS console; the default username and password are `rustfsadmin`.
5. **Create a Bucket**: Use the console to create a new bucket for your objects.
6. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your RustFS instance.
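
As a minimal sketch of the S3-compatible access mentioned in step 6, the AWS CLI can talk to the local endpoint; the credentials below are the container defaults from the quick start and should be changed in real deployments:

```bash
# Assumes the quick-start defaults: endpoint on :9000, rustfsadmin/rustfsadmin credentials
export AWS_ACCESS_KEY_ID=rustfsadmin
export AWS_SECRET_ACCESS_KEY=rustfsadmin

# Create a bucket, upload a file, and list it against the local endpoint
aws --endpoint-url http://localhost:9000 s3 mb s3://demo
aws --endpoint-url http://localhost:9000 s3 cp ./hello.txt s3://demo/hello.txt
aws --endpoint-url http://localhost:9000 s3 ls s3://demo
```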
## Documentation
@@ -122,7 +159,7 @@ If you have any questions or need assistance, you can:
RustFS is a community-driven project, and we appreciate all contributions. Check out the [Contributors](https://github.com/rustfs/rustfs/graphs/contributors) page to see the amazing people who have helped make RustFS better.
<a href="https://github.com/rustfs/rustfs/graphs/contributors">
<img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
<img src="https://opencollective.com/rustfs/contributors.svg?width=890&limit=500&button=false" />
</a>
## License


@@ -7,6 +7,7 @@
<a href="https://github.com/rustfs/rustfs/actions/workflows/docker.yml"><img alt="Build and Push Docker Images" src="https://github.com/rustfs/rustfs/actions/workflows/docker.yml/badge.svg" /></a>
<img alt="GitHub commit activity" src="https://img.shields.io/github/commit-activity/m/rustfs/rustfs"/>
<img alt="Github Last Commit" src="https://img.shields.io/github/last-commit/rustfs/rustfs"/>
<a href="https://hellogithub.com/repository/rustfs/rustfs" target="_blank"><img src="https://abroad.hellogithub.com/v1/widgets/recommend.svg?rid=b95bcb72bdc340b68f16fdf6790b7d5b&claim_uid=MsbvjYeLDKAH457&theme=small" alt="FeaturedHelloGitHub" /></a>
</p >
<p align="center">
@@ -61,7 +62,7 @@ RustFS is built using Rust, one of the world's most popular programming languages
To get started with RustFS, follow these steps:
1. **One-click quick start script (Option 1)**
```bash
curl -O https://rustfs.com/install_rustfs.sh && bash install_rustfs.sh
@@ -70,11 +71,10 @@ RustFS is built using Rust, one of the world's most popular programming languages
2. **Docker quick start (Option 2)**
```bash
podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs
```
3. **Access the console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console; the default username and password are `rustfsadmin`.
3. **Access the console**: Open your web browser and navigate to `http://localhost:9000` to access the RustFS console; the default username and password are `rustfsadmin`.
4. **Create a bucket**: Use the console to create a new bucket for your objects.
5. **Upload objects**: You can upload files directly through the console, or use S3-compatible APIs to interact with your RustFS instance.
@@ -109,7 +109,7 @@ RustFS is built using Rust, one of the world's most popular programming languages
RustFS is a community-driven project, and we appreciate all contributions. See the [Contributors](https://github.com/rustfs/rustfs/graphs/contributors) page for the outstanding people who have helped make RustFS better.
<a href="https://github.com/rustfs/rustfs/graphs/contributors">
<img src="https://contrib.rocks/image?repo=rustfs/rustfs" />
<img src="https://opencollective.com/rustfs/contributors.svg?width=890&limit=500&button=false" />
</a >
## License

build-rustfs.sh (new executable file)

@@ -0,0 +1,564 @@
#!/bin/bash
# RustFS Binary Build Script
# This script compiles RustFS binaries for different platforms and architectures
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Auto-detect current platform
detect_platform() {
local arch=$(uname -m)
local os=$(uname -s | tr '[:upper:]' '[:lower:]')
case "$os" in
"linux")
case "$arch" in
"x86_64")
echo "x86_64-unknown-linux-musl"
;;
"aarch64"|"arm64")
echo "aarch64-unknown-linux-musl"
;;
"armv7l")
echo "armv7-unknown-linux-musleabihf"
;;
*)
echo "unknown-platform"
;;
esac
;;
"darwin")
case "$arch" in
"x86_64")
echo "x86_64-apple-darwin"
;;
"arm64"|"aarch64")
echo "aarch64-apple-darwin"
;;
*)
echo "unknown-platform"
;;
esac
;;
*)
echo "unknown-platform"
;;
esac
}
# Cross-platform SHA256 checksum generation
generate_sha256() {
local file="$1"
local output_file="$2"
local os=$(uname -s | tr '[:upper:]' '[:lower:]')
case "$os" in
"linux")
if command -v sha256sum &> /dev/null; then
sha256sum "$file" > "$output_file"
elif command -v shasum &> /dev/null; then
shasum -a 256 "$file" > "$output_file"
else
print_message $RED "❌ No SHA256 command found (sha256sum or shasum)"
return 1
fi
;;
"darwin")
if command -v shasum &> /dev/null; then
shasum -a 256 "$file" > "$output_file"
elif command -v sha256sum &> /dev/null; then
sha256sum "$file" > "$output_file"
else
print_message $RED "❌ No SHA256 command found (shasum or sha256sum)"
return 1
fi
;;
*)
# Try common commands in order
if command -v sha256sum &> /dev/null; then
sha256sum "$file" > "$output_file"
elif command -v shasum &> /dev/null; then
shasum -a 256 "$file" > "$output_file"
else
print_message $RED "❌ No SHA256 command found"
return 1
fi
;;
esac
}
# Default values
OUTPUT_DIR="target/release"
PLATFORM=$(detect_platform) # Auto-detect current platform
BINARY_NAME="rustfs"
BUILD_TYPE="release"
SIGN=false
WITH_CONSOLE=true
FORCE_CONSOLE_UPDATE=false
CONSOLE_VERSION="latest"
SKIP_VERIFICATION=false
CUSTOM_PLATFORM=""
# Print usage
usage() {
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Description:"
echo " Build RustFS binary for the current platform. Designed for CI/CD pipelines"
echo " where different runners build platform-specific binaries natively."
echo " Includes automatic verification to ensure the built binary is functional."
echo ""
echo "Options:"
echo " -o, --output-dir DIR Output directory (default: target/release)"
echo " -b, --binary-name NAME Binary name (default: rustfs)"
echo " -p, --platform TARGET Target platform (default: auto-detect)"
echo " --dev Build in dev mode"
echo " --sign Sign binaries after build"
echo " --with-console Download console static assets (default)"
echo " --no-console Skip console static assets"
echo " --force-console-update Force update console assets even if they exist"
echo " --console-version VERSION Console version to download (default: latest)"
echo " --skip-verification Skip binary verification after build"
echo " -h, --help Show this help message"
echo ""
echo "Examples:"
echo " $0 # Build for current platform (includes console assets)"
echo " $0 --dev # Development build"
echo " $0 --sign # Build and sign binary (release CI)"
echo " $0 --no-console # Build without console static assets"
echo " $0 --force-console-update # Force update console assets"
echo " $0 --platform x86_64-unknown-linux-musl # Build for specific platform"
echo " $0 --skip-verification # Skip binary verification (for cross-compilation)"
echo ""
echo "Detected platform: $(detect_platform)"
echo "CI Usage: Run this script on each platform's runner to build native binaries"
}
# Print colored message
print_message() {
local color=$1
local message=$2
echo -e "${color}${message}${NC}"
}
# Get version from git
get_version() {
if git describe --abbrev=0 --tags >/dev/null 2>&1; then
git describe --abbrev=0 --tags
else
git rev-parse --short HEAD
fi
}
# Setup rust environment
setup_rust_environment() {
print_message $BLUE "🔧 Setting up Rust environment..."
# Install required target for current platform
print_message $YELLOW "Installing target: $PLATFORM"
rustup target add "$PLATFORM"
# Set up environment variables for musl targets
if [[ "$PLATFORM" == *"musl"* ]]; then
print_message $YELLOW "Setting up environment for musl target..."
export RUSTFLAGS="-C target-feature=-crt-static"
# For cargo-zigbuild, set up additional environment variables
if command -v cargo-zigbuild &> /dev/null; then
print_message $YELLOW "Configuring cargo-zigbuild for musl target..."
# Set environment variables for better musl support
export CC_x86_64_unknown_linux_musl="zig cc -target x86_64-linux-musl"
export CXX_x86_64_unknown_linux_musl="zig c++ -target x86_64-linux-musl"
export AR_x86_64_unknown_linux_musl="zig ar"
export CARGO_TARGET_X86_64_UNKNOWN_LINUX_MUSL_LINKER="zig cc -target x86_64-linux-musl"
export CC_aarch64_unknown_linux_musl="zig cc -target aarch64-linux-musl"
export CXX_aarch64_unknown_linux_musl="zig c++ -target aarch64-linux-musl"
export AR_aarch64_unknown_linux_musl="zig ar"
export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_MUSL_LINKER="zig cc -target aarch64-linux-musl"
# Set environment variables for zstd-sys to avoid target parsing issues
export ZSTD_SYS_USE_PKG_CONFIG=1
export PKG_CONFIG_ALLOW_CROSS=1
fi
fi
# Install required tools
if [ "$SIGN" = true ]; then
if ! command -v minisign &> /dev/null; then
print_message $YELLOW "Installing minisign for binary signing..."
cargo install minisign
fi
fi
}
# Download console static assets
download_console_assets() {
local static_dir="rustfs/static"
local console_exists=false
# Check if console assets already exist
if [ -d "$static_dir" ] && [ -f "$static_dir/index.html" ]; then
console_exists=true
local static_size=$(du -sh "$static_dir" 2>/dev/null | cut -f1 || echo "unknown")
print_message $YELLOW "Console static assets already exist ($static_size)"
fi
# Determine if we need to download
local should_download=false
if [ "$WITH_CONSOLE" = true ]; then
if [ "$console_exists" = false ]; then
print_message $BLUE "🎨 Console assets not found, downloading..."
should_download=true
elif [ "$FORCE_CONSOLE_UPDATE" = true ]; then
print_message $BLUE "🎨 Force updating console assets..."
should_download=true
else
print_message $GREEN "✅ Console assets already available, skipping download"
fi
else
if [ "$console_exists" = true ]; then
print_message $GREEN "✅ Using existing console assets"
else
print_message $YELLOW "⚠️ Console assets not found. Use --download-console to download them."
fi
fi
if [ "$should_download" = true ]; then
print_message $BLUE "📥 Downloading console static assets..."
# Create static directory
mkdir -p "$static_dir"
# Download from GitHub Releases (consistent with Docker build)
local download_url
if [ "$CONSOLE_VERSION" = "latest" ]; then
print_message $YELLOW "Getting latest console release info..."
# For now, use dl.rustfs.com as fallback until GitHub Releases includes console assets
download_url="https://dl.rustfs.com/artifacts/console/rustfs-console-latest.zip"
else
download_url="https://dl.rustfs.com/artifacts/console/rustfs-console-${CONSOLE_VERSION}.zip"
fi
print_message $YELLOW "Downloading from: $download_url"
# Download with retries
local temp_file="console-assets-temp.zip"
local download_success=false
for i in {1..3}; do
if curl -L "$download_url" -o "$temp_file" --retry 3 --retry-delay 5 --max-time 300; then
download_success=true
break
else
print_message $YELLOW "Download attempt $i failed, retrying..."
sleep 2
fi
done
if [ "$download_success" = true ]; then
# Verify the downloaded file
if [ -f "$temp_file" ] && [ -s "$temp_file" ]; then
print_message $BLUE "📦 Extracting console assets..."
# Extract to static directory
if unzip -o "$temp_file" -d "$static_dir"; then
rm "$temp_file"
local final_size=$(du -sh "$static_dir" 2>/dev/null | cut -f1 || echo "unknown")
print_message $GREEN "✅ Console assets downloaded successfully ($final_size)"
else
print_message $RED "❌ Failed to extract console assets"
rm -f "$temp_file"
return 1
fi
else
print_message $RED "❌ Downloaded file is empty or invalid"
rm -f "$temp_file"
return 1
fi
else
print_message $RED "❌ Failed to download console assets after 3 attempts"
print_message $YELLOW "💡 Console assets are optional. Build will continue without them."
rm -f "$temp_file"
fi
fi
}
# Verify binary functionality
verify_binary() {
local binary_path="$1"
# Check if binary exists
if [ ! -f "$binary_path" ]; then
print_message $RED "❌ Binary file not found: $binary_path"
return 1
fi
# Check if binary is executable
if [ ! -x "$binary_path" ]; then
print_message $RED "❌ Binary is not executable: $binary_path"
return 1
fi
# Check basic functionality - try to run help command
print_message $YELLOW " Testing --help command..."
if ! "$binary_path" --help >/dev/null 2>&1; then
print_message $RED "❌ Binary failed to run --help command"
return 1
fi
# Check version command
print_message $YELLOW " Testing --version command..."
if ! "$binary_path" --version >/dev/null 2>&1; then
print_message $YELLOW "⚠️ Binary does not support --version command (this is optional)"
fi
# Try to get some basic info about the binary
local file_info=$(file "$binary_path" 2>/dev/null || echo "unknown")
print_message $YELLOW " Binary info: $file_info"
# Check if it's a valid ELF/Mach-O binary
if command -v readelf >/dev/null 2>&1; then
if readelf -h "$binary_path" >/dev/null 2>&1; then
print_message $YELLOW " ELF binary structure: valid"
fi
elif command -v otool >/dev/null 2>&1; then
if otool -h "$binary_path" >/dev/null 2>&1; then
print_message $YELLOW " Mach-O binary structure: valid"
fi
fi
return 0
}
# Build binary for current platform
build_binary() {
local version=$(get_version)
local output_file="${OUTPUT_DIR}/${PLATFORM}/${BINARY_NAME}"
print_message $BLUE "🏗️ Building for platform: $PLATFORM"
print_message $YELLOW " Version: $version"
print_message $YELLOW " Output: $output_file"
# Create output directory
mkdir -p "${OUTPUT_DIR}/${PLATFORM}"
# Simple build logic matching the working version (4fb4b353)
# Force rebuild by touching build.rs
touch rustfs/build.rs
# Determine build command based on platform and cross-compilation needs
local build_cmd=""
local current_platform=$(detect_platform)
print_message $BLUE "📦 Using working version build logic..."
# Check if we need cross-compilation
if [ "$PLATFORM" != "$current_platform" ]; then
# Cross-compilation needed
if [[ "$PLATFORM" == *"apple-darwin"* ]]; then
print_message $RED "❌ macOS cross-compilation not supported"
print_message $YELLOW "💡 macOS targets must be built natively on macOS runners"
return 1
elif [[ "$PLATFORM" == *"windows"* ]]; then
# Use cross for Windows ARM64
if ! command -v cross &> /dev/null; then
print_message $YELLOW "📦 Installing cross tool..."
cargo install cross --git https://github.com/cross-rs/cross
fi
build_cmd="cross build"
else
# Use cargo-zigbuild for Linux cross targets such as ARM64 (matches working version)
if ! command -v cargo-zigbuild &> /dev/null; then
print_message $RED "❌ cargo-zigbuild not found. Please install it first."
return 1
fi
build_cmd="cargo zigbuild"
fi
else
# Native compilation
build_cmd="cargo build"
fi
if [ "$BUILD_TYPE" = "release" ]; then
build_cmd+=" --release"
fi
build_cmd+=" --target $PLATFORM"
build_cmd+=" -p rustfs --bins"
print_message $BLUE "📦 Executing: $build_cmd"
# Execute build (this matches exactly what the working version does)
if eval $build_cmd; then
print_message $GREEN "✅ Successfully built for $PLATFORM"
# Copy binary to output directory
cp "target/${PLATFORM}/${BUILD_TYPE}/${BINARY_NAME}" "$output_file"
# Generate checksums
print_message $BLUE "🔐 Generating checksums..."
(cd "${OUTPUT_DIR}/${PLATFORM}" && generate_sha256 "${BINARY_NAME}" "${BINARY_NAME}.sha256sum")
# Verify binary functionality (if not skipped)
if [ "$SKIP_VERIFICATION" = false ]; then
print_message $BLUE "🔍 Verifying binary functionality..."
if verify_binary "$output_file"; then
print_message $GREEN "✅ Binary verification passed"
else
print_message $RED "❌ Binary verification failed"
return 1
fi
else
print_message $YELLOW "⚠️ Binary verification skipped by user request"
fi
# Sign binary if requested
if [ "$SIGN" = true ]; then
print_message $BLUE "✍️ Signing binary..."
(cd "${OUTPUT_DIR}/${PLATFORM}" && minisign -S -m "${BINARY_NAME}" -s ~/.minisign/minisign.key)
fi
print_message $GREEN "✅ Build completed successfully"
else
print_message $RED "❌ Failed to build for $PLATFORM"
return 1
fi
}
# Main build function
build_rustfs() {
local version=$(get_version)
print_message $BLUE "🚀 Starting RustFS binary build process..."
print_message $YELLOW " Version: $version"
print_message $YELLOW " Platform: $PLATFORM"
print_message $YELLOW " Output Directory: $OUTPUT_DIR"
print_message $YELLOW " Build Type: $BUILD_TYPE"
print_message $YELLOW " Sign: $SIGN"
print_message $YELLOW " With Console: $WITH_CONSOLE"
if [ "$WITH_CONSOLE" = true ]; then
print_message $YELLOW " Console Version: $CONSOLE_VERSION"
print_message $YELLOW " Force Console Update: $FORCE_CONSOLE_UPDATE"
fi
print_message $YELLOW " Skip Verification: $SKIP_VERIFICATION"
echo ""
# Setup environment
setup_rust_environment
echo ""
# Download console assets if requested
download_console_assets
echo ""
# Build binary
build_binary
echo ""
print_message $GREEN "🎉 Build process completed successfully!"
# Show built binary
local binary_file="${OUTPUT_DIR}/${PLATFORM}/${BINARY_NAME}"
if [ -f "$binary_file" ]; then
local size=$(ls -lh "$binary_file" | awk '{print $5}')
print_message $BLUE "📋 Built binary: $binary_file ($size)"
fi
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
-o|--output-dir)
OUTPUT_DIR="$2"
shift 2
;;
-b|--binary-name)
BINARY_NAME="$2"
shift 2
;;
-p|--platform)
CUSTOM_PLATFORM="$2"
shift 2
;;
--dev)
BUILD_TYPE="debug"
shift
;;
--sign)
SIGN=true
shift
;;
--with-console)
WITH_CONSOLE=true
shift
;;
--no-console)
WITH_CONSOLE=false
shift
;;
--force-console-update)
FORCE_CONSOLE_UPDATE=true
WITH_CONSOLE=true # Auto-enable download when forcing update
shift
;;
--console-version)
CONSOLE_VERSION="$2"
shift 2
;;
--skip-verification)
SKIP_VERIFICATION=true
shift
;;
-h|--help)
usage
exit 0
;;
*)
print_message $RED "❌ Unknown option: $1"
usage
exit 1
;;
esac
done
# Main execution
main() {
print_message $BLUE "🦀 RustFS Binary Build Script"
echo ""
# Check if we're in a Rust project
if [ ! -f "Cargo.toml" ]; then
print_message $RED "❌ No Cargo.toml found. Are you in a Rust project directory?"
exit 1
fi
# Override platform if specified
if [ -n "$CUSTOM_PLATFORM" ]; then
PLATFORM="$CUSTOM_PLATFORM"
print_message $YELLOW "🎯 Using specified platform: $PLATFORM"
# Auto-enable skip verification for cross-compilation
if [ "$PLATFORM" != "$(detect_platform)" ]; then
SKIP_VERIFICATION=true
print_message $YELLOW "⚠️ Cross-compilation detected, enabling --skip-verification"
fi
fi
# Start build process
build_rustfs
}
# Run main function
main
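
For orientation, a few example invocations of the script above. The file name `build-rustfs.sh` is assumed, and the console version shown is hypothetical; the flags themselves are exactly the ones parsed in the `while` loop.

```bash
# Native release build, bundling console assets
./build-rustfs.sh --with-console

# Cross-compile a musl ARM64 binary; verification is auto-skipped for foreign targets
./build-rustfs.sh --platform aarch64-unknown-linux-musl --output-dir dist

# Signed debug build pinned to a specific (hypothetical) console version
./build-rustfs.sh --dev --sign --console-version 1.0.0
```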


@@ -1,35 +0,0 @@
#!/bin/bash
# Copyright 2024 RustFS Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
clear
# Get the current platform architecture
ARCH=$(uname -m)
# Set the target directory according to the schema
if [ "$ARCH" == "x86_64" ]; then
TARGET_DIR="target/x86_64"
elif [ "$ARCH" == "aarch64" ]; then
TARGET_DIR="target/arm64"
else
TARGET_DIR="target/unknown"
fi
# Set CARGO_TARGET_DIR and build the project
CARGO_TARGET_DIR=$TARGET_DIR RUSTFLAGS="-C link-arg=-fuse-ld=mold" cargo build --release --package rustfs
echo -e "\a"
echo -e "\a"
echo -e "\a"


@@ -26,7 +26,6 @@ dioxus = { workspace = true, features = ["router"] }
dirs = { workspace = true }
hex = { workspace = true }
keyring = { workspace = true }
lazy_static = { workspace = true }
rfd = { workspace = true }
rust-embed = { workspace = true, features = ["interpolate-folder-path"] }
rust-i18n = { workspace = true }


@@ -14,12 +14,12 @@
use crate::utils::RustFSConfig;
use dioxus::logger::tracing::{debug, error, info};
use lazy_static::lazy_static;
use rust_embed::RustEmbed;
use sha2::{Digest, Sha256};
use std::error::Error;
use std::path::{Path, PathBuf};
use std::process::Command as StdCommand;
use std::sync::LazyLock;
use std::time::Duration;
use tokio::fs;
use tokio::fs::File;
@@ -31,15 +31,13 @@ use tokio::sync::{Mutex, mpsc};
#[folder = "$CARGO_MANIFEST_DIR/embedded-rustfs/"]
struct Asset;
// Use `lazy_static` to cache the checksum of embedded resources
lazy_static! {
static ref RUSTFS_HASH: Mutex<String> = {
let rustfs_file = if cfg!(windows) { "rustfs.exe" } else { "rustfs" };
let rustfs_data = Asset::get(rustfs_file).expect("RustFs binary not embedded");
let hash = hex::encode(Sha256::digest(&rustfs_data.data));
Mutex::new(hash)
};
}
// Use `LazyLock` to cache the checksum of embedded resources
static RUSTFS_HASH: LazyLock<Mutex<String>> = LazyLock::new(|| {
let rustfs_file = if cfg!(windows) { "rustfs.exe" } else { "rustfs" };
let rustfs_data = Asset::get(rustfs_file).expect("RustFs binary not embedded");
let hash = hex::encode(Sha256::digest(&rustfs_data.data));
Mutex::new(hash)
});
/// Service command
/// This enum represents the commands that can be sent to the service manager
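
A recurring theme in this changeset is replacing the `lazy_static!` macro with `std::sync::LazyLock` (stable since Rust 1.80), which is why `lazy_static` disappears from several `Cargo.toml` dependency lists here. A minimal sketch of the migration, with hypothetical names:

```rust
use std::sync::LazyLock;

// Before, with the lazy_static crate:
//
//     lazy_static! {
//         static ref GREETING: String = compute_greeting();
//     }
//
// After, std-only -- same lazy, thread-safe, one-time initialization,
// without the extra dependency or macro:
static GREETING: LazyLock<String> = LazyLock::new(compute_greeting);

fn compute_greeting() -> String {
    "hello".to_string()
}

fn main() {
    // Dereferencing runs the initializer exactly once, on first access.
    println!("{}", *GREETING);
}
```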


@@ -1,10 +1,16 @@
[package]
name = "rustfs-ahm"
version = "0.0.3"
edition = "2021"
version.workspace = true
edition.workspace = true
authors = ["RustFS Team"]
license = "Apache-2.0"
license.workspace = true
description = "RustFS AHM (Automatic Health Management) Scanner"
repository.workspace = true
rust-version.workspace = true
homepage.workspace = true
documentation = "https://docs.rs/rustfs-ahm/latest/rustfs_ahm/"
keywords = ["RustFS", "AHM", "health-management", "scanner", "Minio"]
categories = ["web-programming", "development-tools", "filesystem"]
[dependencies]
rustfs-ecstore = { workspace = true }
@@ -31,5 +37,5 @@ lazy_static = { workspace = true }
[dev-dependencies]
rmp-serde = { workspace = true }
tokio-test = "0.4"
serde_json = "1.0"
tokio-test = { workspace = true }
serde_json = { workspace = true }


@@ -20,8 +20,8 @@ pub mod scanner;
pub use error::{Error, Result};
pub use scanner::{
load_data_usage_from_backend, store_data_usage_in_backend, BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo, Scanner,
ScannerMetrics,
BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo, Scanner, ScannerMetrics, load_data_usage_from_backend,
store_data_usage_in_backend,
};
// Global cancellation token for AHM services (scanner and other background tasks)


@@ -1016,8 +1016,8 @@ mod tests {
use rustfs_ecstore::endpoints::{EndpointServerPools, Endpoints, PoolEndpoints};
use rustfs_ecstore::store::ECStore;
use rustfs_ecstore::{
store_api::{MakeBucketOptions, ObjectIO, PutObjReader},
StorageAPI,
store_api::{MakeBucketOptions, ObjectIO, PutObjReader},
};
use std::fs;
use std::net::SocketAddr;


@@ -20,6 +20,6 @@ pub mod metrics;
// Re-export main types for convenience
pub use data_scanner::Scanner;
pub use data_usage::{
load_data_usage_from_backend, store_data_usage_in_backend, BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo,
BucketTargetUsageInfo, BucketUsageInfo, DataUsageInfo, load_data_usage_from_backend, store_data_usage_in_backend,
};
pub use metrics::ScannerMetrics;


@@ -28,6 +28,5 @@ categories = ["web-programming", "development-tools", "data-structures"]
workspace = true
[dependencies]
lazy_static.workspace = true
tokio.workspace = true
tonic = { workspace = true }


@@ -12,19 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
#![allow(non_upper_case_globals)] // FIXME
use std::collections::HashMap;
use std::sync::LazyLock;
use lazy_static::lazy_static;
use tokio::sync::RwLock;
use tonic::transport::Channel;
lazy_static! {
pub static ref GLOBAL_Local_Node_Name: RwLock<String> = RwLock::new("".to_string());
pub static ref GLOBAL_Rustfs_Host: RwLock<String> = RwLock::new("".to_string());
pub static ref GLOBAL_Rustfs_Port: RwLock<String> = RwLock::new("9000".to_string());
pub static ref GLOBAL_Rustfs_Addr: RwLock<String> = RwLock::new("".to_string());
pub static ref GLOBAL_Conn_Map: RwLock<HashMap<String, Channel>> = RwLock::new(HashMap::new());
}
pub static GLOBAL_Local_Node_Name: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Rustfs_Host: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Rustfs_Port: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("9000".to_string()));
pub static GLOBAL_Rustfs_Addr: LazyLock<RwLock<String>> = LazyLock::new(|| RwLock::new("".to_string()));
pub static GLOBAL_Conn_Map: LazyLock<RwLock<HashMap<String, Channel>>> = LazyLock::new(|| RwLock::new(HashMap::new()));
pub async fn set_global_addr(addr: &str) {
*GLOBAL_Rustfs_Addr.write().await = addr.to_string();


@@ -109,8 +109,8 @@ winapi = { workspace = true }
[dev-dependencies]
tokio = { workspace = true, features = ["rt-multi-thread", "macros"] }
criterion = { version = "0.5", features = ["html_reports"] }
temp-env = "0.3.6"
criterion = { workspace = true, features = ["html_reports"] }
temp-env = { workspace = true }
[build-dependencies]
shadow-rs = { workspace = true, features = ["build", "metadata"] }


@@ -43,15 +43,16 @@ pub async fn create_bitrot_reader(
) -> disk::error::Result<Option<BitrotReader<Box<dyn AsyncRead + Send + Sync + Unpin>>>> {
// Calculate the total length to read, including the checksum overhead
let length = length.div_ceil(shard_size) * checksum_algo.size() + length;
let offset = offset.div_ceil(shard_size) * checksum_algo.size() + offset;
if let Some(data) = inline_data {
// Use inline data
let rd = Cursor::new(data.to_vec());
let mut rd = Cursor::new(data.to_vec());
rd.set_position(offset as u64);
let reader = BitrotReader::new(Box::new(rd) as Box<dyn AsyncRead + Send + Sync + Unpin>, shard_size, checksum_algo);
Ok(Some(reader))
} else if let Some(disk) = disk {
// Read from disk
match disk.read_file_stream(bucket, path, offset, length).await {
match disk.read_file_stream(bucket, path, offset, length - offset).await {
Ok(rd) => {
let reader = BitrotReader::new(rd, shard_size, checksum_algo);
Ok(Some(reader))
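
The hunk above fixes two related bugs in `create_bitrot_reader`: the requested `offset` must be shifted by one checksum per shard, exactly as `length` already was, and the disk stream should then be asked for `length - offset` bytes rather than the full adjusted length. A self-contained sketch of that arithmetic, assuming a 32-byte per-shard checksum (HighwayHash256 yields a 256-bit digest):

```rust
// Sketch of the adjusted-offset arithmetic from the hunk above.
// Every shard_size bytes of payload is preceded by a checksum_len-byte
// checksum, so logical positions shift by one checksum per shard.
fn bitrot_read_window(offset: usize, length: usize, shard_size: usize, checksum_len: usize) -> (usize, usize) {
    let adj_length = length.div_ceil(shard_size) * checksum_len + length;
    let adj_offset = offset.div_ceil(shard_size) * checksum_len + offset;
    // The stream starts at adj_offset, so only the remaining bytes are
    // requested -- the `length - offset` in the fix above.
    (adj_offset, adj_length - adj_offset)
}

fn main() {
    // e.g. 1 MiB shards, 32-byte checksums: read 2 MiB starting at shard 1
    let (start, len) = bitrot_read_window(1 << 20, 3 << 20, 1 << 20, 32);
    assert_eq!(start, (1 << 20) + 32);
    assert_eq!(len, (2 << 20) + 2 * 32);
}
```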


@@ -18,7 +18,7 @@ use futures::future::join_all;
use rustfs_filemeta::{MetaCacheEntries, MetaCacheEntry, MetacacheReader, is_io_eof};
use std::{future::Future, pin::Pin, sync::Arc};
use tokio::{spawn, sync::broadcast::Receiver as B_Receiver};
use tracing::error;
use tracing::{error, warn};
pub type AgreedFn = Box<dyn Fn(MetaCacheEntry) -> Pin<Box<dyn Future<Output = ()> + Send>> + Send + 'static>;
pub type PartialFn =
@@ -118,10 +118,14 @@ pub async fn list_path_raw(mut rx: B_Receiver<bool>, opts: ListPathRawOptions) -
if let Some(disk) = d.clone() {
disk
} else {
warn!("list_path_raw: fallback disk is none");
break;
}
}
None => break,
None => {
warn!("list_path_raw: fallback disk is none2");
break;
}
};
match disk
.as_ref()


@@ -288,6 +288,12 @@ impl From<rmp_serde::encode::Error> for DiskError {
}
}
impl From<rmp_serde::decode::Error> for DiskError {
fn from(e: rmp_serde::decode::Error) -> Self {
DiskError::other(e)
}
}
impl From<rmp::encode::ValueWriteError> for DiskError {
fn from(e: rmp::encode::ValueWriteError) -> Self {
DiskError::other(e)


@@ -57,8 +57,8 @@ use bytes::Bytes;
use path_absolutize::Absolutize;
use rustfs_common::defer;
use rustfs_filemeta::{
Cache, FileInfo, FileInfoOpts, FileMeta, MetaCacheEntry, MetacacheWriter, Opts, RawFileInfo, UpdateFn, get_file_info,
read_xl_meta_no_data,
Cache, FileInfo, FileInfoOpts, FileMeta, MetaCacheEntry, MetacacheWriter, ObjectPartInfo, Opts, RawFileInfo, UpdateFn,
get_file_info, read_xl_meta_no_data,
};
use rustfs_utils::HashAlgorithm;
use rustfs_utils::os::get_info;
@@ -1312,6 +1312,67 @@ impl DiskAPI for LocalDisk {
Ok(resp)
}
#[tracing::instrument(skip(self))]
async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
let volume_dir = self.get_bucket_path(bucket)?;
let mut ret = vec![ObjectPartInfo::default(); paths.len()];
for (i, path_str) in paths.iter().enumerate() {
let path = Path::new(path_str);
let file_name = path.file_name().and_then(|v| v.to_str()).unwrap_or_default();
let num = file_name
.strip_prefix("part.")
.and_then(|v| v.strip_suffix(".meta"))
.and_then(|v| v.parse::<usize>().ok())
.unwrap_or_default();
if let Err(err) = access(
volume_dir
.clone()
.join(path.parent().unwrap_or(Path::new("")).join(format!("part.{num}"))),
)
.await
{
ret[i] = ObjectPartInfo {
number: num,
error: Some(err.to_string()),
..Default::default()
};
continue;
}
let data = match self
.read_all_data(bucket, volume_dir.clone(), volume_dir.clone().join(path))
.await
{
Ok(data) => data,
Err(err) => {
ret[i] = ObjectPartInfo {
number: num,
error: Some(err.to_string()),
..Default::default()
};
continue;
}
};
match ObjectPartInfo::unmarshal(&data) {
Ok(meta) => {
ret[i] = meta;
}
Err(err) => {
ret[i] = ObjectPartInfo {
number: num,
error: Some(err.to_string()),
..Default::default()
};
}
};
}
Ok(ret)
}
#[tracing::instrument(skip(self))]
async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp> {
let volume_dir = self.get_bucket_path(volume)?;
@@ -1550,11 +1611,6 @@ impl DiskAPI for LocalDisk {
#[tracing::instrument(level = "debug", skip(self))]
async fn read_file_stream(&self, volume: &str, path: &str, offset: usize, length: usize) -> Result<FileReader> {
// warn!(
// "disk read_file_stream: volume: {}, path: {}, offset: {}, length: {}",
// volume, path, offset, length
// );
let volume_dir = self.get_bucket_path(volume)?;
if !skip_access_checks(volume) {
access(&volume_dir)
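
The new `LocalDisk::read_parts` recovers the part number from the `part.N.meta` file name before falling back to error placeholders. The parsing chain, extracted into a runnable sketch:

```rust
// Minimal sketch of the filename parsing used by read_parts above:
// "part.3.meta" -> 3; anything malformed falls back to 0.
fn part_number(file_name: &str) -> usize {
    file_name
        .strip_prefix("part.")
        .and_then(|v| v.strip_suffix(".meta"))
        .and_then(|v| v.parse::<usize>().ok())
        .unwrap_or_default()
}

fn main() {
    assert_eq!(part_number("part.3.meta"), 3);
    assert_eq!(part_number("part.3"), 0);  // missing .meta suffix
    assert_eq!(part_number("xl.meta"), 0); // not a part file
}
```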


@@ -41,7 +41,7 @@ use endpoint::Endpoint;
use error::DiskError;
use error::{Error, Result};
use local::LocalDisk;
use rustfs_filemeta::{FileInfo, RawFileInfo};
use rustfs_filemeta::{FileInfo, ObjectPartInfo, RawFileInfo};
use rustfs_madmin::info_commands::DiskMetrics;
use serde::{Deserialize, Serialize};
use std::{fmt::Debug, path::PathBuf, sync::Arc};
@@ -331,6 +331,14 @@ impl DiskAPI for Disk {
}
}
#[tracing::instrument(skip(self))]
async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
match self {
Disk::Local(local_disk) => local_disk.read_parts(bucket, paths).await,
Disk::Remote(remote_disk) => remote_disk.read_parts(bucket, paths).await,
}
}
#[tracing::instrument(skip(self))]
async fn rename_part(&self, src_volume: &str, src_path: &str, dst_volume: &str, dst_path: &str, meta: Bytes) -> Result<()> {
match self {
@@ -513,7 +521,7 @@ pub trait DiskAPI: Debug + Send + Sync + 'static {
// CheckParts
async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp>;
// StatInfoFile
// ReadParts
async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>>;
async fn read_multiple(&self, req: ReadMultipleReq) -> Result<Vec<ReadMultipleResp>>;
// CleanAbandonedData
async fn write_all(&self, volume: &str, path: &str, data: Bytes) -> Result<()>;


@@ -22,8 +22,8 @@ use rustfs_protos::{
proto_gen::node_service::{
CheckPartsRequest, DeletePathsRequest, DeleteRequest, DeleteVersionRequest, DeleteVersionsRequest, DeleteVolumeRequest,
DiskInfoRequest, ListDirRequest, ListVolumesRequest, MakeVolumeRequest, MakeVolumesRequest, NsScannerRequest,
ReadAllRequest, ReadMultipleRequest, ReadVersionRequest, ReadXlRequest, RenameDataRequest, RenameFileRequest,
StatVolumeRequest, UpdateMetadataRequest, VerifyFileRequest, WriteAllRequest, WriteMetadataRequest,
ReadAllRequest, ReadMultipleRequest, ReadPartsRequest, ReadVersionRequest, ReadXlRequest, RenameDataRequest,
RenameFileRequest, StatVolumeRequest, UpdateMetadataRequest, VerifyFileRequest, WriteAllRequest, WriteMetadataRequest,
},
};
@@ -44,7 +44,7 @@ use crate::{
heal_commands::{HealScanMode, HealingTracker},
},
};
use rustfs_filemeta::{FileInfo, RawFileInfo};
use rustfs_filemeta::{FileInfo, ObjectPartInfo, RawFileInfo};
use rustfs_protos::proto_gen::node_service::RenamePartRequest;
use rustfs_rio::{HttpReader, HttpWriter};
use tokio::{
@@ -790,6 +790,27 @@ impl DiskAPI for RemoteDisk {
Ok(check_parts_resp)
}
#[tracing::instrument(skip(self))]
async fn read_parts(&self, bucket: &str, paths: &[String]) -> Result<Vec<ObjectPartInfo>> {
let mut client = node_service_time_out_client(&self.addr)
.await
.map_err(|err| Error::other(format!("can not get client, err: {err}")))?;
let request = Request::new(ReadPartsRequest {
disk: self.endpoint.to_string(),
bucket: bucket.to_string(),
paths: paths.to_vec(),
});
let response = client.read_parts(request).await?.into_inner();
if !response.success {
return Err(response.error.unwrap_or_default().into());
}
let read_parts_resp = rmp_serde::from_slice::<Vec<ObjectPartInfo>>(&response.object_part_infos)?;
Ok(read_parts_resp)
}
#[tracing::instrument(skip(self))]
async fn check_parts(&self, volume: &str, path: &str, fi: &FileInfo) -> Result<CheckPartsResp> {
info!("check_parts");


@@ -404,7 +404,42 @@ impl Node for NodeService {
}))
}
}
async fn read_parts(&self, request: Request<ReadPartsRequest>) -> Result<Response<ReadPartsResponse>, Status> {
let request = request.into_inner();
if let Some(disk) = self.find_disk(&request.disk).await {
match disk.read_parts(&request.bucket, &request.paths).await {
Ok(data) => {
let data = match rmp_serde::to_vec(&data) {
Ok(data) => data,
Err(err) => {
return Ok(tonic::Response::new(ReadPartsResponse {
success: false,
object_part_infos: Bytes::new(),
error: Some(DiskError::other(format!("encode data failed: {err}")).into()),
}));
}
};
Ok(tonic::Response::new(ReadPartsResponse {
success: true,
object_part_infos: Bytes::copy_from_slice(&data),
error: None,
}))
}
Err(err) => Ok(tonic::Response::new(ReadPartsResponse {
success: false,
object_part_infos: Bytes::new(),
error: Some(err.into()),
})),
}
} else {
Ok(tonic::Response::new(ReadPartsResponse {
success: false,
object_part_infos: Bytes::new(),
error: Some(DiskError::other("can not find disk".to_string()).into()),
}))
}
}
async fn check_parts(&self, request: Request<CheckPartsRequest>) -> Result<Response<CheckPartsResponse>, Status> {
let request = request.into_inner();
if let Some(disk) = self.find_disk(&request.disk).await {
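
Both ends of the new `ReadParts` RPC shuttle the `Vec<ObjectPartInfo>` as MessagePack inside the `object_part_infos` bytes field: the server handler encodes with `rmp_serde::to_vec`, the `RemoteDisk` client decodes with `rmp_serde::from_slice`. A stripped-down round-trip; the struct here is a simplified stand-in for the real one in rustfs-filemeta:

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Debug, PartialEq, Default, Clone)]
struct ObjectPartInfo {
    number: usize,
    etag: String,
    error: Option<String>,
}

fn main() -> Result<(), rmp_serde::encode::Error> {
    let parts = vec![ObjectPartInfo { number: 1, etag: "abc".into(), error: None }];
    let bytes = rmp_serde::to_vec(&parts)?;             // server side: encode
    let decoded: Vec<ObjectPartInfo> =
        rmp_serde::from_slice(&bytes).expect("decode"); // client side: decode
    assert_eq!(parts, decoded);
    Ok(())
}
```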


@@ -24,13 +24,13 @@ use crate::disk::{
};
use crate::erasure_coding;
use crate::erasure_coding::bitrot_verify;
use crate::error::ObjectApiError;
use crate::error::{Error, Result};
use crate::error::{ObjectApiError, is_err_object_not_found};
use crate::global::GLOBAL_MRFState;
use crate::global::{GLOBAL_LocalNodeName, GLOBAL_TierConfigMgr};
use crate::heal::data_usage_cache::DataUsageCache;
use crate::heal::heal_ops::{HealEntryFn, HealSequence};
use crate::store_api::ObjectToDelete;
use crate::store_api::{ListPartsInfo, ObjectToDelete};
use crate::{
bucket::lifecycle::bucket_lifecycle_ops::{gen_transition_objname, get_transitioned_object_reader, put_restore_opts},
cache_value::metacache_set::{ListPathRawOptions, list_path_raw},
@@ -119,6 +119,7 @@ use tracing::{debug, info, warn};
use uuid::Uuid;
pub const DEFAULT_READ_BUFFER_SIZE: usize = 1024 * 1024;
pub const MAX_PARTS_COUNT: usize = 10000;
#[derive(Debug, Clone)]
pub struct SetDisks {
@@ -316,6 +317,9 @@ impl SetDisks {
.filter(|v| v.as_ref().is_some_and(|d| d.is_local()))
.collect()
}
fn default_read_quorum(&self) -> usize {
self.set_drive_count - self.default_parity_count
}
fn default_write_quorum(&self) -> usize {
let mut data_count = self.set_drive_count - self.default_parity_count;
if data_count == self.default_parity_count {
@@ -550,6 +554,183 @@ impl SetDisks {
}
}
async fn read_parts(
disks: &[Option<DiskStore>],
bucket: &str,
part_meta_paths: &[String],
part_numbers: &[usize],
read_quorum: usize,
) -> disk::error::Result<Vec<ObjectPartInfo>> {
let mut futures = Vec::with_capacity(disks.len());
for (i, disk) in disks.iter().enumerate() {
futures.push(async move {
if let Some(disk) = disk {
disk.read_parts(bucket, part_meta_paths).await
} else {
Err(DiskError::DiskNotFound)
}
});
}
let mut errs = Vec::with_capacity(disks.len());
let mut object_parts = Vec::with_capacity(disks.len());
let results = join_all(futures).await;
for result in results {
match result {
Ok(res) => {
errs.push(None);
object_parts.push(res);
}
Err(e) => {
errs.push(Some(e));
object_parts.push(vec![]);
}
}
}
if let Some(err) = reduce_read_quorum_errs(&errs, OBJECT_OP_IGNORED_ERRS, read_quorum) {
return Err(err);
}
let mut ret = vec![ObjectPartInfo::default(); part_meta_paths.len()];
for (part_idx, part_info) in part_meta_paths.iter().enumerate() {
let mut part_meta_quorum = HashMap::new();
let mut part_infos = Vec::new();
for (j, parts) in object_parts.iter().enumerate() {
if parts.len() != part_meta_paths.len() {
*part_meta_quorum.entry(part_info.clone()).or_insert(0) += 1;
continue;
}
if !parts[part_idx].etag.is_empty() {
*part_meta_quorum.entry(parts[part_idx].etag.clone()).or_insert(0) += 1;
part_infos.push(parts[part_idx].clone());
continue;
}
*part_meta_quorum.entry(part_info.clone()).or_insert(0) += 1;
}
let mut max_quorum = 0;
let mut max_etag = None;
let mut max_part_meta = None;
for (etag, quorum) in part_meta_quorum.iter() {
if quorum > &max_quorum {
max_quorum = *quorum;
max_etag = Some(etag);
max_part_meta = Some(etag);
}
}
let mut found = None;
for info in part_infos.iter() {
if let Some(etag) = max_etag
&& info.etag == *etag
{
found = Some(info.clone());
break;
}
if let Some(part_meta) = max_part_meta
&& info.etag.is_empty()
&& part_meta.ends_with(format!("part.{0}.meta", info.number).as_str())
{
found = Some(info.clone());
break;
}
}
if let (Some(found), Some(max_etag)) = (found, max_etag)
&& !found.etag.is_empty()
&& part_meta_quorum.get(max_etag).unwrap_or(&0) >= &read_quorum
{
ret[part_idx] = found;
} else {
ret[part_idx] = ObjectPartInfo {
number: part_numbers[part_idx],
error: Some(format!("part.{} not found", part_numbers[part_idx])),
..Default::default()
};
}
}
Ok(ret)
}
async fn list_parts(disks: &[Option<DiskStore>], part_path: &str, read_quorum: usize) -> disk::error::Result<Vec<usize>> {
let mut futures = Vec::with_capacity(disks.len());
for (i, disk) in disks.iter().enumerate() {
futures.push(async move {
if let Some(disk) = disk {
disk.list_dir(RUSTFS_META_MULTIPART_BUCKET, RUSTFS_META_MULTIPART_BUCKET, part_path, -1)
.await
} else {
Err(DiskError::DiskNotFound)
}
});
}
let mut errs = Vec::with_capacity(disks.len());
let mut object_parts = Vec::with_capacity(disks.len());
let results = join_all(futures).await;
for result in results {
match result {
Ok(res) => {
errs.push(None);
object_parts.push(res);
}
Err(e) => {
errs.push(Some(e));
object_parts.push(vec![]);
}
}
}
if let Some(err) = reduce_read_quorum_errs(&errs, OBJECT_OP_IGNORED_ERRS, read_quorum) {
return Err(err);
}
let mut part_quorum_map: HashMap<usize, usize> = HashMap::new();
for drive_parts in object_parts {
let mut parts_with_meta_count: HashMap<usize, usize> = HashMap::new();
// part files can be either part.N or part.N.meta
for part_path in drive_parts {
if let Some(num_str) = part_path.strip_prefix("part.") {
if let Some(meta_idx) = num_str.find(".meta") {
if let Ok(part_num) = num_str[..meta_idx].parse::<usize>() {
*parts_with_meta_count.entry(part_num).or_insert(0) += 1;
}
} else if let Ok(part_num) = num_str.parse::<usize>() {
*parts_with_meta_count.entry(part_num).or_insert(0) += 1;
}
}
}
// Include only part.N.meta files with corresponding part.N
for (&part_num, &cnt) in &parts_with_meta_count {
if cnt >= 2 {
*part_quorum_map.entry(part_num).or_insert(0) += 1;
}
}
}
let mut part_numbers = Vec::with_capacity(part_quorum_map.len());
for (part_num, count) in part_quorum_map {
if count >= read_quorum {
part_numbers.push(part_num);
}
}
part_numbers.sort();
Ok(part_numbers)
}
#[tracing::instrument(skip(disks, meta))]
async fn rename_part(
disks: &[Option<DiskStore>],
@@ -1942,6 +2123,8 @@ impl SetDisks {
let till_offset = erasure.shard_file_offset(part_offset, part_length, part_size);
let read_offset = (part_offset / erasure.block_size) * erasure.shard_size();
let mut readers = Vec::with_capacity(disks.len());
let mut errors = Vec::with_capacity(disks.len());
for (idx, disk_op) in disks.iter().enumerate() {
@@ -1950,7 +2133,7 @@ impl SetDisks {
disk_op.as_ref(),
bucket,
&format!("{}/{}/part.{}", object, files[idx].data_dir.unwrap_or_default(), part_number),
part_offset,
read_offset,
till_offset,
erasure.shard_size(),
HashAlgorithm::HighwayHash256,
@@ -4884,7 +5067,7 @@ impl StorageAPI for SetDisks {
) -> Result<PartInfo> {
let upload_id_path = Self::get_upload_id_dir(bucket, object, upload_id);
let (mut fi, _) = self.check_upload_id_exists(bucket, object, upload_id, true).await?;
let (fi, _) = self.check_upload_id_exists(bucket, object, upload_id, true).await?;
let write_quorum = fi.write_quorum(self.default_write_quorum());
@@ -5037,9 +5220,9 @@ impl StorageAPI for SetDisks {
// debug!("put_object_part part_info {:?}", part_info);
fi.parts = vec![part_info];
// fi.parts = vec![part_info.clone()];
let fi_buff = fi.marshal_msg()?;
let part_info_buff = part_info.marshal_msg()?;
drop(writers); // drop writers to close all files
@@ -5050,7 +5233,7 @@ impl StorageAPI for SetDisks {
&tmp_part_path,
RUSTFS_META_MULTIPART_BUCKET,
&part_path,
fi_buff.into(),
part_info_buff.into(),
write_quorum,
)
.await?;
@@ -5068,6 +5251,123 @@ impl StorageAPI for SetDisks {
Ok(ret)
}
#[tracing::instrument(skip(self))]
async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
mut max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo> {
let (fi, _) = self.check_upload_id_exists(bucket, object, upload_id, false).await?;
let upload_id_path = Self::get_upload_id_dir(bucket, object, upload_id);
if max_parts > MAX_PARTS_COUNT {
max_parts = MAX_PARTS_COUNT;
}
let part_number_marker = part_number_marker.unwrap_or_default();
let mut ret = ListPartsInfo {
bucket: bucket.to_owned(),
object: object.to_owned(),
upload_id: upload_id.to_owned(),
max_parts,
part_number_marker,
user_defined: fi.metadata.clone(),
..Default::default()
};
if max_parts == 0 {
return Ok(ret);
}
let online_disks = self.get_disks_internal().await;
let read_quorum = fi.read_quorum(self.default_read_quorum());
let part_path = format!(
"{}{}",
path_join_buf(&[
&upload_id_path,
fi.data_dir.map(|v| v.to_string()).unwrap_or_default().as_str(),
]),
SLASH_SEPARATOR
);
let mut part_numbers = match Self::list_parts(&online_disks, &part_path, read_quorum).await {
Ok(parts) => parts,
Err(err) => {
if err == DiskError::FileNotFound {
return Ok(ret);
}
return Err(to_object_err(err.into(), vec![bucket, object]));
}
};
if part_numbers.is_empty() {
return Ok(ret);
}
let start_op = part_numbers.iter().find(|&&v| v != 0 && v == part_number_marker);
if part_number_marker > 0 && start_op.is_none() {
return Ok(ret);
}
if let Some(start) = start_op {
if start + 1 > part_numbers.len() {
return Ok(ret);
}
part_numbers = part_numbers[start + 1..].to_vec();
}
let mut parts = Vec::with_capacity(part_numbers.len());
let part_meta_paths = part_numbers
.iter()
.map(|v| format!("{part_path}part.{v}.meta"))
.collect::<Vec<String>>();
let object_parts =
Self::read_parts(&online_disks, RUSTFS_META_MULTIPART_BUCKET, &part_meta_paths, &part_numbers, read_quorum)
.await
.map_err(|e| to_object_err(e.into(), vec![bucket, object, upload_id]))?;
let mut count = max_parts;
for (i, part) in object_parts.iter().enumerate() {
if let Some(err) = &part.error {
warn!("list_object_parts part error: {:?}", &err);
}
parts.push(PartInfo {
etag: Some(part.etag.clone()),
part_num: part.number,
last_mod: part.mod_time,
size: part.size,
actual_size: part.actual_size,
});
count -= 1;
if count == 0 {
break;
}
}
ret.parts = parts;
if object_parts.len() > ret.parts.len() {
ret.is_truncated = true;
ret.next_part_number_marker = ret.parts.last().map(|v| v.part_num).unwrap_or_default();
}
Ok(ret)
}
#[tracing::instrument(skip(self))]
async fn list_multipart_uploads(
&self,
@@ -5143,8 +5443,8 @@ impl StorageAPI for SetDisks {
let splits: Vec<&str> = upload_id.split("x").collect();
if splits.len() == 2 {
if let Ok(unix) = splits[1].parse::<i64>() {
OffsetDateTime::from_unix_timestamp(unix)?
if let Ok(unix) = splits[1].parse::<i128>() {
OffsetDateTime::from_unix_timestamp_nanos(unix)?
} else {
now
}
@@ -5363,49 +5663,31 @@ impl StorageAPI for SetDisks {
let part_path = format!("{}/{}/", upload_id_path, fi.data_dir.unwrap_or(Uuid::nil()));
let files: Vec<String> = uploaded_parts.iter().map(|v| format!("part.{}.meta", v.part_num)).collect();
let part_meta_paths = uploaded_parts
.iter()
.map(|v| format!("{part_path}part.{0}.meta", v.part_num))
.collect::<Vec<String>>();
// readMultipleFiles
let part_numbers = uploaded_parts.iter().map(|v| v.part_num).collect::<Vec<usize>>();
let req = ReadMultipleReq {
bucket: RUSTFS_META_MULTIPART_BUCKET.to_string(),
prefix: part_path,
files,
max_size: 1 << 20,
metadata_only: true,
abort404: true,
max_results: 0,
};
let object_parts =
Self::read_parts(&disks, RUSTFS_META_MULTIPART_BUCKET, &part_meta_paths, &part_numbers, write_quorum).await?;
let part_files_resp = Self::read_multiple_files(&disks, req, write_quorum).await;
if part_files_resp.len() != uploaded_parts.len() {
if object_parts.len() != uploaded_parts.len() {
return Err(Error::other("part result number err"));
}
for (i, res) in part_files_resp.iter().enumerate() {
let part_id = uploaded_parts[i].part_num;
if !res.error.is_empty() || !res.exists {
error!("complete_multipart_upload part_id err {:?}, exists={}", res, res.exists);
return Err(Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned()));
for (i, part) in object_parts.iter().enumerate() {
if let Some(err) = &part.error {
error!("complete_multipart_upload part error: {:?}", &err);
}
let part_fi = FileInfo::unmarshal(&res.data).map_err(|e| {
if uploaded_parts[i].part_num != part.number {
error!(
"complete_multipart_upload FileInfo::unmarshal err {:?}, part_id={}, bucket={}, object={}",
e, part_id, bucket, object
"complete_multipart_upload part_id err part_id != part_num {} != {}",
uploaded_parts[i].part_num, part.number
);
Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned())
})?;
let part = &part_fi.parts[0];
let part_num = part.number;
// debug!("complete part {} file info {:?}", part_num, &part_fi);
// debug!("complete part {} object info {:?}", part_num, &part);
if part_id != part_num {
error!("complete_multipart_upload part_id err part_id != part_num {} != {}", part_id, part_num);
return Err(Error::InvalidPart(part_id, bucket.to_owned(), object.to_owned()));
return Err(Error::InvalidPart(uploaded_parts[i].part_num, bucket.to_owned(), object.to_owned()));
}
fi.add_object_part(
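
Within the large `SetDisks` diff above, `list_parts` decides which part numbers exist via a two-level vote: a drive only counts as having part N when both `part.N` and `part.N.meta` are present (the `cnt >= 2` check), and a part is reported only when at least `read_quorum` drives agree. A simplified, runnable sketch of that counting:

```rust
use std::collections::HashMap;

// Each inner Vec is one drive's directory listing for the upload.
fn quorum_parts(drives: &[Vec<&str>], read_quorum: usize) -> Vec<usize> {
    let mut part_quorum: HashMap<usize, usize> = HashMap::new();
    for drive in drives {
        // Count part.N and part.N.meta occurrences per part number.
        let mut seen: HashMap<usize, usize> = HashMap::new();
        for name in drive {
            if let Some(rest) = name.strip_prefix("part.") {
                let num = rest.strip_suffix(".meta").unwrap_or(rest);
                if let Ok(n) = num.parse::<usize>() {
                    *seen.entry(n).or_insert(0) += 1;
                }
            }
        }
        // A drive votes for part N only when both files are present.
        for (&n, &cnt) in &seen {
            if cnt >= 2 {
                *part_quorum.entry(n).or_insert(0) += 1;
            }
        }
    }
    let mut parts: Vec<usize> = part_quorum
        .into_iter()
        .filter(|&(_, votes)| votes >= read_quorum)
        .map(|(n, _)| n)
        .collect();
    parts.sort();
    parts
}

fn main() {
    let drives = vec![
        vec!["part.1", "part.1.meta", "part.2"], // part.2 lacks its .meta
        vec!["part.1", "part.1.meta"],
    ];
    assert_eq!(quorum_parts(&drives, 2), vec![1]);
}
```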


@@ -17,6 +17,7 @@ use std::{collections::HashMap, sync::Arc};
use crate::disk::error_reduce::count_errs;
use crate::error::{Error, Result};
use crate::store_api::ListPartsInfo;
use crate::{
disk::{
DiskAPI, DiskInfo, DiskOption, DiskStore,
@@ -619,6 +620,20 @@ impl StorageAPI for Sets {
Ok((del_objects, del_errs))
}
async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo> {
self.get_disks_by_key(object)
.list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
.await
}
#[tracing::instrument(skip(self))]
async fn list_multipart_uploads(
&self,

View File

@@ -38,7 +38,7 @@ use crate::new_object_layer_fn;
use crate::notification_sys::get_global_notification_sys;
use crate::pools::PoolMeta;
use crate::rebalance::RebalanceMeta;
use crate::store_api::{ListMultipartsInfo, ListObjectVersionsInfo, MultipartInfo, ObjectIO};
use crate::store_api::{ListMultipartsInfo, ListObjectVersionsInfo, ListPartsInfo, MultipartInfo, ObjectIO};
use crate::store_init::{check_disk_fatal_errs, ec_drives_no_config};
use crate::{
bucket::{lifecycle::bucket_lifecycle_ops::TransitionState, metadata::BucketMetadata},
@@ -1810,6 +1810,47 @@ impl StorageAPI for ECStore {
Ok((del_objects, del_errs))
}
#[tracing::instrument(skip(self))]
async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo> {
check_list_parts_args(bucket, object, upload_id)?;
// TODO: nslock
if self.single_pool() {
return self.pools[0]
.list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
.await;
}
for pool in self.pools.iter() {
if self.is_suspended(pool.pool_idx).await {
continue;
}
match pool
.list_object_parts(bucket, object, upload_id, part_number_marker, max_parts, opts)
.await
{
Ok(res) => return Ok(res),
Err(err) => {
if is_err_invalid_upload_id(&err) {
continue;
}
return Err(err);
}
};
}
Err(StorageError::InvalidUploadID(bucket.to_owned(), object.to_owned(), upload_id.to_owned()))
}
#[tracing::instrument(skip(self))]
async fn list_multipart_uploads(
&self,
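
`ECStore::list_object_parts` walks the pools in order, skipping suspended pools, and treats only `InvalidUploadID` as "try the next pool"; any other error aborts the search. A toy model of that fallback policy:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum StoreError {
    InvalidUploadId,
    Io,
}

// Each pool is (is_suspended, result of querying that pool).
fn first_owning_pool(pools: &[(bool, Result<u32, StoreError>)]) -> Result<u32, StoreError> {
    for &(suspended, result) in pools {
        if suspended {
            continue;
        }
        match result {
            Ok(v) => return Ok(v),
            Err(StoreError::InvalidUploadId) => continue, // upload may live in another pool
            Err(e) => return Err(e),                      // real failure: stop searching
        }
    }
    Err(StoreError::InvalidUploadId) // no pool knew this upload id
}

fn main() {
    let pools = [
        (true, Ok(1)),                             // suspended: skipped
        (false, Err(StoreError::InvalidUploadId)), // wrong pool: keep going
        (false, Ok(42)),                           // owning pool
    ];
    assert_eq!(first_owning_pool(&pools), Ok(42));
}
```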


@@ -201,7 +201,7 @@ impl GetObjectReader {
}
}
#[derive(Debug)]
#[derive(Debug, Clone)]
pub struct HTTPRangeSpec {
pub is_suffix_length: bool,
pub start: i64,
@@ -548,6 +548,7 @@ impl ObjectInfo {
mod_time: part.mod_time,
checksums: part.checksums.clone(),
number: part.number,
error: part.error.clone(),
})
.collect();
@@ -844,6 +845,48 @@ pub struct ListMultipartsInfo {
// encoding_type: String, // Not supported yet.
}
/// ListPartsInfo - represents list of all parts.
#[derive(Debug, Clone, Default)]
pub struct ListPartsInfo {
/// Name of the bucket.
pub bucket: String,
/// Name of the object.
pub object: String,
/// Upload ID identifying the multipart upload whose parts are being listed.
pub upload_id: String,
/// The class of storage used to store the object.
pub storage_class: String,
/// Part number after which listing begins.
pub part_number_marker: usize,
/// When a list is truncated, this element specifies the last part in the list,
/// as well as the value to use for the part-number-marker request parameter
/// in a subsequent request.
pub next_part_number_marker: usize,
/// Maximum number of parts that were allowed in the response.
pub max_parts: usize,
/// Indicates whether the returned list of parts is truncated.
pub is_truncated: bool,
/// List of all parts.
pub parts: Vec<PartInfo>,
/// Any metadata set during InitMultipartUpload, including encryption headers.
pub user_defined: HashMap<String, String>,
/// ChecksumAlgorithm if set
pub checksum_algorithm: String,
/// ChecksumType if set
pub checksum_type: String,
}
#[derive(Debug, Default, Clone)]
pub struct ObjectToDelete {
pub object_name: String,
@@ -923,10 +966,7 @@ pub trait StorageAPI: ObjectIO {
) -> Result<ListObjectVersionsInfo>;
// Walk TODO:
// GetObjectNInfo ObjectIO
async fn get_object_info(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<ObjectInfo>;
// PutObject ObjectIO
// CopyObject
async fn copy_object(
&self,
src_bucket: &str,
@@ -949,7 +989,6 @@ pub trait StorageAPI: ObjectIO {
// TransitionObject TODO:
// RestoreTransitionedObject TODO:
// ListMultipartUploads
async fn list_multipart_uploads(
&self,
bucket: &str,
@@ -960,7 +999,6 @@ pub trait StorageAPI: ObjectIO {
max_uploads: usize,
) -> Result<ListMultipartsInfo>;
async fn new_multipart_upload(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<MultipartUploadResult>;
// CopyObjectPart
async fn copy_object_part(
&self,
src_bucket: &str,
@@ -984,7 +1022,6 @@ pub trait StorageAPI: ObjectIO {
data: &mut PutObjReader,
opts: &ObjectOptions,
) -> Result<PartInfo>;
// GetMultipartInfo
async fn get_multipart_info(
&self,
bucket: &str,
@@ -992,7 +1029,15 @@ pub trait StorageAPI: ObjectIO {
upload_id: &str,
opts: &ObjectOptions,
) -> Result<MultipartInfo>;
// ListObjectParts
async fn list_object_parts(
&self,
bucket: &str,
object: &str,
upload_id: &str,
part_number_marker: Option<usize>,
max_parts: usize,
opts: &ObjectOptions,
) -> Result<ListPartsInfo>;
async fn abort_multipart_upload(&self, bucket: &str, object: &str, upload_id: &str, opts: &ObjectOptions) -> Result<()>;
async fn complete_multipart_upload(
self: Arc<Self>,
@@ -1002,13 +1047,10 @@ pub trait StorageAPI: ObjectIO {
uploaded_parts: Vec<CompletePart>,
opts: &ObjectOptions,
) -> Result<ObjectInfo>;
// GetDisks
async fn get_disks(&self, pool_idx: usize, set_idx: usize) -> Result<Vec<Option<DiskStore>>>;
// SetDriveCounts
fn set_drive_counts(&self) -> Vec<usize>;
// Health TODO:
// PutObjectMetadata
async fn put_object_metadata(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<ObjectInfo>;
// DecomTieredObject
async fn get_object_tags(&self, bucket: &str, object: &str, opts: &ObjectOptions) -> Result<String>;
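
The new `list_object_parts` trait method paginates with `part_number_marker`, `next_part_number_marker`, and `is_truncated`. A toy, synchronous stub showing how a caller pages through all parts; the real method is async and also takes bucket, object, upload_id, and `ObjectOptions`:

```rust
struct ListPartsInfo {
    parts: Vec<usize>, // part numbers only, for brevity
    is_truncated: bool,
    next_part_number_marker: usize,
}

// Stub standing in for StorageAPI::list_object_parts.
fn list_object_parts(all: &[usize], marker: Option<usize>, max_parts: usize) -> ListPartsInfo {
    // Listing resumes after the marker part, mirroring the marker semantics above.
    let start = marker
        .map(|m| all.iter().position(|&p| p == m).map(|i| i + 1).unwrap_or(all.len()))
        .unwrap_or(0);
    let page: Vec<usize> = all[start..].iter().take(max_parts).copied().collect();
    let is_truncated = start + page.len() < all.len();
    let next = page.last().copied().unwrap_or_default();
    ListPartsInfo { parts: page, is_truncated, next_part_number_marker: next }
}

fn main() {
    let all = vec![1, 2, 3, 4, 5];
    let mut marker = None;
    let mut seen = Vec::new();
    loop {
        let page = list_object_parts(&all, marker, 2);
        seen.extend(&page.parts);
        if !page.is_truncated {
            break;
        }
        marker = Some(page.next_part_number_marker);
    }
    assert_eq!(seen, all);
}
```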


@@ -46,6 +46,20 @@ pub struct ObjectPartInfo {
pub index: Option<Bytes>,
// Checksums holds checksums of the part
pub checksums: Option<HashMap<String, String>>,
pub error: Option<String>,
}
impl ObjectPartInfo {
pub fn marshal_msg(&self) -> Result<Vec<u8>> {
let mut buf = Vec::new();
self.serialize(&mut Serializer::new(&mut buf))?;
Ok(buf)
}
pub fn unmarshal(buf: &[u8]) -> Result<Self> {
let t: ObjectPartInfo = rmp_serde::from_slice(buf)?;
Ok(t)
}
}
#[derive(Serialize, Deserialize, Debug, PartialEq, Default, Clone)]
@@ -287,6 +301,7 @@ impl FileInfo {
actual_size,
index,
checksums: None,
error: None,
};
for p in self.parts.iter_mut() {


@@ -45,7 +45,6 @@ base64-simd = { workspace = true }
jsonwebtoken = { workspace = true }
tracing.workspace = true
rustfs-madmin.workspace = true
lazy_static.workspace = true
rustfs-utils = { workspace = true, features = ["path"] }
[dev-dependencies]


@@ -13,6 +13,7 @@
// limitations under the License.
use crate::error::{Error, Result, is_err_config_not_found};
use crate::sys::get_claims_from_token_with_secret;
use crate::{
cache::{Cache, CacheEntity},
error::{Error as IamError, is_err_no_such_group, is_err_no_such_policy, is_err_no_such_user},
@@ -26,7 +27,7 @@ use rustfs_ecstore::global::get_global_action_cred;
use rustfs_madmin::{AccountStatus, AddOrUpdateUserReq, GroupDesc};
use rustfs_policy::{
arn::ARN,
auth::{self, Credentials, UserIdentity, get_claims_from_token_with_secret, is_secret_key_valid, jwt_sign},
auth::{self, Credentials, UserIdentity, is_secret_key_valid, jwt_sign},
format::Format,
policy::{
EMBEDDED_POLICY_TYPE, INHERITED_POLICY_TYPE, Policy, PolicyDoc, default::DEFAULT_POLICIES, iam_policy_claim_name_sa,


@@ -20,7 +20,6 @@ use crate::{
manager::{extract_jwt_claims, get_default_policyes},
};
use futures::future::join_all;
use lazy_static::lazy_static;
use rustfs_ecstore::{
config::{
RUSTFS_CONFIG_PREFIX,
@@ -34,25 +33,28 @@ use rustfs_ecstore::{
use rustfs_policy::{auth::UserIdentity, policy::PolicyDoc};
use rustfs_utils::path::{SLASH_SEPARATOR, path_join_buf};
use serde::{Serialize, de::DeserializeOwned};
use std::sync::LazyLock;
use std::{collections::HashMap, sync::Arc};
use tokio::sync::broadcast::{self, Receiver as B_Receiver};
use tokio::sync::mpsc::{self, Sender};
use tracing::{debug, info, warn};
lazy_static! {
pub static ref IAM_CONFIG_PREFIX: String = format!("{}/iam", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_USERS_PREFIX: String = format!("{}/iam/users/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_SERVICE_ACCOUNTS_PREFIX: String = format!("{}/iam/service-accounts/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_GROUPS_PREFIX: String = format!("{}/iam/groups/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICIES_PREFIX: String = format!("{}/iam/policies/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_STS_PREFIX: String = format!("{}/iam/sts/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICY_DB_PREFIX: String = format!("{}/iam/policydb/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICY_DB_USERS_PREFIX: String = format!("{}/iam/policydb/users/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICY_DB_STS_USERS_PREFIX: String = format!("{}/iam/policydb/sts-users/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICY_DB_SERVICE_ACCOUNTS_PREFIX: String =
format!("{}/iam/policydb/service-accounts/", RUSTFS_CONFIG_PREFIX);
pub static ref IAM_CONFIG_POLICY_DB_GROUPS_PREFIX: String = format!("{}/iam/policydb/groups/", RUSTFS_CONFIG_PREFIX);
}
pub static IAM_CONFIG_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam"));
pub static IAM_CONFIG_USERS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/users/"));
pub static IAM_CONFIG_SERVICE_ACCOUNTS_PREFIX: LazyLock<String> =
LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/service-accounts/"));
pub static IAM_CONFIG_GROUPS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/groups/"));
pub static IAM_CONFIG_POLICIES_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policies/"));
pub static IAM_CONFIG_STS_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/sts/"));
pub static IAM_CONFIG_POLICY_DB_PREFIX: LazyLock<String> = LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/"));
pub static IAM_CONFIG_POLICY_DB_USERS_PREFIX: LazyLock<String> =
LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/users/"));
pub static IAM_CONFIG_POLICY_DB_STS_USERS_PREFIX: LazyLock<String> =
LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/sts-users/"));
pub static IAM_CONFIG_POLICY_DB_SERVICE_ACCOUNTS_PREFIX: LazyLock<String> =
LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/service-accounts/"));
pub static IAM_CONFIG_POLICY_DB_GROUPS_PREFIX: LazyLock<String> =
LazyLock::new(|| format!("{RUSTFS_CONFIG_PREFIX}/iam/policydb/groups/"));
const IAM_IDENTITY_FILE: &str = "identity.json";
const IAM_POLICY_FILE: &str = "policy.json";


@@ -23,6 +23,7 @@ use crate::store::GroupInfo;
use crate::store::MappedPolicy;
use crate::store::Store;
use crate::store::UserType;
use crate::utils::extract_claims;
use rustfs_ecstore::global::get_global_action_cred;
use rustfs_madmin::AddOrUpdateUserReq;
use rustfs_madmin::GroupDesc;
@@ -542,7 +543,7 @@ impl<T: Store> IamSys<T> {
}
};
if policies.is_empty() {
if !is_owner && policies.is_empty() {
return false;
}
@@ -732,3 +733,18 @@ pub struct UpdateServiceAccountOpts {
pub expiration: Option<OffsetDateTime>,
pub status: Option<String>,
}
pub fn get_claims_from_token_with_secret(token: &str, secret: &str) -> Result<HashMap<String, Value>> {
let mut ms =
extract_claims::<HashMap<String, Value>>(token, secret).map_err(|e| Error::other(format!("extract claims err {e}")))?;
if let Some(session_policy) = ms.claims.get(SESSION_POLICY_NAME) {
let policy_str = session_policy.as_str().unwrap_or_default();
let policy = base64_decode(policy_str.as_bytes()).map_err(|e| Error::other(format!("base64 decode err {e}")))?;
ms.claims.insert(
SESSION_POLICY_NAME_EXTRACTED.to_string(),
Value::String(String::from_utf8(policy).map_err(|e| Error::other(format!("utf8 decode err {e}")))?),
);
}
Ok(ms.claims)
}
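
The relocated `get_claims_from_token_with_secret` now expands an embedded session policy: if the claims carry a base64-encoded policy document, it is decoded and stored under a separate extracted key. A sketch of just that expansion step, using the `base64` crate and illustrative key names in place of `SESSION_POLICY_NAME` / `SESSION_POLICY_NAME_EXTRACTED`:

```rust
use base64::{engine::general_purpose::STANDARD, Engine as _};
use std::collections::HashMap;

fn expand_session_policy(claims: &mut HashMap<String, String>) -> Result<(), String> {
    // "sessionPolicy" / "sessionPolicy-extracted" are illustrative names;
    // the real constants live in the IAM crate.
    if let Some(encoded) = claims.get("sessionPolicy").cloned() {
        let bytes = STANDARD
            .decode(encoded.as_bytes())
            .map_err(|e| format!("base64 decode err {e}"))?;
        let json = String::from_utf8(bytes).map_err(|e| format!("utf8 decode err {e}"))?;
        claims.insert("sessionPolicy-extracted".to_string(), json);
    }
    Ok(())
}

fn main() {
    let mut claims = HashMap::new();
    claims.insert("sessionPolicy".to_string(), STANDARD.encode(r#"{"Version":"2012-10-17"}"#));
    expand_session_policy(&mut claims).unwrap();
    assert!(claims["sessionPolicy-extracted"].contains("2012-10-17"));
}
```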


@@ -30,7 +30,6 @@ workspace = true
[dependencies]
async-trait.workspace = true
lazy_static.workspace = true
rustfs-protos.workspace = true
rand.workspace = true
serde.workspace = true


@@ -14,12 +14,12 @@
// limitations under the License.
use async_trait::async_trait;
use lazy_static::lazy_static;
use local_locker::LocalLocker;
use lock_args::LockArgs;
use remote_client::RemoteClient;
use std::io::Result;
use std::sync::Arc;
use std::sync::LazyLock;
use tokio::sync::RwLock;
pub mod drwmutex;
@@ -29,9 +29,7 @@ pub mod lrwmutex;
pub mod namespace_lock;
pub mod remote_client;
lazy_static! {
pub static ref GLOBAL_LOCAL_SERVER: Arc<RwLock<LocalLocker>> = Arc::new(RwLock::new(LocalLocker::new()));
}
pub static GLOBAL_LOCAL_SERVER: LazyLock<Arc<RwLock<LocalLocker>>> = LazyLock::new(|| Arc::new(RwLock::new(LocalLocker::new())));
type LockClient = dyn Locker;


@@ -16,8 +16,6 @@ use crate::error::Error as IamError;
use crate::error::{Error, Result};
use crate::policy::{INHERITED_POLICY_TYPE, Policy, Validator, iam_policy_claim_name_sa};
use crate::utils;
use crate::utils::extract_claims;
use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use serde_json::{Value, json};
use std::collections::HashMap;
@@ -253,12 +251,6 @@ pub fn create_new_credentials_with_metadata(
})
}
pub fn get_claims_from_token_with_secret<T: DeserializeOwned>(token: &str, secret: &str) -> Result<T> {
let ms = extract_claims::<T>(token, secret)?;
// TODO SessionPolicyName
Ok(ms.claims)
}
pub fn jwt_sign<T: Serialize>(claims: &T, token_secret: &str) -> Result<String> {
let token = utils::generate_jwt(claims, token_secret)?;
Ok(token)


@@ -1,15 +1 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod models;


@@ -1,17 +1,3 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// automatically generated by the FlatBuffers compiler, do not modify
// @generated


@@ -1,17 +1,4 @@
#![allow(unused_imports)]
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(clippy::all)]
pub mod proto_gen;


@@ -1,15 +1 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod node_service;


@@ -1,17 +1,3 @@
// Copyright 2024 RustFS Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// This file is @generated by prost-build.
/// --------------------------------------------------------------------
#[derive(Clone, PartialEq, ::prost::Message)]
@@ -184,6 +170,24 @@ pub struct VerifyFileResponse {
pub error: ::core::option::Option<Error>,
}
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct ReadPartsRequest {
#[prost(string, tag = "1")]
pub disk: ::prost::alloc::string::String,
#[prost(string, tag = "2")]
pub bucket: ::prost::alloc::string::String,
#[prost(string, repeated, tag = "3")]
pub paths: ::prost::alloc::vec::Vec<::prost::alloc::string::String>,
}
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct ReadPartsResponse {
#[prost(bool, tag = "1")]
pub success: bool,
#[prost(bytes = "bytes", tag = "2")]
pub object_part_infos: ::prost::bytes::Bytes,
#[prost(message, optional, tag = "3")]
pub error: ::core::option::Option<Error>,
}
#[derive(Clone, PartialEq, ::prost::Message)]
pub struct CheckPartsRequest {
/// indicate which one in the disks
#[prost(string, tag = "1")]
@@ -1295,6 +1299,21 @@ pub mod node_service_client {
.insert(GrpcMethod::new("node_service.NodeService", "VerifyFile"));
self.inner.unary(req, path, codec).await
}
pub async fn read_parts(
&mut self,
request: impl tonic::IntoRequest<super::ReadPartsRequest>,
) -> std::result::Result<tonic::Response<super::ReadPartsResponse>, tonic::Status> {
self.inner
.ready()
.await
.map_err(|e| tonic::Status::unknown(format!("Service was not ready: {}", e.into())))?;
let codec = tonic::codec::ProstCodec::default();
let path = http::uri::PathAndQuery::from_static("/node_service.NodeService/ReadParts");
let mut req = request.into_request();
req.extensions_mut()
.insert(GrpcMethod::new("node_service.NodeService", "ReadParts"));
self.inner.unary(req, path, codec).await
}
pub async fn check_parts(
&mut self,
request: impl tonic::IntoRequest<super::CheckPartsRequest>,
@@ -2338,6 +2357,10 @@ pub mod node_service_server {
&self,
request: tonic::Request<super::VerifyFileRequest>,
) -> std::result::Result<tonic::Response<super::VerifyFileResponse>, tonic::Status>;
async fn read_parts(
&self,
request: tonic::Request<super::ReadPartsRequest>,
) -> std::result::Result<tonic::Response<super::ReadPartsResponse>, tonic::Status>;
async fn check_parts(
&self,
request: tonic::Request<super::CheckPartsRequest>,
@@ -2972,6 +2995,34 @@ pub mod node_service_server {
};
Box::pin(fut)
}
"/node_service.NodeService/ReadParts" => {
#[allow(non_camel_case_types)]
struct ReadPartsSvc<T: NodeService>(pub Arc<T>);
impl<T: NodeService> tonic::server::UnaryService<super::ReadPartsRequest> for ReadPartsSvc<T> {
type Response = super::ReadPartsResponse;
type Future = BoxFuture<tonic::Response<Self::Response>, tonic::Status>;
fn call(&mut self, request: tonic::Request<super::ReadPartsRequest>) -> Self::Future {
let inner = Arc::clone(&self.0);
let fut = async move { <T as NodeService>::read_parts(&inner, request).await };
Box::pin(fut)
}
}
let accept_compression_encodings = self.accept_compression_encodings;
let send_compression_encodings = self.send_compression_encodings;
let max_decoding_message_size = self.max_decoding_message_size;
let max_encoding_message_size = self.max_encoding_message_size;
let inner = self.inner.clone();
let fut = async move {
let method = ReadPartsSvc(inner);
let codec = tonic::codec::ProstCodec::default();
let mut grpc = tonic::server::Grpc::new(codec)
.apply_compression_config(accept_compression_encodings, send_compression_encodings)
.apply_max_message_size_config(max_decoding_message_size, max_encoding_message_size);
let res = grpc.unary(method, req).await;
Ok(res)
};
Box::pin(fut)
}
"/node_service.NodeService/CheckParts" => {
#[allow(non_camel_case_types)]
struct CheckPartsSvc<T: NodeService>(pub Arc<T>);


@@ -45,7 +45,7 @@ fn main() -> Result<(), AnyError> {
}
// path of proto file
let project_root_dir = env::current_dir()?.join("");
let project_root_dir = env::current_dir()?.join("crates/protos/src");
let proto_dir = project_root_dir.clone();
let proto_files = &["node.proto"];
let proto_out_dir = project_root_dir.join("generated").join("proto_gen");
@@ -268,7 +268,7 @@ fn protobuf_compiler_version() -> Result<Version, String> {
}
fn fmt() {
let output = Command::new("cargo").arg("fmt").arg("-p").arg("protos").status();
let output = Command::new("cargo").arg("fmt").arg("-p").arg("rustfs-protos").status();
match output {
Ok(status) => {


@@ -130,6 +130,18 @@ message VerifyFileResponse {
optional Error error = 3;
}
message ReadPartsRequest {
string disk = 1;
string bucket = 2;
repeated string paths = 3;
}
message ReadPartsResponse {
bool success = 1;
bytes object_part_infos = 2;
optional Error error = 3;
}
message CheckPartsRequest {
string disk = 1; // indicates which one of the disks
string volume = 2;
@@ -768,6 +780,7 @@ service NodeService {
rpc WriteAll(WriteAllRequest) returns (WriteAllResponse) {};
rpc Delete(DeleteRequest) returns (DeleteResponse) {};
rpc VerifyFile(VerifyFileRequest) returns (VerifyFileResponse) {};
rpc ReadParts(ReadPartsRequest) returns (ReadPartsResponse) {};
rpc CheckParts(CheckPartsRequest) returns (CheckPartsResponse) {};
rpc RenamePart(RenamePartRequest) returns (RenamePartResponse) {};
rpc RenameFile(RenameFileRequest) returns (RenameFileResponse) {};

View File

@@ -45,5 +45,4 @@ serde_json.workspace = true
md-5 = { workspace = true }
[dev-dependencies]
#criterion = { version = "0.5.1", features = ["async", "async_tokio", "tokio"] }
tokio-test = "0.4"
tokio-test = { workspace = true }

View File

@@ -32,7 +32,6 @@ async-trait.workspace = true
datafusion = { workspace = true }
derive_builder = { workspace = true }
futures = { workspace = true }
lazy_static = { workspace = true }
parking_lot = { workspace = true }
s3s.workspace = true
snafu = { workspace = true, features = ["backtrace"] }

View File

@@ -33,7 +33,6 @@ use datafusion::{
execution::{RecordBatchStream, SendableRecordBatchStream},
};
use futures::{Stream, StreamExt};
use lazy_static::lazy_static;
use rustfs_s3select_api::{
QueryError, QueryResult,
query::{
@@ -48,6 +47,7 @@ use rustfs_s3select_api::{
},
};
use s3s::dto::{FileHeaderInfo, SelectObjectContentInput};
use std::sync::LazyLock;
use crate::{
execution::factory::QueryExecutionFactoryRef,
@@ -55,11 +55,9 @@ use crate::{
sql::logical::planner::DefaultLogicalPlanner,
};
lazy_static! {
static ref IGNORE: FileHeaderInfo = FileHeaderInfo::from_static(FileHeaderInfo::IGNORE);
static ref NONE: FileHeaderInfo = FileHeaderInfo::from_static(FileHeaderInfo::NONE);
static ref USE: FileHeaderInfo = FileHeaderInfo::from_static(FileHeaderInfo::USE);
}
static IGNORE: LazyLock<FileHeaderInfo> = LazyLock::new(|| FileHeaderInfo::from_static(FileHeaderInfo::IGNORE));
static NONE: LazyLock<FileHeaderInfo> = LazyLock::new(|| FileHeaderInfo::from_static(FileHeaderInfo::NONE));
static USE: LazyLock<FileHeaderInfo> = LazyLock::new(|| FileHeaderInfo::from_static(FileHeaderInfo::USE));
#[derive(Clone)]
pub struct SimpleQueryDispatcher {

View File

@@ -27,7 +27,6 @@ documentation = "https://docs.rs/rustfs-signer/latest/rustfs_signer/"
[dependencies]
tracing.workspace = true
lazy_static.workspace = true
bytes = { workspace = true }
http.workspace = true
time.workspace = true

View File

@@ -13,8 +13,6 @@
// limitations under the License.
use http::{HeaderMap, HeaderValue, request};
use lazy_static::lazy_static;
use std::collections::HashMap;
use time::{OffsetDateTime, macros::format_description};
use super::request_signature_v4::{SERVICE_TYPE_S3, get_scope, get_signature, get_signing_key};
@@ -32,15 +30,13 @@ const _CRLF_LEN: i64 = 2;
const _TRAILER_KV_SEPARATOR: &str = ":";
const _TRAILER_SIGNATURE: &str = "x-amz-trailer-signature";
lazy_static! {
static ref ignored_streaming_headers: HashMap<String, bool> = {
let mut m = <HashMap<String, bool>>::new();
m.insert("authorization".to_string(), true);
m.insert("user-agent".to_string(), true);
m.insert("content-type".to_string(), true);
m
};
}
// static ignored_streaming_headers: LazyLock<HashMap<String, bool>> = LazyLock::new(|| {
// let mut m = <HashMap<String, bool>>::new();
// m.insert("authorization".to_string(), true);
// m.insert("user-agent".to_string(), true);
// m.insert("content-type".to_string(), true);
// m
// });
#[allow(dead_code)]
fn build_chunk_string_to_sign(t: OffsetDateTime, region: &str, previous_sig: &str, chunk_check_sum: &str) -> String {

View File

@@ -16,9 +16,9 @@ use bytes::BytesMut;
use http::HeaderMap;
use http::Uri;
use http::request;
use lazy_static::lazy_static;
use std::collections::HashMap;
use std::fmt::Write;
use std::sync::LazyLock;
use time::{OffsetDateTime, macros::format_description};
use tracing::debug;
@@ -32,15 +32,14 @@ pub const SIGN_V4_ALGORITHM: &str = "AWS4-HMAC-SHA256";
pub const SERVICE_TYPE_S3: &str = "s3";
pub const SERVICE_TYPE_STS: &str = "sts";
lazy_static! {
static ref v4_ignored_headers: HashMap<String, bool> = {
let mut m = <HashMap<String, bool>>::new();
m.insert("accept-encoding".to_string(), true);
m.insert("authorization".to_string(), true);
m.insert("user-agent".to_string(), true);
m
};
}
#[allow(non_upper_case_globals)] // FIXME
static v4_ignored_headers: LazyLock<HashMap<String, bool>> = LazyLock::new(|| {
let mut m = <HashMap<String, bool>>::new();
m.insert("accept-encoding".to_string(), true);
m.insert("authorization".to_string(), true);
m.insert("user-agent".to_string(), true);
m
});
pub fn get_signing_key(secret: &str, loc: &str, t: OffsetDateTime, service_type: &str) -> [u8; 32] {
let mut s = "AWS4".to_string();

View File

@@ -30,7 +30,6 @@ blake3 = { workspace = true, optional = true }
crc32fast.workspace = true
hex-simd = { workspace = true, optional = true }
highway = { workspace = true, optional = true }
lazy_static = { workspace = true, optional = true }
local-ip-address = { workspace = true, optional = true }
md-5 = { workspace = true, optional = true }
netif = { workspace = true, optional = true }
@@ -77,12 +76,12 @@ workspace = true
default = ["ip"] # features that are enabled by default
ip = ["dep:local-ip-address"] # ip characteristics and their dependencies
tls = ["dep:rustls", "dep:rustls-pemfile", "dep:rustls-pki-types"] # tls characteristics and their dependencies
net = ["ip", "dep:url", "dep:netif", "dep:lazy_static", "dep:futures", "dep:transform-stream", "dep:bytes", "dep:s3s", "dep:hyper", "dep:hyper-util"] # empty network features
net = ["ip", "dep:url", "dep:netif", "dep:futures", "dep:transform-stream", "dep:bytes", "dep:s3s", "dep:hyper", "dep:hyper-util"] # empty network features
io = ["dep:tokio"]
path = []
notify = ["dep:hyper", "dep:s3s"] # file system notification features
compress = ["dep:flate2", "dep:brotli", "dep:snap", "dep:lz4", "dep:zstd"]
string = ["dep:regex", "dep:lazy_static", "dep:rand"]
string = ["dep:regex", "dep:rand"]
crypto = ["dep:base64-simd", "dep:hex-simd", "dep:hmac", "dep:hyper", "dep:sha1"]
hash = ["dep:highway", "dep:md-5", "dep:sha2", "dep:blake3", "dep:serde", "dep:siphasher", "dep:hex-simd", "dep:base64-simd"]
os = ["dep:nix", "dep:tempfile", "winapi"] # operating system utilities

View File

@@ -17,8 +17,8 @@ use futures::pin_mut;
use futures::{Stream, StreamExt};
use hyper::client::conn::http2::Builder;
use hyper_util::rt::TokioExecutor;
use lazy_static::lazy_static;
use std::net::Ipv6Addr;
use std::sync::LazyLock;
use std::{
collections::HashSet,
fmt::Display,
@@ -27,9 +27,7 @@ use std::{
use transform_stream::AsyncTryStream;
use url::{Host, Url};
lazy_static! {
static ref LOCAL_IPS: Vec<IpAddr> = must_get_local_ips().unwrap();
}
static LOCAL_IPS: LazyLock<Vec<IpAddr>> = LazyLock::new(|| must_get_local_ips().unwrap());
/// helper for validating if the provided arg is an ip address.
pub fn is_socket_addr(addr: &str) -> bool {
@@ -178,7 +176,7 @@ impl Display for XHost {
impl TryFrom<String> for XHost {
type Error = std::io::Error;
fn try_from(value: String) -> std::result::Result<Self, Self::Error> {
fn try_from(value: String) -> Result<Self, Self::Error> {
if let Some(addr) = value.to_socket_addrs()?.next() {
Ok(Self {
name: addr.ip().to_string(),
@@ -214,9 +212,9 @@ pub fn parse_and_resolve_address(addr_str: &str) -> std::io::Result<SocketAddr>
}
#[allow(dead_code)]
pub fn bytes_stream<S, E>(stream: S, content_length: usize) -> impl Stream<Item = std::result::Result<Bytes, E>> + Send + 'static
pub fn bytes_stream<S, E>(stream: S, content_length: usize) -> impl Stream<Item = Result<Bytes, E>> + Send + 'static
where
S: Stream<Item = std::result::Result<Bytes, E>> + Send + 'static,
S: Stream<Item = Result<Bytes, E>> + Send + 'static,
E: Send + 'static,
{
AsyncTryStream::<Bytes, E, _>::new(|mut y| async move {

View File

@@ -28,7 +28,7 @@ pub fn get_info(p: impl AsRef<Path>) -> std::io::Result<DiskInfo> {
let path_display = p.as_ref().display();
let path_wide: Vec<WCHAR> = p
.as_ref()
.canonicalize()?
.to_path_buf()
.into_os_string()
.encode_wide()
.chain(std::iter::once(0)) // Null-terminate the string
@@ -83,12 +83,21 @@ pub fn get_info(p: impl AsRef<Path>) -> std::io::Result<DiskInfo> {
used: total - free,
files: lp_total_number_of_clusters as u64,
ffree: lp_number_of_free_clusters as u64,
fstype: get_fs_type(&path_wide)?,
// TODO This field is currently unused, and since this logic causes a
// NotFound error during startup on Windows systems, it has been commented out here
//
// The error occurs in GetVolumeInformationW where the path parameter
// is of type [WCHAR; MAX_PATH]. For a drive letter, there are excessive
// trailing zeros, which causes the failure here.
//
// fstype: get_fs_type(&path_wide)?,
..Default::default()
})
}
/// Returns leading volume name.
#[allow(dead_code)]
fn get_volume_name(v: &[WCHAR]) -> std::io::Result<LPCWSTR> {
let volume_name_size: DWORD = MAX_PATH as _;
let mut lp_volume_name_buffer: [WCHAR; MAX_PATH] = [0; MAX_PATH];
@@ -102,12 +111,14 @@ fn get_volume_name(v: &[WCHAR]) -> std::io::Result<LPCWSTR> {
Ok(lp_volume_name_buffer.as_ptr())
}
#[allow(dead_code)]
fn utf16_to_string(v: &[WCHAR]) -> String {
let len = v.iter().position(|&x| x == 0).unwrap_or(v.len());
String::from_utf16_lossy(&v[..len])
}
/// Returns the filesystem type of the underlying mounted filesystem
#[allow(dead_code)]
fn get_fs_type(p: &[WCHAR]) -> std::io::Result<String> {
let path = get_volume_name(p)?;

View File

@@ -12,10 +12,10 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use lazy_static::*;
use rand::{Rng, RngCore};
use regex::Regex;
use std::io::{Error, Result};
use std::sync::LazyLock;
pub fn parse_bool(str: &str) -> Result<bool> {
match str {
@@ -116,9 +116,7 @@ pub fn match_as_pattern_prefix(pattern: &str, text: &str) -> bool {
text.len() <= pattern.len()
}
lazy_static! {
static ref ELLIPSES_RE: Regex = Regex::new(r"(.*)(\{[0-9a-z]*\.\.\.[0-9a-z]*\})(.*)").unwrap();
}
static ELLIPSES_RE: LazyLock<Regex> = LazyLock::new(|| Regex::new(r"(.*)(\{[0-9a-z]*\.\.\.[0-9a-z]*\})(.*)").unwrap());
/// Ellipses constants
const OPEN_BRACES: &str = "{";

View File

@@ -25,17 +25,16 @@ managing and monitoring the system.
|--certs
| ├── rustfs_cert.pem // Default/fallback certificate
| ├── rustfs_key.pem // Default/fallback private key
| ├── example.com/ // certificate directory of specific domain names
| ├── rustfs.com/ // certificate directory of specific domain names
| │ ├── rustfs_cert.pem
| │ └── rustfs_key.pem
| ├── api.example.com/
| ├── api.rustfs.com/
| │ ├── rustfs_cert.pem
| │ └── rustfs_key.pem
| └── cdn.example.com/
| └── cdn.rustfs.com/
| ├── rustfs_cert.pem
| └── rustfs_key.pem
|--config
| |--rustfs.env // env config
| |--rustfs-zh.env // env config in Chinese
| |--event.example.toml // event config
```

View File

@@ -36,15 +36,11 @@ Environment=RUSTFS_SECRET_KEY=rustfsadmin
ExecStart=/usr/local/bin/rustfs \
--address 0.0.0.0:9000 \
--volumes /data/rustfs/vol1,/data/rustfs/vol2 \
--obs-config /etc/rustfs/obs.yaml \
--console-enable \
--console-address 0.0.0.0:9001
--console-enable
# Define the startup command: run /usr/local/bin/rustfs with the following arguments
# --address 0.0.0.0:9000: the service listens on port 9000 on all interfaces.
# --volumes: set the storage volume paths to /data/rustfs/vol1 and /data/rustfs/vol2.
# --obs-config: set the configuration file path to /etc/rustfs/obs.yaml.
# --console-enable: enable the console feature.
# --console-address 0.0.0.0:9001: the console listens on port 9001 on all interfaces.
# Environment variable configuration passed to the service program (recommended, concise)
# For a rustfs example file, see: `../config/rustfs-zh.env`

View File

@@ -83,7 +83,6 @@ sudo journalctl -u rustfs --since today
```bash
# Check service ports
ss -tunlp | grep 9000
ss -tunlp | grep 9001
# Test service availability
curl -I http://localhost:9000

View File

@@ -83,7 +83,6 @@ sudo journalctl -u rustfs --since today
```bash
# Check service ports
ss -tunlp | grep 9000
ss -tunlp | grep 9001
# Test service availability
curl -I http://localhost:9000

View File

@@ -22,9 +22,7 @@ Environment=RUSTFS_SECRET_KEY=rustfsadmin
ExecStart=/usr/local/bin/rustfs \
--address 0.0.0.0:9000 \
--volumes /data/rustfs/vol1,/data/rustfs/vol2 \
--obs-config /etc/rustfs/obs.yaml \
--console-enable \
--console-address 0.0.0.0:9001
--console-enable
# environment variable configuration (Option 2: Use environment variables)
# rustfs example file see: `../config/rustfs.env`

View File

@@ -36,13 +36,13 @@ cd deploy/certs/
ls -la
├── rustfs_cert.pem // Default/fallback certificate
├── rustfs_key.pem // Default/fallback private key
├── example.com/ // certificate directory of specific domain names
├── rustfs.com/ // certificate directory of specific domain names
│ ├── rustfs_cert.pem
│ └── rustfs_key.pem
├── api.example.com/
├── api.rustfs.com/
│ ├── rustfs_cert.pem
│ └── rustfs_key.pem
└── cdn.example.com/
└── cdn.rustfs.com/
├── rustfs_cert.pem
└── rustfs_key.pem
```

View File

@@ -7,22 +7,16 @@ RUSTFS_ROOT_PASSWORD=rustfsadmin
# RustFS data volume storage paths, supports multiple volumes from vol1 to vol4
RUSTFS_VOLUMES="./deploy/deploy/vol{1...4}"
# RustFS service startup parameters, specifying listen address and port
RUSTFS_OPTS="--address 0.0.0.0:9000"
RUSTFS_OPTS="--address :9000"
# RustFS service listen address and port
RUSTFS_ADDRESS="0.0.0.0:9000"
RUSTFS_ADDRESS=":9000"
# Enable RustFS console functionality
RUSTFS_CONSOLE_ENABLE=true
# RustFS console listen address and port
RUSTFS_CONSOLE_ADDRESS="0.0.0.0:9001"
# RustFS service endpoint for client access
RUSTFS_SERVER_ENDPOINT="http://127.0.0.1:9000"
# RustFS service domain configuration
RUSTFS_SERVER_DOMAINS=127.0.0.1:9001
RUSTFS_SERVER_DOMAINS=127.0.0.1:9000
# RustFS license content
RUSTFS_LICENSE="license content"
# Observability configuration endpoint, e.g. http://localhost:4317
RUSTFS_OBS_ENDPOINT=http://localhost:4317
# TLS certificates directory path, e.g. deploy/certs
RUSTFS_TLS_PATH=/etc/default/tls
# Event notification configuration file path, e.g. deploy/config/event.example.toml
RUSTFS_EVENT_CONFIG=/etc/default/event.toml
RUSTFS_TLS_PATH=/etc/default/tls

View File

@@ -7,22 +7,16 @@ RUSTFS_ROOT_PASSWORD=rustfsadmin
# RustFS data volume storage paths, supports multiple volumes from vol1 to vol4
RUSTFS_VOLUMES="./deploy/deploy/vol{1...4}"
# RustFS service startup parameters, specifying listen address and port
RUSTFS_OPTS="--address 0.0.0.0:9000"
RUSTFS_OPTS="--address :9000"
# RustFS service listen address and port
RUSTFS_ADDRESS="0.0.0.0:9000"
RUSTFS_ADDRESS=":9000"
# Enable RustFS console functionality
RUSTFS_CONSOLE_ENABLE=true
# RustFS console listen address and port
RUSTFS_CONSOLE_ADDRESS="0.0.0.0:9001"
# RustFS service endpoint for client access
RUSTFS_SERVER_ENDPOINT="http://127.0.0.1:9000"
# RustFS service domain configuration
RUSTFS_SERVER_DOMAINS=127.0.0.1:9001
RUSTFS_SERVER_DOMAINS=127.0.0.1:9000
# RustFS license content
RUSTFS_LICENSE="license content"
# Observability configuration endpoint, e.g. http://localhost:4317
RUSTFS_OBS_ENDPOINT=http://localhost:4317
# TLS certificates directory path: deploy/certs
RUSTFS_TLS_PATH=/etc/default/tls
# event notification configuration file path: deploy/config/event.example.toml
RUSTFS_EVENT_CONFIG=/etc/default/event.toml
RUSTFS_TLS_PATH=/etc/default/tls

docker-buildx.sh (new executable file)
View File

@@ -0,0 +1,275 @@
#!/bin/bash
set -e
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color
# Default values
REGISTRY="ghcr.io"
NAMESPACE="rustfs"
PLATFORMS="linux/amd64,linux/arm64"
PUSH=false
NO_CACHE=false
RELEASE=""
CHANNEL="release"
# Print usage
usage() {
echo "Usage: $0 [OPTIONS]"
echo ""
echo "Options:"
echo " -r, --registry REGISTRY Docker registry (default: ghcr.io)"
echo " -n, --namespace NAMESPACE Image namespace (default: rustfs)"
echo " -p, --platforms PLATFORMS Target platforms (default: linux/amd64,linux/arm64)"
echo " --push Push images to registry"
echo " --no-cache Disable build cache"
echo " --release VERSION Specify release version (default: auto-detect from git)"
echo " --channel CHANNEL Download channel: release or dev (default: release)"
echo " -h, --help Show this help message"
echo ""
echo "Examples:"
echo " $0 # Build all variants locally"
echo " $0 --push # Build and push all variants"
echo " $0 --push --no-cache # Build and push with no cache"
echo " $0 --release v1.0.0 # Build specific release version"
echo " $0 --channel dev # Build with dev channel binaries"
echo " $0 --release latest --channel dev # Build latest dev build"
}
# Print colored message
print_message() {
local color=$1
local message=$2
echo -e "${color}${message}${NC}"
}
# Check if Docker buildx is available
check_buildx() {
if ! docker buildx version >/dev/null 2>&1; then
print_message $RED "❌ Docker buildx is not available. Please install Docker with buildx support."
exit 1
fi
}
# Setup buildx builder
setup_builder() {
local builder_name="rustfs-builder"
print_message $BLUE "🔧 Setting up Docker buildx builder..."
# Check if builder exists
if docker buildx ls | grep -q "$builder_name"; then
print_message $YELLOW "⚠️ Builder '$builder_name' already exists, using existing one"
docker buildx use "$builder_name"
else
# Create new builder
docker buildx create --name "$builder_name" --driver docker-container --bootstrap
docker buildx use "$builder_name"
print_message $GREEN "✅ Created and activated builder '$builder_name'"
fi
# Inspect builder
docker buildx inspect --bootstrap
}
# Get version from git
get_version() {
if [ -n "$RELEASE" ]; then
echo "$RELEASE"
return
fi
# Try to get version from git tag
if git describe --abbrev=0 --tags >/dev/null 2>&1; then
git describe --abbrev=0 --tags
else
# Fallback to commit hash
git rev-parse --short HEAD
fi
}
# Build and push images
build_and_push() {
local version=$(get_version)
local image_base="${REGISTRY}/${NAMESPACE}/rustfs"
local build_date=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
local vcs_ref=$(git rev-parse --short HEAD)
print_message $BLUE "🚀 Building RustFS Docker images..."
print_message $YELLOW " Version: $version"
print_message $YELLOW " Registry: $REGISTRY"
print_message $YELLOW " Namespace: $NAMESPACE"
print_message $YELLOW " Platforms: $PLATFORMS"
print_message $YELLOW " Channel: $CHANNEL"
print_message $YELLOW " Build Date: $build_date"
print_message $YELLOW " VCS Ref: $vcs_ref"
print_message $YELLOW " Push: $PUSH"
print_message $YELLOW " No Cache: $NO_CACHE"
echo ""
# Build command base
local build_cmd="docker buildx build"
build_cmd+=" --platform $PLATFORMS"
build_cmd+=" --build-arg RELEASE=$version"
build_cmd+=" --build-arg CHANNEL=$CHANNEL"
build_cmd+=" --build-arg BUILD_DATE=$build_date"
build_cmd+=" --build-arg VCS_REF=$vcs_ref"
if [ "$NO_CACHE" = true ]; then
build_cmd+=" --no-cache"
fi
if [ "$PUSH" = true ]; then
build_cmd+=" --push"
else
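# NOTE: --load typically exports only a single-platform image to the local
# Docker store; with the default multi-platform PLATFORMS value, prefer
# --push or narrow the platforms with -p/--platforms.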
build_cmd+=" --load"
fi
# Build latest variant
print_message $BLUE "🏗️ Building latest variant..."
local latest_cmd="$build_cmd"
# Add channel-specific tags
if [ "$CHANNEL" = "dev" ]; then
latest_cmd+=" -t ${image_base}:dev-latest"
else
latest_cmd+=" -t ${image_base}:latest"
fi
latest_cmd+=" --build-arg RELEASE=latest"
latest_cmd+=" -f Dockerfile ."
print_message $BLUE "📦 Executing: $latest_cmd"
if eval $latest_cmd; then
print_message $GREEN "✅ Successfully built latest variant"
else
print_message $RED "❌ Failed to build latest variant"
print_message $YELLOW "💡 Note: Make sure rustfs binaries are available at:"
if [ "$CHANNEL" = "dev" ]; then
print_message $YELLOW " https://dl.rustfs.com/artifacts/rustfs/dev/"
else
print_message $YELLOW " https://dl.rustfs.com/artifacts/rustfs/release/"
fi
exit 1
fi
# Prune build cache
docker buildx prune -f
# Build release variant (only if not latest)
if [ "$RELEASE" != "latest" ]; then
print_message $BLUE "🏗️ Building release variant..."
local release_cmd="$build_cmd"
release_cmd+=" -t ${image_base}:${version}"
# Add channel-specific tags
if [ "$CHANNEL" = "dev" ]; then
release_cmd+=" -t ${image_base}:dev-${version}"
else
release_cmd+=" -t ${image_base}:release"
fi
release_cmd+=" --build-arg RELEASE=${version}"
release_cmd+=" -f Dockerfile ."
print_message $BLUE "📦 Executing: $release_cmd"
if eval $release_cmd; then
print_message $GREEN "✅ Successfully built release variant"
else
print_message $RED "❌ Failed to build release variant"
print_message $YELLOW "💡 Note: Make sure rustfs binaries are available at:"
if [ "$CHANNEL" = "dev" ]; then
print_message $YELLOW " https://dl.rustfs.com/artifacts/rustfs/dev/"
else
print_message $YELLOW " https://dl.rustfs.com/artifacts/rustfs/release/"
fi
exit 1
fi
else
print_message $BLUE "⏭️ Skipping release variant (already built as latest)"
fi
# Final cleanup
docker buildx prune -f
}
# Parse command line arguments
while [[ $# -gt 0 ]]; do
case $1 in
-r|--registry)
REGISTRY="$2"
shift 2
;;
-n|--namespace)
NAMESPACE="$2"
shift 2
;;
-p|--platforms)
PLATFORMS="$2"
shift 2
;;
--push)
PUSH=true
shift
;;
--no-cache)
NO_CACHE=true
shift
;;
--release)
RELEASE="$2"
shift 2
;;
--channel)
CHANNEL="$2"
if [ "$CHANNEL" != "release" ] && [ "$CHANNEL" != "dev" ]; then
print_message $RED "❌ Invalid channel: $CHANNEL. Must be 'release' or 'dev'"
exit 1
fi
shift 2
;;
-h|--help)
usage
exit 0
;;
*)
print_message $RED "❌ Unknown option: $1"
usage
exit 1
;;
esac
done
# Main execution
main() {
print_message $BLUE "🐳 RustFS Docker Buildx Build Script"
print_message $YELLOW "📋 Build Strategy: Uses pre-built binaries from dl.rustfs.com"
print_message $YELLOW "🚀 Production images only - optimized for distribution"
echo ""
# Check prerequisites
check_buildx
# Setup builder
setup_builder
echo ""
# Start build process
build_and_push
print_message $GREEN "🎉 Build process completed successfully!"
# Show built images if not pushing
if [ "$PUSH" = false ]; then
print_message $BLUE "📋 Built images:"
docker images | grep "${NAMESPACE}/rustfs" | head -10
fi
}
# Run main function
main

View File

@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
version: '3.8'
version: "3.8"
services:
# RustFS main service
@@ -23,17 +23,15 @@ services:
container_name: rustfs-server
build:
context: .
dockerfile: Dockerfile.multi-stage
dockerfile: Dockerfile.source
args:
TARGETPLATFORM: linux/amd64
ports:
- "9000:9000" # S3 API port
- "9001:9001" # Console port
- "9000:9000" # S3 API port
environment:
- RUSTFS_VOLUMES=/data/rustfs0,/data/rustfs1,/data/rustfs2,/data/rustfs3
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001
- RUSTFS_ACCESS_KEY=rustfsadmin
- RUSTFS_SECRET_KEY=rustfsadmin
- RUSTFS_LOG_LEVEL=info
@@ -48,7 +46,15 @@ services:
- rustfs-network
restart: unless-stopped
healthcheck:
test: [ "CMD", "wget", "--no-verbose", "--tries=1", "--spider", "http://localhost:9000/health" ]
test:
[
"CMD",
"wget",
"--no-verbose",
"--tries=1",
"--spider",
"http://localhost:9000/health",
]
interval: 30s
timeout: 10s
retries: 3
@@ -62,20 +68,19 @@ services:
container_name: rustfs-dev
build:
context: .
dockerfile: .docker/Dockerfile.devenv
dockerfile: Dockerfile.source
# Pure development environment
ports:
- "9010:9000"
- "9011:9001"
environment:
- RUSTFS_VOLUMES=/data/rustfs0,/data/rustfs1
- RUSTFS_ADDRESS=0.0.0.0:9000
- RUSTFS_CONSOLE_ENABLE=true
- RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001
- RUSTFS_ACCESS_KEY=devadmin
- RUSTFS_SECRET_KEY=devadmin
- RUSTFS_LOG_LEVEL=debug
volumes:
- .:/root/s3-rustfs
- .:/app # Mount source code to /app for development
- rustfs_dev_data:/data
networks:
- rustfs-network
@@ -92,10 +97,10 @@ services:
volumes:
- ./.docker/observability/otel-collector.yml:/etc/otelcol-contrib/otel-collector.yml:ro
ports:
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
- "8888:8888" # Prometheus metrics
- "8889:8889" # Prometheus exporter metrics
- "4317:4317" # OTLP gRPC receiver
- "4318:4318" # OTLP HTTP receiver
- "8888:8888" # Prometheus metrics
- "8889:8889" # Prometheus exporter metrics
networks:
- rustfs-network
restart: unless-stopped
@@ -107,8 +112,8 @@ services:
image: jaegertracing/all-in-one:latest
container_name: jaeger
ports:
- "16686:16686" # Jaeger UI
- "14250:14250" # Jaeger gRPC
- "16686:16686" # Jaeger UI
- "14250:14250" # Jaeger gRPC
environment:
- COLLECTOR_OTLP_ENABLED=true
networks:
@@ -127,12 +132,12 @@ services:
- ./.docker/observability/prometheus.yml:/etc/prometheus/prometheus.yml:ro
- prometheus_data:/prometheus
command:
- '--config.file=/etc/prometheus/prometheus.yml'
- '--storage.tsdb.path=/prometheus'
- '--web.console.libraries=/etc/prometheus/console_libraries'
- '--web.console.templates=/etc/prometheus/consoles'
- '--storage.tsdb.retention.time=200h'
- '--web.enable-lifecycle'
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--web.console.libraries=/etc/prometheus/console_libraries"
- "--web.console.templates=/etc/prometheus/consoles"
- "--storage.tsdb.retention.time=200h"
- "--web.enable-lifecycle"
networks:
- rustfs-network
restart: unless-stopped

View File

@@ -1,530 +0,0 @@
# RustFS Docker Build and Deployment Guide
This document describes how to build and deploy RustFS using Docker, including the automated GitHub Actions workflow for building and pushing images to Docker Hub and GitHub Container Registry.
## 🚀 Quick Start
### Using Pre-built Images
```bash
# Pull and run the latest RustFS image
docker run -d \
--name rustfs \
-p 9000:9000 \
-p 9001:9001 \
-v rustfs_data:/data \
-e RUSTFS_VOLUMES=/data/rustfs0,/data/rustfs1,/data/rustfs2,/data/rustfs3 \
-e RUSTFS_ACCESS_KEY=rustfsadmin \
-e RUSTFS_SECRET_KEY=rustfsadmin \
-e RUSTFS_CONSOLE_ENABLE=true \
rustfs/rustfs:latest
```
### Using Docker Compose
```bash
# Basic deployment
docker-compose up -d
# Development environment
docker-compose --profile dev up -d
# With observability stack
docker-compose --profile observability up -d
# Full stack with all services
docker-compose --profile dev --profile observability --profile testing up -d
```
## 📦 Available Images
Our GitHub Actions workflow builds multiple image variants:
### Image Registries
- **Docker Hub**: `rustfs/rustfs`
- **GitHub Container Registry**: `ghcr.io/rustfs/s3-rustfs`
### Image Variants
| Variant | Tag Suffix | Description | Use Case |
|---------|------------|-------------|----------|
| Production | *(none)* | Minimal Ubuntu-based runtime | Production deployment |
| Ubuntu | `-ubuntu22.04` | Ubuntu 22.04-based build environment | Development/Testing |
| Rocky Linux | `-rockylinux9.3` | Rocky Linux 9.3-based build environment | Enterprise environments |
| Development | `-devenv` | Full development environment | Development/Debugging |
### Supported Architectures
All images support multi-architecture:
- `linux/amd64` (x86_64-unknown-linux-musl)
- `linux/arm64` (aarch64-unknown-linux-gnu)
### Tag Examples
```bash
# Latest production image
rustfs/rustfs:latest
rustfs/rustfs:main
# Specific version
rustfs/rustfs:v1.0.0
rustfs/rustfs:v1.0.0-ubuntu22.04
# Development environment
rustfs/rustfs:latest-devenv
rustfs/rustfs:main-devenv
```
## 🔧 GitHub Actions Workflow
The Docker build workflow (`.github/workflows/docker.yml`) automatically:
1. **Builds cross-platform binaries** for `amd64` and `arm64`
2. **Creates Docker images** for all variants
3. **Pushes to registries** (Docker Hub and GitHub Container Registry)
4. **Creates multi-arch manifests** for seamless platform selection (see the sketch after this list)
5. **Performs security scanning** using Trivy
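For step 4, a manifest list is what lets one tag serve both architectures. A minimal sketch of that step using `docker buildx imagetools` follows; the image names and tags here are illustrative, not the workflow's actual ones:
```bash
# Stitch hypothetical per-arch tags into one multi-arch tag
docker buildx imagetools create \
  -t rustfs/rustfs:latest \
  rustfs/rustfs:latest-amd64 \
  rustfs/rustfs:latest-arm64

# Inspect which platforms the resulting manifest covers
docker buildx imagetools inspect rustfs/rustfs:latest
```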
### Cross-Compilation Strategy
To handle complex native dependencies, we use different compilation strategies:
- **x86_64**: Native compilation with `x86_64-unknown-linux-musl` for static linking
- **aarch64**: Cross-compilation with `aarch64-unknown-linux-gnu` using the `cross` tool
This approach ensures compatibility with various C libraries while maintaining performance.
### Workflow Triggers
- **Push to main branch**: Builds and pushes `main` and `latest` tags
- **Tag push** (`v*`): Builds and pushes version tags
- **Pull requests**: Builds images without pushing
- **Manual trigger**: Workflow dispatch with options
### Required Secrets
Configure these secrets in your GitHub repository:
```bash
# Docker Hub credentials
DOCKERHUB_USERNAME=your-dockerhub-username
DOCKERHUB_TOKEN=your-dockerhub-access-token
# GitHub token is automatically available
GITHUB_TOKEN=automatically-provided
```
## 🏗️ Building Locally
### Prerequisites
- Docker with BuildKit enabled
- Rust toolchain (1.85+)
- Protocol Buffers compiler (protoc 31.1+)
- FlatBuffers compiler (flatc 25.2.10+)
- `cross` tool for ARM64 compilation
### Installation Commands
```bash
# Install Rust targets
rustup target add x86_64-unknown-linux-musl
rustup target add aarch64-unknown-linux-gnu
# Install cross for ARM64 compilation
cargo install cross --git https://github.com/cross-rs/cross
# Install protoc (macOS)
brew install protobuf
# Install protoc (Ubuntu)
sudo apt-get install protobuf-compiler
# Install flatc
# Download from: https://github.com/google/flatbuffers/releases
```
### Build Commands
```bash
# Test cross-compilation setup
./scripts/test-cross-build.sh
# Build production image for local platform
docker build -t rustfs:local .
# Build multi-stage production image
docker build -f Dockerfile.multi-stage -t rustfs:multi-stage .
# Build specific variant
docker build -f .docker/Dockerfile.ubuntu22.04 -t rustfs:ubuntu .
# Build for specific platform
docker build --platform linux/amd64 -t rustfs:amd64 .
docker build --platform linux/arm64 -t rustfs:arm64 .
# Build multi-platform image
docker buildx build --platform linux/amd64,linux/arm64 -t rustfs:multi .
```
### Cross-Compilation
```bash
# Generate protobuf code first
cargo run --bin gproto
# Native x86_64 build
cargo build --release --target x86_64-unknown-linux-musl --bin rustfs
# Cross-compile for ARM64
cross build --release --target aarch64-unknown-linux-gnu --bin rustfs
```
### Build with Docker Compose
```bash
# Build all services
docker-compose build
# Build specific service
docker-compose build rustfs
# Build development environment
docker-compose build rustfs-dev
```
## 🚀 Deployment Options
### 1. Single Container
```bash
docker run -d \
--name rustfs \
--restart unless-stopped \
-p 9000:9000 \
-p 9001:9001 \
-v /data/rustfs:/data \
-e RUSTFS_VOLUMES=/data/rustfs0,/data/rustfs1,/data/rustfs2,/data/rustfs3 \
-e RUSTFS_ADDRESS=0.0.0.0:9000 \
-e RUSTFS_CONSOLE_ENABLE=true \
-e RUSTFS_CONSOLE_ADDRESS=0.0.0.0:9001 \
-e RUSTFS_ACCESS_KEY=rustfsadmin \
-e RUSTFS_SECRET_KEY=rustfsadmin \
rustfs/rustfs:latest
```
### 2. Docker Compose Profiles
```bash
# Production deployment
docker-compose up -d
# Development with debugging
docker-compose --profile dev up -d
# With monitoring stack
docker-compose --profile observability up -d
# Complete testing environment
docker-compose --profile dev --profile observability --profile testing up -d
```
### 3. Kubernetes Deployment
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: rustfs
spec:
replicas: 3
selector:
matchLabels:
app: rustfs
template:
metadata:
labels:
app: rustfs
spec:
containers:
- name: rustfs
image: rustfs/rustfs:latest
ports:
- containerPort: 9000
- containerPort: 9001
env:
- name: RUSTFS_VOLUMES
value: "/data/rustfs0,/data/rustfs1,/data/rustfs2,/data/rustfs3"
- name: RUSTFS_ADDRESS
value: "0.0.0.0:9000"
- name: RUSTFS_CONSOLE_ENABLE
value: "true"
- name: RUSTFS_CONSOLE_ADDRESS
value: "0.0.0.0:9001"
volumeMounts:
- name: data
mountPath: /data
volumes:
- name: data
persistentVolumeClaim:
claimName: rustfs-data
```
## ⚙️ Configuration
### Environment Variables
| Variable | Description | Default |
|----------|-------------|---------|
| `RUSTFS_VOLUMES` | Comma-separated list of data volumes | Required |
| `RUSTFS_ADDRESS` | Server bind address | `0.0.0.0:9000` |
| `RUSTFS_CONSOLE_ENABLE` | Enable web console | `false` |
| `RUSTFS_CONSOLE_ADDRESS` | Console bind address | `0.0.0.0:9001` |
| `RUSTFS_ACCESS_KEY` | S3 access key | `rustfsadmin` |
| `RUSTFS_SECRET_KEY` | S3 secret key | `rustfsadmin` |
| `RUSTFS_LOG_LEVEL` | Log level | `info` |
| `RUSTFS_OBS_ENDPOINT` | Observability endpoint | `""` |
| `RUSTFS_TLS_PATH` | TLS certificates path | `""` |
### Volume Mounts
- **Data volumes**: `/data/rustfs{0,1,2,3}` - RustFS data storage
- **Logs**: `/app/logs` - Application logs
- **Config**: `/etc/rustfs/` - Configuration files
- **TLS**: `/etc/ssl/rustfs/` - TLS certificates
### Ports
- **9000**: S3 API endpoint
- **9001**: Web console (if enabled)
- **9002**: Admin API (if enabled)
- **50051**: gRPC API (if enabled)
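Tying the table and lists above together, the sketch below drives a container from an env file instead of repeated `-e` flags; the file name and values are illustrative:
```bash
# Write an illustrative env file (values are examples, not defaults)
cat > rustfs.env <<'EOF'
RUSTFS_VOLUMES=/data/rustfs0,/data/rustfs1,/data/rustfs2,/data/rustfs3
RUSTFS_ADDRESS=0.0.0.0:9000
RUSTFS_ACCESS_KEY=rustfsadmin
RUSTFS_SECRET_KEY=rustfsadmin
RUSTFS_LOG_LEVEL=info
EOF

# Map the S3 API port and mount a host directory for data
docker run -d --name rustfs \
  --env-file rustfs.env \
  -p 9000:9000 \
  -v /data/rustfs:/data \
  rustfs/rustfs:latest
```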
## 🔍 Monitoring and Observability
### Health Checks
The Docker images include built-in health checks:
```bash
# Check container health
docker ps --filter "name=rustfs" --format "table {{.Names}}\t{{.Status}}"
# View health check logs
docker inspect rustfs --format='{{json .State.Health}}'
```
### Metrics and Tracing
When using the observability profile:
- **Prometheus**: http://localhost:9090
- **Grafana**: http://localhost:3000 (admin/admin)
- **Jaeger**: http://localhost:16686
- **OpenTelemetry Collector**: http://localhost:8888/metrics
### Log Collection
```bash
# View container logs
docker logs rustfs -f
# Export logs
docker logs rustfs > rustfs.log 2>&1
```
## 🛠️ Development
### Development Environment
```bash
# Start development container
docker-compose --profile dev up -d rustfs-dev
# Access development container
docker exec -it rustfs-dev bash
# Mount source code for live development
docker run -it --rm \
-v $(pwd):/root/s3-rustfs \
-p 9000:9000 \
rustfs/rustfs:devenv \
bash
```
### Building from Source in Container
```bash
# Use development image for building
docker run --rm \
-v $(pwd):/root/s3-rustfs \
-w /root/s3-rustfs \
rustfs/rustfs:ubuntu22.04 \
cargo build --release --bin rustfs
```
### Testing Cross-Compilation
```bash
# Run the test script to verify cross-compilation setup
./scripts/test-cross-build.sh
# This will test:
# - x86_64-unknown-linux-musl compilation
# - aarch64-unknown-linux-gnu cross-compilation
# - Docker builds for both architectures
```
## 🔐 Security
### Security Scanning
The workflow includes Trivy security scanning:
```bash
# Run security scan locally
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock \
-v $HOME/Library/Caches:/root/.cache/ \
aquasec/trivy:latest image rustfs/rustfs:latest
```
### Security Best Practices
1. **Use non-root user**: Images run as `rustfs` user (UID 1000)
2. **Minimal base images**: Ubuntu minimal for production
3. **Security updates**: Regular base image updates
4. **Secret management**: Use Docker secrets or environment files
5. **Network security**: Use Docker networks and proper firewall rules
## 📝 Troubleshooting
### Common Issues
#### 1. Cross-Compilation Failures
**Problem**: ARM64 build fails with linking errors
```bash
error: linking with `aarch64-linux-gnu-gcc` failed
```
**Solution**: Use the `cross` tool instead of native cross-compilation:
```bash
# Install cross tool
cargo install cross --git https://github.com/cross-rs/cross
# Use cross for ARM64 builds
cross build --release --target aarch64-unknown-linux-gnu --bin rustfs
```
#### 2. Protobuf Generation Issues
**Problem**: Missing protobuf definitions
```bash
error: failed to run custom build command for `protos`
```
**Solution**: Generate protobuf code first:
```bash
cargo run --bin gproto
```
#### 3. Docker Build Failures
**Problem**: Binary not found in Docker build
```bash
COPY failed: file not found in build context
```
**Solution**: Ensure binaries are built before Docker build:
```bash
# Build binaries first
cargo build --release --target x86_64-unknown-linux-musl --bin rustfs
cross build --release --target aarch64-unknown-linux-gnu --bin rustfs
# Then build Docker image
docker build .
```
### Debug Commands
```bash
# Check container status
docker ps -a
# View container logs
docker logs rustfs --tail 100
# Access container shell
docker exec -it rustfs bash
# Check resource usage
docker stats rustfs
# Inspect container configuration
docker inspect rustfs
# Test cross-compilation setup
./scripts/test-cross-build.sh
```
## 🔄 CI/CD Integration
### GitHub Actions
The provided workflow can be customized:
```yaml
# Override image names
env:
REGISTRY_IMAGE_DOCKERHUB: myorg/rustfs
REGISTRY_IMAGE_GHCR: ghcr.io/myorg/rustfs
```
### GitLab CI
```yaml
build:
stage: build
image: docker:latest
services:
- docker:dind
script:
- docker build -t $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
- docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
```
### Jenkins Pipeline
```groovy
pipeline {
agent any
stages {
stage('Build') {
steps {
script {
docker.build("rustfs:${env.BUILD_ID}")
}
}
}
stage('Push') {
steps {
script {
docker.withRegistry('https://registry.hub.docker.com', 'dockerhub-credentials') {
docker.image("rustfs:${env.BUILD_ID}").push()
}
}
}
}
}
}
```
## 📚 Additional Resources
- [Docker Official Documentation](https://docs.docker.com/)
- [Docker Compose Reference](https://docs.docker.com/compose/)
- [GitHub Actions Documentation](https://docs.github.com/en/actions)
- [Cross-compilation with Rust](https://rust-lang.github.io/rustup/cross-compilation.html)
- [Cross tool documentation](https://github.com/cross-rs/cross)
- [RustFS Configuration Guide](../README.md)

View File

@@ -1,57 +0,0 @@
## Summary
This PR modifies the GitHub Actions workflows to ensure that **version releases never get skipped** during CI/CD execution, addressing the issue where duplicate action detection could skip important release processes.
## Changes Made
### 🔧 Core Modifications
1. **Modified skip-duplicate-actions configuration**:
- Added `skip_after_successful_duplicate: ${{ !startsWith(github.ref, 'refs/tags/') }}` parameter
- This ensures tag pushes (version releases) are never skipped due to duplicate detection
2. **Updated workflow job conditions**:
- **CI Workflow** (`ci.yml`): Modified `test-and-lint` and `e2e-tests` jobs
- **Build Workflow** (`build.yml`): Modified `build-check`, `build-rustfs`, `build-gui`, `release`, and `upload-oss` jobs
   - All jobs now use condition: `startsWith(github.ref, 'refs/tags/') || needs.skip-check.outputs.should_skip != 'true'` (a sketch of the resulting wiring follows below)
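A minimal sketch of how the two pieces fit together, assuming the commonly used `fkirc/skip-duplicate-actions` action (job names and the action version here are illustrative):
```yaml
jobs:
  skip-check:
    runs-on: ubuntu-latest
    outputs:
      should_skip: ${{ steps.skip.outputs.should_skip }}
    steps:
      - id: skip
        uses: fkirc/skip-duplicate-actions@v5
        with:
          # Never treat a tag push (version release) as a skippable duplicate
          skip_after_successful_duplicate: ${{ !startsWith(github.ref, 'refs/tags/') }}

  test-and-lint:
    needs: skip-check
    # Tag pushes always run; other events may be skipped as duplicates
    if: startsWith(github.ref, 'refs/tags/') || needs.skip-check.outputs.should_skip != 'true'
    runs-on: ubuntu-latest
    steps:
      - run: echo "tests, clippy, and builds run here"
```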
### 🎯 Problem Solved
- **Before**: Version releases could be skipped if there were concurrent workflows or duplicate actions
- **After**: Tag pushes always trigger complete CI/CD pipeline execution, ensuring:
- ✅ Full test suite execution
- ✅ Code quality checks (fmt, clippy)
- ✅ Multi-platform builds (Linux, macOS, Windows)
- ✅ GUI builds for releases
- ✅ Release asset creation
- ✅ OSS uploads
### 🚀 Benefits
1. **Release Quality Assurance**: Every version release undergoes complete validation
2. **Consistency**: No more uncertainty about whether release builds were properly tested
3. **Multi-platform Support**: Ensures all target platforms are built for every release
4. **Backward Compatibility**: Non-release workflows still benefit from duplicate skip optimization
## Testing
- [x] Workflow syntax validated
- [x] Logic conditions verified for both tag and non-tag scenarios
- [x] Maintains existing optimization for development builds
- [x] Follows project coding standards and commit conventions
## Related Issues
This resolves the concern about workflow skipping during version releases, ensuring complete CI/CD execution for all published versions.
## Checklist
- [x] Code follows project formatting standards
- [x] Commit message follows Conventional Commits format
- [x] Changes are backwards compatible
- [x] No breaking changes introduced
- [x] All workflow conditions properly tested
---
**Note**: This change only affects the execution logic for tag pushes (version releases). Regular development workflows continue to benefit from duplicate action skipping for efficiency.

View File

@@ -67,7 +67,6 @@ hyper.workspace = true
hyper-util.workspace = true
http.workspace = true
http-body.workspace = true
lazy_static.workspace = true
matchit = { workspace = true }
mime_guess = { workspace = true }
opentelemetry = { workspace = true }
@@ -108,6 +107,9 @@ urlencoding = { workspace = true }
uuid = { workspace = true }
zip = { workspace = true }
[target.'cfg(any(target_os = "macos", target_os = "freebsd", target_os = "netbsd", target_os = "openbsd"))'.dependencies]
sysctl = { workspace = true }
[target.'cfg(target_os = "linux")'.dependencies]
libsystemd.workspace = true

View File

@@ -74,10 +74,14 @@ To get started with RustFS, follow these steps:
2. **Docker Quick Start (Option 2)**
```bash
podman run -d -p 9000:9000 -p 9001:9001 -v /data:/data quay.io/rustfs/rustfs
# Docker Hub (recommended)
docker run -d -p 9000:9000 -v /data:/data rustfs/rustfs:latest
# Alternative using Podman
podman run -d -p 9000:9000 -v /data:/data rustfs/rustfs:latest
```
3. **Access the Console**: Open your web browser and navigate to `http://localhost:9001` to access the RustFS console,
3. **Access the Console**: Open your web browser and navigate to `http://localhost:9000` to access the RustFS console,
the default username and password are `rustfsadmin`.
4. **Create a Bucket**: Use the console to create a new bucket for your objects.
5. **Upload Objects**: You can upload files directly through the console or use S3-compatible APIs to interact with your

View File

@@ -596,6 +596,7 @@ impl Operation for ImportBucketMetadata {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}

View File

@@ -19,6 +19,7 @@ use matchit::Params;
use rustfs_config::notify::{NOTIFY_MQTT_SUB_SYS, NOTIFY_WEBHOOK_SUB_SYS};
use rustfs_notify::EventName;
use rustfs_notify::rules::{BucketNotificationConfig, PatternRules};
use s3s::header::CONTENT_LENGTH;
use s3s::{Body, S3Error, S3ErrorCode, S3Request, S3Response, S3Result, header::CONTENT_TYPE, s3_error};
use serde::{Deserialize, Serialize};
use serde_urlencoded::from_bytes;
@@ -103,6 +104,7 @@ impl Operation for SetNotificationTarget {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -181,6 +183,7 @@ impl Operation for RemoveNotificationTarget {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -226,6 +229,7 @@ impl Operation for SetBucketNotification {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -289,6 +293,7 @@ impl Operation for RemoveBucketNotification {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}

View File

@@ -17,7 +17,11 @@ use matchit::Params;
use rustfs_ecstore::global::get_global_action_cred;
use rustfs_iam::error::{is_err_no_such_group, is_err_no_such_user};
use rustfs_madmin::GroupAddRemove;
use s3s::{Body, S3Error, S3ErrorCode, S3Request, S3Response, S3Result, header::CONTENT_TYPE, s3_error};
use s3s::{
Body, S3Error, S3ErrorCode, S3Request, S3Response, S3Result,
header::{CONTENT_LENGTH, CONTENT_TYPE},
s3_error,
};
use serde::Deserialize;
use serde_urlencoded::from_bytes;
use tracing::warn;
@@ -129,7 +133,7 @@ impl Operation for SetGroupStatus {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -214,7 +218,7 @@ impl Operation for UpdateGroupMembers {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}

View File

@@ -19,7 +19,11 @@ use rustfs_ecstore::global::get_global_action_cred;
use rustfs_iam::error::is_err_no_such_user;
use rustfs_iam::store::MappedPolicy;
use rustfs_policy::policy::Policy;
use s3s::{Body, S3Error, S3ErrorCode, S3Request, S3Response, S3Result, header::CONTENT_TYPE, s3_error};
use s3s::{
Body, S3Error, S3ErrorCode, S3Request, S3Response, S3Result,
header::{CONTENT_LENGTH, CONTENT_TYPE},
s3_error,
};
use serde::Deserialize;
use serde_urlencoded::from_bytes;
use std::collections::HashMap;
@@ -123,7 +127,7 @@ impl Operation for AddCannedPolicy {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -198,7 +202,7 @@ impl Operation for RemoveCannedPolicy {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -284,7 +288,7 @@ impl Operation for SetPolicyForUserOrGroup {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}

View File

@@ -22,7 +22,11 @@ use rustfs_ecstore::{
rebalance::{DiskStat, RebalSaveOpt},
store_api::BucketOptions,
};
use s3s::{Body, S3Request, S3Response, S3Result, header::CONTENT_TYPE, s3_error};
use s3s::{
Body, S3Request, S3Response, S3Result,
header::{CONTENT_LENGTH, CONTENT_TYPE},
s3_error,
};
use serde::{Deserialize, Serialize};
use std::time::Duration;
use time::OffsetDateTime;
@@ -265,7 +269,10 @@ impl Operation for RebalanceStop {
warn!("handle RebalanceStop notification_sys load_rebalance_meta done");
}
Ok(S3Response::new((StatusCode::OK, Body::empty())))
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}

View File

@@ -28,6 +28,7 @@ use rustfs_madmin::{
use rustfs_policy::policy::action::{Action, AdminAction};
use rustfs_policy::policy::{Args, Policy};
use s3s::S3ErrorCode::InvalidRequest;
use s3s::header::CONTENT_LENGTH;
use s3s::{Body, S3Error, S3ErrorCode, S3Request, S3Response, S3Result, header::CONTENT_TYPE, s3_error};
use serde::Deserialize;
use serde_urlencoded::from_bytes;
@@ -306,7 +307,7 @@ impl Operation for UpdateServiceAccount {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -607,7 +608,7 @@ impl Operation for DeleteServiceAccount {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}

View File

@@ -16,7 +16,11 @@
use http::{HeaderMap, StatusCode};
//use iam::get_global_action_cred;
use matchit::Params;
use s3s::{Body, S3Error, S3ErrorCode, S3Request, S3Response, S3Result, header::CONTENT_TYPE, s3_error};
use s3s::{
Body, S3Error, S3ErrorCode, S3Request, S3Response, S3Result,
header::{CONTENT_LENGTH, CONTENT_TYPE},
s3_error,
};
use serde_urlencoded::from_bytes;
use time::OffsetDateTime;
use tracing::{debug, warn};
@@ -169,7 +173,7 @@ impl Operation for AddTier {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -236,7 +240,7 @@ impl Operation for EditTier {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -332,7 +336,7 @@ impl Operation for RemoveTier {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -366,7 +370,7 @@ impl Operation for VerifyTier {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -457,7 +461,7 @@ impl Operation for ClearTier {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}
@@ -636,7 +640,7 @@ impl Operation for PostRestoreObject {
let mut header = HeaderMap::new();
header.insert(CONTENT_TYPE, "application/json".parse().unwrap());
header.insert(CONTENT_LENGTH, "0".parse().unwrap());
Ok(S3Response::with_headers((StatusCode::OK, Body::empty()), header))
}
}*/

Some files were not shown because too many files have changed in this diff.